Monday 10 December 2012

Initialization Lists in C++

We all know what happens when a class object is created: its constructor is called. And, of course, that's not the end of it; its base class constructor is also called. By default, the constructors invoked are the default ("no-argument") constructors. Moreover, all of these base class constructors run before the class's own constructor body runs.

So, consider the piece of code below,
class A 
{
public: 
     A()
     { 
          std::cout << "A's constructor" << std::endl; 
     } 
}; 

class B: public A
{
public: 
     B()
     { 
          std::cout << "B's constructor" << std::endl; 
     }
}; 

int main() 
{
     B obj1;
     return 0;
}
The object "obj1" is constructed in two stages: first, the A's constructor is invoked and then the B's constructor is invoked. The output of the above program will be to indicate that A's constructor is called first, followed by B's constructor. We all know this.

Why do this? There are a few reasons. First, each class should initialize only the things that belong to it, not things that belong to other classes; so a child class hands off the work of constructing the portion that belongs to the parent class. Second, the child class may depend on the parent's fields when initializing its own fields, so the parent's constructor needs to run before the child class's constructor does. In addition, all of the members that belong to the class should be initialized so that the constructor body can use them if it needs to.

Initialization List:
But what if you have a parent class whose constructor needs to take arguments?
This is where initialization lists come into play. An initialization list immediately follows the constructor's signature, separated from it by a colon (:). The initialization list is the preferred way to initialize the member variables (and base class subobjects) of a class when an instance of it is constructed.

Let's modify the above code to demonstrate an initialization list,

class A 
{
public: 
     A(int x)
     { 
          std::cout << "A's constructor is called with x = "<< x << std::endl;
     } 
}; 

class B: public A
{
public: 
     B() : A(100) //initialization list - construct A part of B
     { 
          std::cout << "B's constructor" << std::endl; 
     }
}; 

This is how you construct and initialize the base class part of a child class object.

If the child class has more than one base class, you list a constructor call for each of them in the initialization list, separated by commas, before the child's constructor body.
Like this,
class B: public A, public C
{
public: 
     B() : A(100), C(200)
     { 
          std::cout << "B's constructor" << std::endl; 
     }
}; 

The calls happen in sequence: before B's constructor body runs, the A part of B is initialized and then the C part of B, following the order in which the base classes are declared, and only after that is B's constructor body executed.

Here are some interesting facts about initialization lists:

> An initialization list can be used to initialize both user-defined data types (like an embedded object of a class) and primitive/built-in data types (like int, char).
Yes, this is possible; look at the example below,

class A 
{
public: 
     A(int x)
     { 
          std::cout << "A's constructor is called with x = "<< x << std::endl;
     } 
}; 

class B: public A
{
private:
     int x;
     int y;
public: 
     B() : A(100), x(25), y(50)
     { 
          std::cout << "B's constructor" << std::endl; 
     }
}; 

The above code initializes the A part of B and then the two member variables 'x' and 'y' with constant values, before B's own constructor body runs.

> An initialization list can be used wherever the constructor is defined, whether inside or outside the class body.

> Initializing member variables in the initialization list is better than assigning to them inside the body of the constructor, because assignment in the body first default-constructs the member and then assigns to it.

> Data members are initialized in the order they are declared in the class, regardless of the order in which they appear in the initialization list.

> It is mandatory to initialize a reference data member in an initialization list, because a reference cannot exist without being initialized.

> It is mandatory to initialize a constant data member in an initialization list; otherwise it would be left holding a junk value, and we cannot assign to it later anywhere else.

> It is mandatory to construct and initialize embedded class objects (and base class objects, in case of inheritance) in an initialization list if they do not themselves provide a zero-argument/default constructor.
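
Putting the const, reference and embedded-object rules together, here is a minimal sketch (the Engine/Car classes and member names are made up purely for illustration); note that the members are initialized in their declaration order, not in the order written in the list:

class Engine
{
public:
     Engine(int power) { }              // no default constructor available
};

class Car
{
private:
     Engine engine;                     // initialized first (declared first)
     const int wheels;                  // const member: must be in the list
     int& ownerAge;                     // reference member: must be in the list
public:
     Car(int& age) : engine(150), wheels(4), ownerAge(age)
     {
          // all members are already initialized when this body runs
     }
};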

Great isn't it !!!!

Tuesday 4 December 2012

Virtual Inheritance and Diamond Problem in C++

Multiple inheritance in C++ is a powerful but tricky tool that often leads to problems if not used carefully. One of the major problems that arises from multiple inheritance is the diamond problem.

The "diamond problem" (sometimes referred to as the "deadly diamond of death") is an ambiguity that arises when two classes B and C inherit from A, and class D inherits from both B and C. If D calls a method defined in A (and does not override the method), and B and C have overridden that method differently, then from which class does it inherit: B, or C?

            Class A
           /       \
      Class B     Class C
           \       /
            Class D

Classic Example:

class Animal {
 public:
  virtual void eat();
};
 
class Mammal : public Animal {
 public:
  virtual void breathe();
};
 
class WingedAnimal : public Animal {
 public:
  virtual void flap();
};
 
// A bat is a winged mammal
class Bat : public Mammal, public WingedAnimal {
};
 
Bat bat;

As declared above, a call to bat.eat() is ambiguous because there are two Animal (indirect) base classes in Bat, so any Bat object has two different Animal base class subobjects. Likewise, an attempt to directly bind a reference to the Animal subobject of a Bat object fails, since the binding is inherently ambiguous:

Bat b;
Animal &a = b; // error: which Animal subobject should a Bat cast into, 
               // a Mammal::Animal or a WingedAnimal::Animal?

Solution is in Virtual Inheritance:

And our re-implemented class looks like this,

class Animal {
 public:
  virtual void eat();
};
 
// Two classes virtually inheriting Animal:
class Mammal : public virtual Animal {
 public:
  virtual void breathe();
};
 
class WingedAnimal : public virtual Animal {
 public:
  virtual void flap();
};
 
// A bat is still a winged mammal
class Bat : public Mammal, public WingedAnimal {
};


The Animal portion of Bat::WingedAnimal is now the same Animal instance as the one used by Bat::Mammal, which is to say that a Bat has only one, shared, Animal instance in its representation and so a call to Bat::eat() is unambiguous. Additionally, a direct cast from Bat to Animal is also unambiguous, now that there exists only one Animal instance which Bat could be converted to.
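
With the virtual inheritance in place, the earlier snippet now compiles; a minimal sketch (eat() still needs a definition somewhere before the program links):

Bat b;
Animal &a = b;   // OK: b contains exactly one Animal subobject
b.eat();         // unambiguous: a single Animal::eat is inherited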

This is implemented by providing Mammal and WingedAnimal with a vtable pointer (or "vpointer") since the memory offset between the beginning of a Mammal and of its Animal part is unknown until runtime. Thus Bat becomes (vpointer, Mammal, vpointer, WingedAnimal, Bat, Animal). There are two vtable pointers, one per inheritance hierarchy that virtually inherits Animal. In this example, one for Mammal and one for WingedAnimal. The object size has therefore increased by two pointers, but now there is only one Animal and no ambiguity. All objects of type Bat will have the same vpointers, but each Bat object will contain its own unique Animal object. If another class inherits from Mammal, such as Squirrel, then the vpointer in the Mammal object in a Squirrel will be different from the vpointer in the Mammal object in a Bat, although they can still be essentially the same in the special case that the Squirrel part of the object has the same size as the Bat part, because then the distance from the Mammal to the Animal part is the same. The vtables are not really the same, but all essential information in them (the distance) is.

Reference Variables in C++

C++ references allow you to create a second name for a variable that you can use to read or modify the original data stored in that variable. What this means is that when you declare a reference and bind it to a variable, you can treat the reference exactly as though it were the original variable for the purpose of accessing and modifying its value, even if the second name (the reference) is located within a different scope. This means, for instance, that if you make your function arguments references, you effectively have a way to change the original data passed into the function. This is quite different from how C++ normally works, where arguments to a function are copied into new variables. It also allows you to dramatically reduce the amount of copying that takes place behind the scenes, both with functions and in other areas of C++, like catch clauses.

Declaration is,
int& ref = <some_int_variable>;

For Example,
int x; 
int& foo = x;  // foo is now a reference to x
foo = 56;      // this sets x to 56 through the reference
std::cout << x << std::endl;  // prints 56
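
And as a sketch of the function-argument case mentioned above (the function name addFive is made up for this example):

void addFive(int& value)   // 'value' is another name for the caller's variable
{
     value += 5;
}

int y = 10;
addFive(y);                // y is now 15; no copy was made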

What are the differences between reference variables and conventional pointers?

> A pointer can be re-assigned any number of times, while a reference cannot be reseated after initialization.
> A pointer can be NULL, while a reference must always refer to an object.
> Taking the address of a reference gives you the address of the object it refers to; you cannot obtain a pointer to the reference itself.
> There is no "reference arithmetic" (but you can take the address of the object referred to by a reference and do pointer arithmetic on it, as in &obj + 5).

End Line: Beware of references to dynamically allocated memory. One problem is that when you use references, it's not clear that the memory backing the reference needs to be deallocated (it usually doesn't, after all). This can be fine when you're passing data into a function, since the function would generally not be responsible for deallocating the memory anyway. 

On the other hand, if you return a reference to dynamically allocated memory, then you're asking for trouble since it won't be clear that there is something that needs to be cleaned up by the function caller.

Monday 3 December 2012

Compile Qt Applications with custom Makefile

Hi folks, this is a quick post on how to compile a Qt application using your own Makefile.
[ I am sure there is no other link which explains this !!! ]

Stuff I have used:
[*] Qt 4.8.1 compiled and installed on my machine [how to do that?], with the Visual C++ 10 compiler.
[*] Windows 7 operating system, 64-bit.
[*] Knowledge of writing Makefiles [ref] [I am not discussing how to write Makefiles in this post]

So, our first step is to have a Qt application. For testing purposes you can use mine  [download-in-zip].
In my test application I have used all the pieces a typical Qt developer needs, like UI components, CPP files, H files and QRC files.
I will be referring to the same test application shared above.

Before I start the discussion, let me put all the code in here first,


/* ----------------------------------------- */
/* File Name: main.cpp        */
/* ----------------------------------------- */
#include <QtGui/QApplication>
#include "mainwindow.h"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    return a.exec();
}

/* ----------------------------------------- */
/* File Name: mainwindow.cpp        */
/* ----------------------------------------- */
  1. #include "mainwindow.h"
  2. #include "ui_mainwindow.h"
  3. #include <QPixmap>

  4. MainWindow::MainWindow(QWidget *parent) :
  5.     QMainWindow(parent),
  6.     ui(new Ui::MainWindow)
  7. {
  8.     ui->setupUi(this);
  9. }

  10. MainWindow::~MainWindow()
  11. {
  12.     delete ui;
  13. }

  14. void MainWindow::on_pushButtonShow_clicked()
  15. {
  16.     QPixmap pixmap(":/Image/butterfly.jpg");
  17.     ui->imageLabel->setPixmap(pixmap);
  18. }

/* ----------------------------------------- */
/* File Name: mainwindow.h        */
/* ----------------------------------------- */
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

private slots:
    void on_pushButtonShow_clicked();

private:
    Ui::MainWindow *ui;
};

#endif // MAINWINDOW_H

And the UI file, mainwindow.ui, has one QLabel named "imageLabel" and one QPushButton named "pushButtonShow".

And the QRC file, "TestRsrc.qrc", has butterfly.jpg under the prefix "/Image".
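
For reference, a Qt resource file with that content would look roughly like this (a sketch; the file in the zip may differ slightly in formatting):

<RCC>
    <qresource prefix="/Image">
        <file>butterfly.jpg</file>
    </qresource>
</RCC>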

Now, let's look into the custom Makefiles.

I have divided the Makefiles into two parts, with respect to their operation.
The first Makefile, under the main source directory, defines the rules for building the object files.
The second Makefile, under the build directory, defines the rules for linking the object files with the Qt shared libraries; in general it resolves the symbols referenced by the objects.
And the Config file under the build directory, as the name indicates, holds some common configuration definitions for the compilation.
Below are the Makefiles and the Config file,

  1. ####### Config file

  2. CXX           = cl

  3. QT_DEFINES  = -DUNICODE -DQT_LARGEFILE_SUPPORT -DQT_DLL -DQT_GUI_LIB -DQT_CORE_LIB -DQT_HAVE_MMX -DQT_HAVE_3DNOW -DQT_HAVE_SSE -DQT_HAVE_MMXEXT -DQT_HAVE_SSE2 -DQT_THREAD_SUPPORT 

  4. DEFINES       = $(QT_DEFINES) -D"WIN32" -D"WINVER=0x0601" -DINTERNAL -DMEASURE_TIME=0 -DMEASURE_POWER=0 

  5. CXXFLAGS      = -nologo -Zc:wchar_t- -GR -EHsc -MDd -W3 -w34100 -w34189 /Od $(DEFINES) -c

  6. QT_PATH  = C:\QtSDK\Desktop\Qt\4.8.0\msvc2010

  7. INCPATH       = -I"$(QT_PATH)\include" -I"$(QT_PATH)\include\QtCore" -I"$(QT_PATH)\include\QtGui" -I"." -I"..\..\Sources" -I"." -I"$(QT_PATH)\mkspecs\win32-msvc2010" 

  8. LNK           = link.exe

  9. LFLAGS        = /NOLOGO /DEBUG /SUBSYSTEM:WINDOWS /INCREMENTAL:NO 

  10. QT_LIBS       = $(QT_PATH)\lib\qtmaind.lib $(QT_PATH)\lib\QtGuid4.lib $(QT_PATH)\lib\QtCored4.lib 

  11. ADD_LIB  = 

  12. MFLAGS  = -D_MSC_VER=1500 -DWIN32

  13. QMOC  = $(QT_PATH)\bin\moc.exe

  14. QRCC  = $(QT_PATH)\bin\rcc.exe

  15. ####### Output format

  16. PATH_SEP  = ^\
  17. SRCDIR  = ..$(PATH_SEP)
  18. OBJDIR    = .$(PATH_SEP)
  19. OBJECT_SUFIX  = obj
  20. DESTDIR       = .$(PATH_SEP)
  21. TARGET        = TestApp
  22. TARGET_SUFIX  = exe
  23. OUTPUT  = TestApp.$(TARGET_SUFIX)

The Makefile which resides under the build directory,

  1. ####### Makefile for Linking apps #######
  2. ####### Reside under Build directory #######
  3. ####### Basically resolves symbol definitions for objects #######

  4. ####### Includes

  5. include Config
  6. include ..\Makefile

  7. ####### Main Build rules

  8. all: $(OBJECTS) $(TARGET)

  9. $(TARGET): $(OBJECTS) 
  10. $(LNK) $(OBJECTS) $(ADD_LIB) $(QT_LIBS) $(LFLAGS) /out:$(OUTPUT)
  11. clean:
  12. del *.$(OBJECT_SUFIX) *.$(TARGET_SUFIX) moc_*.cpp qrc_*.cpp *.pdb *.manifest 

The Makefile which resides under the main source directory,

  1. ####### Makefile for building objects #######
  2. ####### Reside under main source directory #######
  3. ####### Rules and definitions for building objects and other dependencies #######

  4. default:all

  5. ####### Source Files

  6. SOURCES       = $(SRCDIR)main.cpp \
  7. $(SRCDIR)mainwindow.cpp

  8. MOC_SRC  = moc_mainwindow.cpp 
  9. QRC_SRC  = qrc_TestRsrc.cpp
  10. ####### Object Files
  11. CORE_OBJ      = main.$(OBJECT_SUFIX) \
  12. mainwindow.$(OBJECT_SUFIX)

  13. MOC_OBJ  = moc_mainwindow.$(OBJECT_SUFIX)
  14. QRC_OBJ  = qrc_TestRsrc.$(OBJECT_SUFIX)

  15. OBJECTS  = $(MOC_OBJ) $(QRC_OBJ) $(CORE_OBJ)

  16. ####### MOC Source Build rules

  17. moc_mainwindow.cpp: $(SRCDIR)mainwindow.h
  18. $(QMOC) $(DEFINES) $(INCPATH) $(MFLAGS) $(SRCDIR)mainwindow.h -o moc_mainwindow.cpp

  19. ####### QRC Source Build rules

  20. qrc_TestRsrc.cpp: $(SRCDIR)TestRsrc.qrc
  21. $(QRCC) -name TestRsrc $(SRCDIR)TestRsrc.qrc -o qrc_TestRsrc.cpp

  22. ####### Core Object Build rules

  23. main.$(OBJECT_SUFIX): $(SRCDIR)main.cpp
  24. $(CXX) $(SRCDIR)main.cpp $(INCPATH) $(CXXFLAGS) main.$(OBJECT_SUFIX)

  25. mainwindow.$(OBJECT_SUFIX): $(SRCDIR)mainwindow.cpp
  26. $(CXX) $(SRCDIR)mainwindow.cpp $(INCPATH) $(CXXFLAGS) mainwindow.$(OBJECT_SUFIX)
  27. ####### MOC Object Build rules

  28. moc_mainwindow.$(OBJECT_SUFIX): moc_mainwindow.cpp
  29. $(CXX) moc_mainwindow.cpp $(INCPATH) $(CXXFLAGS) moc_mainwindow.$(OBJECT_SUFIX)
  30. ####### QRC Object Build rules

  31. qrc_TestRsrc.$(OBJECT_SUFIX): qrc_TestRsrc.cpp
  32. $(CXX) qrc_TestRsrc.cpp $(INCPATH) $(CXXFLAGS) qrc_TestRsrc.$(OBJECT_SUFIX)

If you have used command-line compilation with VC++, by this time you must have understood the above Config file and Makefiles. I cannot explain command-line compilation and other basics here, but I will try to clarify as much as possible.

If you look into the Config file,
Line 2 (CXX): the compiler we are going to use is CL (the VC++ command-line compiler).
Line 3 (QT_DEFINES): the standard Qt definitions needed for compilation; -D<definition> (or /D<definition>) is the CL flag that defines them.
Line 4 (DEFINES): custom or user definitions. If you have any project-specific pre-compilation definitions, define them here with the -D or /D flag.
Line 5 (CXXFLAGS): the standard C++ and Qt flags used to compile a C++ application with CL.
Line 6 (QT_PATH): tells the build where your Qt installation lives. This is where my Qt was installed; change this path to your own Qt installation path before you compile.
Line 7 (INCPATH): the include search paths for the headers used in your application. If you include files from locations other than the ones mentioned here, add their paths.
Lines 8 and 9 (LNK, LFLAGS): the linker is link.exe, together with some standard linking flags.
Line 10 (QT_LIBS): the Qt libraries (lib files) required to resolve the symbols used in the application. If you reference any other Qt module, for example QtNetwork, add that library here.
Line 11 (ADD_LIB): for further development; it gives you the option to link additional or 3rd-party libraries referenced by your application.
Line 12 (MFLAGS): the flags passed when generating MOC sources.
Lines 13 and 14 (QMOC, QRCC): the paths to the moc and rcc tools, which are required to generate the MOC sources and the sources produced from Qt resource files. These tools are part of your Qt installation.
Lines 16 to 23: the relative paths that say where to find things, plus the output name and format.

The Makefile under the build directory is quite simple.
It includes the definitions and rules from Config and from the other Makefile. It has three rules (all, $(TARGET) and clean): the objects are built first, and then the target exe is linked from the objects and the Qt libraries. These are general Makefile rules I have used and there is not much more to say about them.

Now, let me say a little about the Makefile under the main source directory.
This Makefile contains the basic rules for building objects from the standard CPP files, the MOC-generated files and the QRC-generated files.
Remember these steps for building the objects:
> The MOC objects and QRC objects should be built before you build the CPP objects.
> Only header files (.h files) whose classes contain the Q_OBJECT macro need MOC files generated for them. Other header files are treated as normal C++ headers.
> Before you can build the MOC and QRC objects, you have to run the moc.exe and rcc.exe tools to generate their corresponding CPP files.
So, looking at the Makefile, SOURCES, MOC_SRC and QRC_SRC (lines 6 to 9) list the four CPP source files that need to be compiled into objects.
OBJECTS (line 15) lists the three groups of objects, which are built in the order specified.
The MOC source build rule (lines 17 and 18) uses moc.exe to generate the moc_mainwindow.cpp source file from the corresponding header, "mainwindow.h". To know more about building MOC objects in Qt, refer here.
The QRC source build rule (lines 20 and 21) is the standard rule for generating the QRC source file.
Note the naming convention for the generated MOC and QRC files: moc_<filename>.cpp and qrc_<filename>.cpp, and their corresponding object files become moc_<filename>.obj and qrc_<filename>.obj once built. Follow the same convention for better readability.
At this point we have all four source files ready to be built into objects. Use the standard C++ compile rules to build all the source files into objects, and don't forget to specify the proper paths for your build.

That is all about your custom Makefiles, used to build your Qt application.

If you have downloaded the sample application shared above, try to build it with the Visual C++ command line.
Unzip the file into a convenient location (say c:\testapp).
Open the Config file inside the build directory with any text editor and change the "QT_PATH" definition to the Qt installation path on your machine.
Go to Start > All Programs > Microsoft Visual Studio 2010 > Visual Studio Tools, and open "Visual Studio Command Prompt 2010".
Navigate to c:\testapp and then into the "build" directory.
Run the command "nmake all".
This should build your exe. To run the exe, the corresponding Qt runtime DLLs have to be in the directory the exe runs from, so copy QtCored4.dll and QtGuid4.dll from your Qt installation into the build directory and run the application.
To clean the build, execute the command "nmake clean".
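
For reference, the whole session at the Visual Studio command prompt would look roughly like this (a sketch; it assumes Qt is installed at the QT_PATH given in the Config file, C:\QtSDK\Desktop\Qt\4.8.0\msvc2010):

cd c:\testapp\build
nmake all
copy C:\QtSDK\Desktop\Qt\4.8.0\msvc2010\bin\QtCored4.dll .
copy C:\QtSDK\Desktop\Qt\4.8.0\msvc2010\bin\QtGuid4.dll .
TestApp.exe
rem clean everything up again when you are done
nmake clean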

Hope this is helpful.

Wednesday 24 October 2012

Build Qt source 64-Bit from Visual Studio 2010


This is a quick post on how to build the Qt open-source code as 64-bit, with the Visual Studio 2010 command line.

Stuff I have used for this build,
* Visual Studio 2010 : Visual Studio x64 Win64 Command Prompt
* Qt Source Code version: qt-everywhere-opensource-src-4.8.1
* Active Perl 5.16.1 64-Bit
* Windows 7 64-Bit

Note: If you are building Qt source 4.8.0 or above, you must have Perl installed on your system.

Download the following,
1. Download Qt Source code from >> here <<
I have chosen "qt-everywhere-opensource-src-4.8.1.zip" for my build.

2. Download and install Active Perl 64-bit from >> here << 
I have chosen Active Perl version 5.16.1 - 64-Bit build.

Now, let's start building.

>> Unzip the Qt source code qt-everywhere-opensource-src-4.8.1.zip into the directory where you would like to install Qt.
In my case I have kept the whole source code in the C:\Qt64\4.8.1\ directory.

>> Let's avoid setting the PATH manually for the Qt source; let's have a batch file do that. Create a new file and name it QtEnvBat.bat
Copy and paste below code into the batch file and save it,
@echo off
set QTDIR="C:\Qt64\4.8.1"
set QMAKESPEC="win32-msvc2010"
set PATH=%PATH%;"C:\Qt64\4.8.1\bin"
echo Successfully set all Qt Env variables.

where "C:\Qt64\4.8.1" is a directory where my Qt source code is present.
Place this batch file inside C:\Qt64\4.8.1\ directory.

>> I will assume you have successfully installed Active Perl on your machine.

>> Now open the Visual Studio Command Prompt. Go to Start > All Programs > Microsoft Visual Studio 2010 > Visual Studio Tools > Visual Studio x64 Win64 Command Prompt
Remember, right click and choose "Run as Administrator".

>> From VS command prompt, navigate to C:\Qt64\4.8.1\ with CD command,

cd c:\Qt64\4.8.1\

Now run the batch file you prepared just above.

QtEnvBat.bat

Next, execute the command below,

configure -debug-and-release -opensource -platform win32-msvc2010

Sit back and have a cup of coffee, and some snacks too, because this may take anywhere from a few minutes to a couple of hours !!!

Once the above command succeeds, the Qt source code is configured and ready to build.

Next is the final and most important command,

nmake all

Allow your machine to build the whole Qt source tree. Expect this to take at least 4-5 hours on a typical machine.

Note: some people prefer the "jom" build tool over "nmake" (jom is an nmake-compatible tool that can run jobs in parallel), but I stuck with plain nmake from the VS2010 command prompt.
However, you can download the latest jom from here and give it a try.

Hope this post helped.

updated on - 30/10/2012

Once you are done building, you need to install it. Use the command below,

nmake install

Please note: even though nmake install essentially copies the build output into the release/install directory, it is important that you run this command if you are going to deploy your Qt files onto a client machine.

Thursday 13 September 2012

Firefox OS - new gen stuffs to know

Courtesy: Rawkes [Magical about Firefox OS]


Firefox OS screenshots

Firefox OS is a new mobile operating system developed by Mozilla's Boot to Gecko (B2G) project. It uses a Linux kernel and boots into a Gecko-based runtime engine, which lets users run applications developed entirely using HTML, JavaScript, and other open Web application APIs.

In short, Firefox OS is about taking the technologies behind the Web, like JavaScript, and using them to produce an entire mobile operating system. Just let that sink in for a moment — it's a mobile OS powered by JavaScript!

To do this, a slightly-customised version of Gecko (the engine behind Firefox) has been created that introduces the new JavaScript APIs necessary to create a phone-like experience. This includes things like WebTelephony to make phone calls, WebSMS to send text messages, and the Vibration API to, well, vibrate things.

But Firefox OS is much more than the latest Web technologies being used in crazy ways, as awesome as that is; it's also a combination of many other projects at Mozilla into a single vision — the Web as a platform. Some of these projects include our Open Web Apps initiative and Persona, our solution to identity and logins on the Web (formerly known as BrowserID). It's absolutely fascinating to see so many different projects at Mozilla coalesce into a single, coherent vision.

The two major reasons for "why Firefox OS - another mobile OS again?" are that Firefox OS fills a gap in the mobile market, and that it provides an alternative to the current proprietary and restrictive mobile landscape. So cool right !!

Mozilla's mission since its outset in 1998, first as a software project and later as a foundation and company, has been to provide open technology that challenges a dominant corporate product.

Mozilla is attempting to replicate its success with Firefox, with which it stormed the browser market and showed users that there is an alternative, one that lets them be in control of how they use the Web.

This time, it's the mobile Web that's threatened, not by Microsoft but by Apple and Google, the leading smartphone platforms. With their native apps, locked-down platforms, proprietary software stores, and capricious developer rules, Apple and Google are making Web technology less relevant.

On mobile, one of the main areas that needs improving is application portability…

For all the excitement around mobile apps, they seem a step backward in one respect: they tie users to a particular operating system and devices that support it. The Web, by contrast, evolved so that content is experienced much the same way on any hardware.
Mozilla, maker of the Firefox Web browser, is determined to make the same thing true for smartphones.

What Firefox OS aims to do here is to use the native everywhere-ness of the Web to provide a platform that allows applications to be enjoyed on a mobile device, a desktop computer, a tablet, or anywhere else that has access to a browser.

Read full article here.

Thursday 9 August 2012

PC Virtualisation - compared

Courtesy: ZDNet [Virtualisation suites compared]


Citrix XenServer 6.0.201
Pros:
  • Easy to install
  • Greater support for industry-standard device drivers
  • No extra charge for most high-end functionality
  • Single console for all editions
  • Up to 16 vCPUs and 128GB per VM
  • Support via forums and the XenSource community.
Cons:
  • A Windows application only, not a web console
  • Supported tools are not as advanced as VMware.
Bottom line: XenServer has the most features of any free hypervisor, is easiest to install and manage, has excellent performance and VMs support up to 16 vCPUs.

Microsoft Windows Server 2008 R2 SP1 Hyper-V
Pros:
  • Best integration with Microsoft infrastructure
  • A strong set of enterprise features, due to be improved soon
  • Strong development focus from Microsoft.
Cons:
  • Large cluster management can be more difficult
  • Only four vCPUs and 64GB of RAM per VM.
Bottom line: It's still not as mature as VMware or XenServer, but it has a lot of momentum. Integration in a Windows environment will make this a strong hypervisor for those running mainly Microsoft.

VMware vSphere ESXi 5
Pros:
  • Easy to install and manage from vSphere Client
  • Many advanced features are available
  • Good support via forums
  • Many certified engineers are available in the workforce
  • Tools are available to assist in the migration to virtual.
Cons:
  • Limited in terms of managing the virtual infrastructure
  • Requires upgrade to vCenter server for advanced features
  • Many advanced features are only available with additional plug-ins.
Bottom line: ESXi 5 is the market leader, which shows in the maturity of its product, the polish of its console and the vast number of support tools available. But it comes at a cost.

Oracle VirtualBox 4.1.18
Pros:
  • Free, open source and small 20MB file size
  • Stable with very good usability
  • Can boot from .iso and simplified file sharing
  • Runs on and hosts a very wide variety of OSes.
Cons:
  • Limited USB support
  • Less refined than more established competitors
  • Not all host ports are available under the VM
  • Number of guests limited by PC host
  • Doesn't support drag and drop.
Bottom line: VirtualBox is an inexpensive path for an individual or SMB to explore virtualisation. If your needs extend past VirtualBox running a production server and web server on a pair of VMs on a single server, you'll probably want to use another product.

Read the original post here.

Wednesday 8 August 2012

Essential Commands in Ubuntu [2]


Hi friends, let's discuss some further topics which will help you learn more about the Ubuntu Terminal.

Let's Start with a Text Editor:
I will discuss two text editors here: one is the general text editor common to all Linux flavors, the VI editor, and the other is the sophisticated and very popular Ubuntu text editor, GEdit.
Let's take a practical example and discuss. 
The scenario: let me create a text file My_File.txt at the working location /home/kiran/. Choose any location on your system. So, open your Terminal first,
VI Editor:
The VI editor is famous for being lightweight. The newer version of VI is VIM, with better visuals. 
In the Terminal, I have navigated to /home/kiran/ with cd /home/kiran/. Type vi My_File.txt and press Enter; the VI editor opens My_File.txt in the Terminal.
There are two modes in VI: command mode and text (insert) mode. 
In command mode, the characters you type are interpreted as commands; in text mode, on the other hand, whatever you type is treated as plain text and written into the file. 
On start-up, VI is in command mode. Remember, to execute any command in VI, like insert, save, delete line, delete character, quit and many more, you should be in command mode.
Once started, you are in command mode; to start inserting characters press i on the keyboard. VI switches to text mode and accepts your characters. Type some text. 
Once you are finished, to do any other operation you should go back to command mode; press the ESC key. Now VI is in command mode again. To save whatever you have entered and quit, type :wq 
If you want to force quit (discarding changes), press the ESC key and type :q! 
Notice the colon (:) before these commands. Check some of the most popular VI commands here, and to know more about VI check here.
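Putting it together, the VI session looks roughly like this (a sketch; the keystrokes inside VI are shown in parentheses):
$cd /home/kiran/
$vi My_File.txt
   (press i    -> switch to text/insert mode, then type your text)
   (press ESC  -> go back to command mode)
   (type :wq   -> write the file and quit)
   (type :q!   -> quit without saving, instead of :wq)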
GEdit:
GEdit is my favorite text editor in Ubuntu. It is a UTF-8 compatible text editor for the GNOME desktop. It is a simple yet powerful tool for editing source code, HTML, scripts, and almost any other text. One of the most powerful features of GEdit is syntax highlighting for various programming languages and text markup formats: as you write a keyword of a programming language, its syntax is recognized automatically and highlighted in a separate color! Another beautiful feature of GEdit is that you can open multiple files in different tabs. To start, simply type gedit My_File.txt and press Enter; a new GEdit window will open. If the file My_File.txt already exists, it is opened for editing, and if it doesn't exist it is created and opened for editing. Type anything; to save, press the Save button or use Ctrl+S, and close the file.
Which editor is better to use? The answer is left to you. Some prefer VI and others GEdit. Still, there are advantages to learning the VI commands: if you are ever made to work on a non-Ubuntu Linux, you won't have to search for a text editor there; remember, VI is available in all Linux flavors, so just start using it.

Administrator Login
Let's see how to log in as root in Ubuntu. root is the administrator in Linux, who has control over all operations (for the time being, remember: if an operation is denied to you as a normal user, it means you have to execute that operation as the root user).
So, to log in as the root user in the Ubuntu Terminal, type sudo su and press Enter. The Terminal asks for a password to log in as root; enter the administrator password you gave during the Ubuntu installation. Notice that the password you type is not visible (but the terminal is actually receiving it); just keep typing and, once you finish, hit Enter. If it succeeds, you are now in the Terminal as the root user. Notice the prompt changes to # from the previous $, which indicates that the operations you carry out next will be done as root.
Note: from here on in this blog, or anywhere else, wherever you find $ followed by a command, it means you do not need a root login to execute that command. When you notice #, it means you need root permission to execute the command following it. However, # and $ are NOT part of the command you type.
[*] The word su is commonly expanded as Super User, i.e. root in Linux. The word sudo has a special role in Ubuntu: sudo is a command that allows a user to run another command or program with the security privileges of the super user. As I said, there are operations in the Terminal which you cannot carry out as a normal user and which need root privileges to execute. sudo is the command that gives you temporary permission as root to execute such operations. In the case of sudo su, 'sudo' allows you to execute the su command (which requires root privilege) by giving you temporary root permission.
[*] sudo is an abbreviation of "substitute user and do" the operation; some people also read it as "super user and do".
[*] Let me tell you a simple technique: you need not memorize which commands need the 'sudo' prefix; just try executing the command as a normal user. If the command gives you a permission error, try putting the sudo prefix at the beginning of that command. If the command still doesn't execute, log in as root and try it; this time you should be successful.
[*] In special cases, if you fail to execute the command even as root, remember: either the command has no meaning in Ubuntu (the command may belong to a different Linux flavor), or the proper dependency libraries are not installed on the machine to execute that command (we will learn how to fix this issue in later sections).
[*] When you are logged in as root and type exit, the Terminal first drops you out of the root shell, and then you have to type exit again to close the Terminal.
[*] It is true that Linux commands are derived from the Unix platform, but some commands are specific to particular flavors of Linux: Ubuntu has its own commands which may not work in RedHat, and similarly RedHat may use libraries and packages which are not applicable to Debian-based operating systems.

The content below about library and package installation applies mainly to Ubuntu and other Debian-based systems. I assume that you have a working Internet connection on your Ubuntu system; I will post some methods for connecting to the Internet in Ubuntu in the next post. You should at least have a standard external modem connected to your USB port; a mobile phone used as a modem will also be fine. Broadband will surely do great: just connect and surf. Believe me, if you are thinking about a telephone dial-up connection, to this day, with all kinds of experiments, I myself couldn't get it to work; if you have found a solution, please let me know!

What does 'apt-get' get?
[*] Once you have a working Internet connection on your machine, it's very easy to install and update libraries in Ubuntu. The command sudo apt-get does this for you.
[*] Linux mainly gives you control over the source code of an application, so the user can compile the source and use the resulting executable when necessary. To compile such source code you must (obviously) have the required libraries installed on your machine. 
[*] It's not only the compilation process: much of the work in Linux in turn looks for dependency libraries. To put it simply, you type a command in the Terminal and press Enter to execute it, but that command must be known to Ubuntu and have a meaning; that information is provided by installed programs and libraries. So cd, mkdir, sudo, pwd etc. are all backed by files which hold the definitions needed to execute their tasks. 
[*] If such a file is missing from your machine, or the program you installed is in a path the Terminal doesn't know about, then that particular command results in a 'command not found' error. 
[*] Since Ubuntu belongs to the Debian family, it uses Debian package management. A package manager is itself a set of libraries (or an application, you could say) which helps manage other packages and software in Linux. 
[*] Ubuntu uses the Debian file format for installation. Debian packages are archives of files that end with the extension .deb. These deb files are, loosely speaking, the installers of Ubuntu, much like Setup.exe files in Windows. In RedHat and Fedora Linux it is .rpm files, where rpm stands for RedHat Package Manager. You cannot install rpm files in Ubuntu directly, because Debian platforms don't recognize rpm files as native packages; however, there is a 3rd-party application called alien which helps install rpm files on Debian-based systems.
[*] One more important thing to remember: each library in Linux may in turn depend on other libraries or files, which, collectively, we call dependencies. It is very important to know the dependencies of a library, because Linux requires that all dependencies be present and installed on your machine before you install a package or your own library; otherwise it will not allow you to install it. If library A depends on library B, then B must be installed and ready on your machine before you install library A. One library may depend on several other libraries, all of which you need to take care of. If you have downloaded a deb file, right-click on it and look at its properties; you will find which other packages it depends on.
[*] Remember, each library in Linux has its own release version number. A library may depend on a particular version of another dependency, which you also need to take care of. Sometimes it even happens that Lib-A depends on Lib-B and Lib-B in turn depends on Lib-A; in this case it's a deadlock: which one do you install first? Resolving dependencies is a major issue if you are trying to download deb files on one machine and carry them to your home machine for installation, because a deb file does not carry its dependencies with it; it assumes the dependencies are already installed on your system.

Worried a little? You need not be. There is an easy solution, as easy as running a simple command: sudo apt-get takes over all the installation and dependency burden and works smoothly for you.
[*] apt-get is a powerful command from Ubuntu's Advanced Packaging Tool (APT), performing functions like installing new packages, upgrading existing packages, removing unnecessary packages, updating the package list index, and even, at the extreme, upgrading the entire Ubuntu system. The main advantage of apt-get is its ease of use at the terminal, with just an Internet connection on the system. apt-get connects to the Internet, looks for the packages named in its arguments, downloads them, and installs them automatically in the appropriate locations. apt-get automatically takes care of all the dependencies of the package you are installing: if package A, which you are installing, depends on package B, apt-get automatically looks for package B, installs it before installing A, and then installs A.
So, where does apt-get look for a package? Should you search manually and provide a web location when issuing the command? The answer is that apt-get internally consults a file that holds a list of web addresses for all the packages available to Ubuntu. This list is stored locally on your hard drive, and apt-get refers to it when you ask to install a particular package. Ubuntu has many such repository FTP and HTTP locations all over the world, where stable and newly released packages are added and kept ready for all users. apt-get looks in those locations to install your favorite packages. You can look at that list in the file /etc/apt/sources.list.
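For reference, an entry in /etc/apt/sources.list looks roughly like this (a sketch; the mirror URL and release name will differ on your system):
deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse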
What if the Ubuntu you installed is an older one and the repositories have since been updated with newer packages? How do you install those latest files on your machine? How do you know the package you have installed is the latest version? Clearly it is important to have an up-to-date list of addresses in your sources.list file, isn't it? To update the package index, type sudo apt-get update and press Enter. Doing this refreshes your local machine's package list from the package repositories. Do it at least once a month, or whenever you think a new version of a package you want to install is available. Now let's see the main apt-get commands.
To Install a Package:
Installing a package using apt-get is quite simple. Use
sudo apt-get install package_name
For example, to install the binutils package (which we cover later), type
$sudo apt-get install binutils 
and press Enter. You can see the whole set of operations apt-get takes care of. apt-get may ask for confirmation while installing your package; say yes to it. The next steps are handled by apt-get automatically: it installs all the dependencies of binutils before installing binutils itself.
To Remove a package:
Removing an unnecessary package is just as easy; use
sudo apt-get remove package_name
For example, to remove binutils, type sudo apt-get remove binutils and press Enter. Be careful when you remove a package; it shouldn't break the operating system's operation.
To Update package index:
As we have seen earlier, to update the package index use sudo apt-get update.
To Upgrade a package:
Over time, updated versions of a package may become available from the package repositories (for example, security updates). To upgrade, first update your package index as outlined above, and then type sudo apt-get upgrade package_name. Here package_name is optional: if you wish to upgrade a particular package you can pass its name as the argument; otherwise, be aware, all the packages installed on your system will be upgraded.
Some people get confused between update and upgrade: sudo apt-get update refreshes the local list of available packages from the repositories, whereas sudo apt-get upgrade upgrades your installed packages to their new release versions.
However, if upgrading a package requires installing new dependencies or removing old ones, it will not be upgraded by the sudo apt-get upgrade command. For such an upgrade, it is necessary to use the sudo apt-get dist-upgrade command. After a fairly considerable amount of time, your computer will be upgraded to the new revision.
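For example, a typical maintenance session looks roughly like this (a sketch; binutils is just an illustrative package):
$sudo apt-get update              (refresh the package index from the repositories)
$sudo apt-get install binutils    (install a package and its dependencies)
$sudo apt-get upgrade             (upgrade all installed packages)
$sudo apt-get dist-upgrade        (also handle upgrades that add or remove dependencies)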
For further information about the use of APT, type sudo apt-get help and read the output.
On RedHat and Fedora Linux systems, the RPM package format and the RedHat Package Manager are used instead; the equivalent command-line tool there for installing packages is yum, as in yum install package_name.

Let's conclude here. In the next post I will discuss further steps for installing tar files, zip files and other executables in Ubuntu, and we will also look at some essential packages you must have on your system. Enjoy till then.