In Linux, installing software can be done in more than one way. The recommended approach is always to install from your distribution's repositories using the yum or apt tools. These tools contain a lot of logic to check package consistency, resolve dependencies, compare the local version to the one being installed, and so on.
Yum and Apt will be discussed in other pages, but suffice it to say they are the tools you should use. Which one depends on your platform: RPM-based systems (usually derived from Red Hat, the so-called downstream distros) use yum, while Debian-based distributions use apt as the preferred package management subsystem.
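As a quick illustration (the package name nginx here is purely an example), installing a package from the repositories is a single command on either family:

On Red Hat based systems:
#>yum install nginx

On Debian based systems:
#>apt install nginx

Either tool will work out and fetch whatever dependencies the package needs before installing it.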
There are occasions, rare on average but increasingly common the more advanced the setup you are trying to deploy, when you need to compile a later version than the one your repository carries. Stability purists will tell you this is not good practice, but it is a necessary one if you are going after security. This is because repository packages are built (or should be) with a certain amount of testing before a version is updated, so there is a natural lag behind the latest stable version of any given package. This is especially true for packages that are heavily updated (usually due to security updates and bug fixes).
As always, the two most important and valid reasons for running versions later than those in the repositories are:
- Security: a vulnerability becomes known that might compromise the security of the service or system, and a fix has been released and tested.
- Features: new features are required that the older version in the repo does not support, or a feature that was still in beta has now been promoted to production ready.
On Compiling
Many times, and in heaps of "official" documentation, you get the "just use the usual configure, make, make install sequence" to get an application installed, along with a list of options you may pass to configure, usually found in the release notes or similar.
So the steps are: run ./configure, then make, and finally (as root) make install.
As the history of these things goes, configure is the newer utility compared with make, which has been around, to a greater or lesser degree, since the beginning of Unix. Each step has a very distinct purpose.
I am going to explain the second and third steps first, then come back to configure a bit later.
make
It is designed to automate the job of the programmer doing the final compile, reducing the whole build to a single command while also including the checks and conditions required to compile the code under different circumstances.
The make utility has a set of built-in rules, so one only needs to tell it what new things it needs to know to build your particular program. To prepare to use make, you must have a file called the makefile that describes the relationships among the files in your program and provides the commands for updating each file; it will usually be in the src folder. In a program, typically, the executable file is updated from object files, which are in turn made by compiling source files.

Once a suitable makefile exists, each time you change some source files, this simple shell command:

make

suffices to perform all necessary recompilations. The make program uses the makefile database and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the recipes recorded in the database. You can provide command line arguments to make to control which files should be recompiled, or how.
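As a rough sketch of what such a makefile looks like (the names hello, hello.o and hello.c are invented here purely for illustration), it is just a list of targets, the files each target depends on, and the recipe lines, which must start with a tab, that rebuild it:

# link the final program from its object file
hello: hello.o
	cc -o hello hello.o

# compile the source file into an object file
hello.o: hello.c
	cc -c hello.c

# housekeeping target: remove generated files
clean:
	rm -f hello hello.o

With this in place, running make rebuilds hello only when hello.c has changed since the last build, and make clean removes the generated files.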
The make utility is what helped Unix survive its "cross platform issues" from day one! In the olden days there was no other way in Unix to install software than to download the code and compile it, and that was usually no simple task, even for a programmer.
An old Unix joke was to run the command make love, to which make would reply something along the lines of make: don't know how to make love. Stop.
So... the default file for additional rules is the Makefile in the current directory. If you have some source files for a program and there is a Makefile there, it is sometimes worth reading. You will typically find things such as (a sketch of such a file header follows the list):
- The location of the C compiler
- Version numbers of the program and such.
- Settings for the heap size
- Linking object locations
- etc..
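A hand-maintained Makefile header along those lines might look something like this (the paths and values are invented for illustration):

# where the C compiler lives on this machine
CC = /usr/bin/cc
# program version, substituted into the build
VERSION = 1.2.3
# compiler settings
CFLAGS = -O2
# where the linker should look for libraries and objects
LDFLAGS = -L/usr/local/lib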
All this used to be done by editing the Makefile directly. This is where configure comes in. It is a shell script (generally written by GNU Autoconf) that goes off and looks around the system for software, even trying various things to see what works. It then takes its instructions from Makefile.in and builds the Makefile (and possibly some other files) so that everything lines up with the current system.
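To give a flavour of it (a toy fragment; real Autoconf templates are much larger), Makefile.in holds placeholders that configure replaces with the values it detected:

# in Makefile.in, before configure runs
CC = @CC@
prefix = @prefix@

# in the generated Makefile, after configure runs (the actual values differ per system)
CC = gcc
prefix = /usr/local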
Let's see it in practice:
You run
#>./configure
This creates a new Makefile.
Enter
#>make
The make command builds the program from the source code just downloaded, using the makefile created by the configure script.
As root or using sudo, type
#>make install
This again invokes make; make finds the install target in the Makefile and follows its directions to install the program.
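An install target is usually just another recipe in the Makefile, something along these lines (a simplified sketch; the program name hello and the destination /usr/local/bin are only illustrative):

# copy the built program into place with sensible permissions
install: hello
	install -m 755 hello /usr/local/bin/hello

This is also why the step needs root: writing into /usr/local/bin is normally not allowed for ordinary users.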
This is a very simplified explanation but, in most cases, this is what you need to know.
With most programs, there will be a file named INSTALL that contains installation instructions and will fill you in on other considerations. For example, it is common to supply some options to the configure command to change the final location of the executable program. There are also other make targets, such as clean, which removes unneeded files after an install, and, in some cases, test, which allows you to test the software between the make and make install steps.
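Putting it all together, a typical from-source install with a custom location might look like this (--prefix is the usual option for choosing the install location, the path /opt/myapp is just an example, and whether a test target exists depends on the package):

#>./configure --prefix=/opt/myapp
#>make
#>make test
#>make install
#>make clean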