As a note, this tutorial is written for bash. The only places where it really doesn't apply to csh and its derivatives are in the setting of environment variables. Setting environment variables in bash looks like
- export ENVIRONMENT_VARIABLE=some_value
Setting environment variables in csh looks like
- setenv ENVIRONMENT_VARIABLE "some_value"
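If you are not sure which shell you are running, the SHELL environment variable will usually tell you your login shell (the shell in your current terminal can differ if you started a different one by hand):
echo $SHELL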
Before you start building Amber 12 and AmberTools 12 (or AmberTools 13), make sure that you've set your computer up properly, including the necessary prerequisite packages. In particular, make sure you have
- A working Fortran compiler (gfortran, ifort, or pgf90)
- A working C compiler (gcc, icc, or pgcc)
- A working C++ compiler (g++, icpc, or pgCC)
- flex
- X11 development files (for xleap)
- patch (to apply bugfixes)
- csh (C-shell, or some variant, like tcsh, that provides a csh executable in /bin)
- zlib compression libraries (for MTK++ and cpptraj to handle gzip-compressed files)
- bzip2 compression libraries (for cpptraj to handle bzip2-compressed files)
Most of the above prerequisites are checked for during the configure stage, so if you don't know whether you have them, it's not a big deal: configure will complain about anything that's missing.
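If something is missing, the easiest fix is usually your distribution's package manager. As a rough sketch, on a Debian or Ubuntu system the following should cover the list above (package names will differ on other distributions):
sudo apt-get install gcc g++ gfortran flex make patch csh libx11-dev libxt-dev zlib1g-dev libbz2-dev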
Updating with Bug Fixes and Patches
It is impossible to develop a program suite as extensive as Amber and AmberTools that is guaranteed to be devoid of bugs. In reality, because Amber can do so many things, it's impossible to truly test all of its capabilities (and to catch all illegal combinations of input options). Therefore, bug fixes to the source code (analogous to updates for other software like Mac OS X, Windows, etc.) are periodically released as bugs are found and fixed.
The configure script automatically checks for any updates that may be available and asks for permission to apply them if updates exist. We recommend you always say 'yes' to make sure you have the most up-to-date version of Amber available.
You can periodically check if there are updates available using the command
$AMBERHOME/update_amber --check
If you see that updates exist, you can simply repeat the installation procedure to make sure you have updated all parts of your code. If you are confident in what you are doing, you can simply recompile the tools affected by the updates you just applied. If you are at all uncertain, though, just recompile everything.
You should always run this command before reporting a bug to the Amber list in case that bug has already been found and fixed.
For 99% of use-cases, updating is as easy as saying 'yes' to configure. For instances where internet access is blocked or complicated (via an authenticated proxy, for instance), visit this page for advanced instructions on using update_amber: Using update_amber.
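If your machine reaches the internet through a simple (unauthenticated) proxy, one thing that often helps, though it is not specific to update_amber and depends on your network setup, is exporting the standard proxy environment variable before running configure:
export http_proxy=http://proxy.example.com:8080 # hypothetical proxy address; substitute your own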
Basic AmberTools 13 + Amber 12 install (Serial)
Installing AmberTools 13 and Amber 12 is now very easy: it takes only a single step for both. The instructions for building just AmberTools 13 are the same.
tar jxvf AmberTools13.tar.bz2
tar jxvf Amber12.tar.bz2
cd amber12
export AMBERHOME=`pwd`
./configure gnu
# We recommend you say "yes" when asked to apply updates
make install # "make -j # install" works to use # processors to make the build go faster
The above commands will extract the AmberTools and Amber source codes into the amber12/ directory and properly set the AMBERHOME environment variable. Make sure that AMBERHOME is always set correctly. You can add that export to your ~/.bashrc file if you want. (I typically do.)
If you wish to test this installation right now, follow these commands with
make test
Cygwin users
If you are trying to build Amber in Cygwin, there are some additional instructions. Many people (me included) think the easiest way is to use VirtualBox or some other virtual machine software to install a true Linux OS on your machine, and just run Amber on that.
If you want to use Cygwin, however, follow these commands (note the differences to above):
tar jxvf AmberTools13.tar.bz2
tar jxvf Amber12.tar.bz2
cd amber12
export AMBERHOME=`pwd`
./update_amber --apply http://ambermd.org/bugfixes/AmberTools/13.0/cygwin_fix
./configure -cygwin gnu
# We recommend you say "yes" when asked to apply updates
make install # "make -j # install" works to use # processors to make the build go faster
Using a different Python
By default, configure will look for a compatible Python to use for its Python programs (ParmEd and MMPBSA.py, along with their APIs). It will look for python2.7, then python2.6, then python2.5, then python2.4, and finally settle on python if none of the others are available. If you want to use a specific Python, pass the --with-python /path/to/python flag to configure.
Note that Python 3.x is not currently supported by most of the code.
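For example, to point configure at a particular interpreter (the path below is just a placeholder for wherever your preferred Python lives):
./configure --with-python /usr/bin/python2.7 gnu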
Building AmberTools 13 and Amber 12 in parallel
Building in parallel requires a working MPI installation. This is the most difficult part of building Amber in parallel, and if you experience problems in this step it is likely because your MPI was either built or used incorrectly (for Amber). You must use the same compilers to build Amber that were used to build the MPI. If you are not sure which compilers were used to build the MPI, use these commands:
mpif90 -show
mpicc -show
Also make sure that the mpirun and mpiexec executables come from the same installation as mpif90 and mpicc.
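A quick, if rough, way to check this is to verify that all four executables live in the same directory:
which mpif90 mpicc mpirun mpiexec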
NOTE: If you use your package manager to install your MPI, you MUST use the GNU compilers.
The sequence of commands below assumes that you have already set AMBERHOME properly.
cd $AMBERHOME
./configure -mpi gnu # cygwin users need the -cygwin flag again!
make install
If you wish to test this build right now, you can set the environment variable DO_PARALLEL to specify multiple threads (e.g., export DO_PARALLEL='mpirun -np 2' or export DO_PARALLEL='mpirun -np 8') and run the command
make test
Note, I suggest testing with both 2 threads and 8 threads, since some tests require specific numbers of processors. Running with 2 and 8 threads should cover almost every test. There is a section below on testing.
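For example, to run through the tests twice with different thread counts:
export DO_PARALLEL='mpirun -np 2'
make test
export DO_PARALLEL='mpirun -np 8'
make test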
Building CUDA-enabled Amber 12 (pmemd.cuda Amber 12 only!)
I have never tried building pmemd.cuda or pmemd.cuda.MPI on Windows, so there will be nothing about that here. I think it is possible.
I have, however, tried building it on Mac OS X some time ago. I couldn't do it (the code has changed a lot since then, so it might be possible now). However, Apple doesn't use NVIDIA hardware much, and pmemd.cuda will not run on ATI hardware. Thus, if you are one of those few people that have a Mac with qualifying hardware, I'm sorry.
As a result of the above eliminations, this section deals with the Linux operating system only.
pmemd has been highly optimized to run very efficiently on NVIDIA hardware that supports the CUDA programming language (and has hardware double-precision support). For a summary on this capability, see http://ambermd.org/gpus/.
If you need to install the NVIDIA developer driver and CUDA toolkit, see the section dedicated to it below.
cd $AMBERHOME
./configure -cuda gnu
make install
If you wish to test these programs immediately afterwards, you can run the command
make test
which will run the CUDA tests.
Building CUDA-enabled Amber in parallel
This will allow you to run pmemd.cuda.MPI to utilize multiple GPUs for a single calculation. Like the CPU parallel build, it requires a working MPI installation (see the parallel section above).
Then, configure for CUDA and MPI
cd $AMBERHOME
./configure -cuda -mpi gnu
make install
You should now have pmemd.cuda.MPI in $AMBERHOME/bin.
If you wish to test your installation right now, you would need to set DO_PARALLEL to specify multiple GPUs (e.g. export DO_PARALLEL='mpirun -np 2' to run on 2 GPUs), and run the command:
make test
Testing your installation
The Amber developers have made sure that the tests all pass (or at least come close) for a variety of platforms. However, it's impossible to do an exhaustive test given the sheer number of 'compatible' operating systems (even just Linux varieties), compilers, and compiler versions. For that reason, you are strongly encouraged to run the test suite. On a modest desktop, the test suite will take roughly 1 to 1.5 hours.
You can either run the test suites as you compile, or compile everything and run the test suite later. If you are reading this, I will assume that you chose to test the installation after building everything (otherwise you have followed the full advice above when compiling).
The results from all of the tests are found in $AMBERHOME/logs/
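One quick way to scan those logs for problems, assuming the usual 'possible FAILURE' markers that the Amber test scripts print, is something like:
grep -ri "possible failure" $AMBERHOME/logs/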
Testing in serial
To test your installation in serial, go to $AMBERHOME and type make test.serial
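In other words:
cd $AMBERHOME
make test.serial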
Testing in parallel
These instructions are slightly more involved than testing in serial (which could hardly be easier). To test in parallel, you first need to specify how many threads you wish to test with. You do this by setting the environment variable DO_PARALLEL to the command used to launch an MPI program. For example, common MPI implementations use the program mpirun or mpiexec.
For these systems, to test your install in parallel using 2 threads, you would go to $AMBERHOME and use the commands
export DO_PARALLEL='mpirun -np 2'
make test.parallel
You should also test with other thread counts (e.g. try 8 threads), since some tests require 4 (and even up to 8) threads to run.
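For example, to repeat the run with 8 threads:
export DO_PARALLEL='mpirun -np 8'
make test.parallel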
Adding Amber to your Environment
This step assumes that you have already compiled every part of AmberTools and Amber that you want to use. Many programs in AmberTools require that the AMBERHOME environment variable be set, which you can do in your shell resource file (e.g. .cshrc or .bashrc). You also need to add $AMBERHOME/lib to LD_LIBRARY_PATH so that nab programs can find the shared libraries stored there at runtime. Finally, many people (including me) add $AMBERHOME/bin to the default search path to make it easier to run programs.
To do this in bash, put the following in your ~/.bashrc file:
export AMBERHOME=/path/to/amber12
export LD_LIBRARY_PATH=$AMBERHOME/lib:$LD_LIBRARY_PATH
export PATH=$AMBERHOME/bin:$PATH
To do this in csh, put the following in your ~/.cshrc file:
setenv AMBERHOME "/path/to/amber12"
setenv LD_LIBRARY_PATH "$AMBERHOME/lib:$LD_LIBRARY_PATH"
setenv PATH "$AMBERHOME/bin:$PATH"
Tips for Installing NVIDIA Developer Driver and CUDA Toolkit
Before installing pmemd.cuda, you need to install both the CUDA Toolkit and the CUDA developer driver for your hardware. They can be found on NVIDIA's website (http://developer.nvidia.com/cuda-downloads). As of CUDA 5.5, the driver can be installed optionally alongside the toolkit in the same package.
You may have to stop the X-server to install the driver. A quick tip for doing this: press CTRL-ALT-F1 to go to tty1 (this switches you away from the GUI, which resides on tty7). Log in, obtain root privileges, and then kill your X session. This is typically a command like:
bash$ sudo /etc/init.d/gdm stop
The gdm here will vary from OS to OS. On Ubuntu, I think it is lightdm. For KDE, it is kdm (gdm is for GNOME), and on Gentoo it is handled by the xdm init script. It is the GUI desktop manager, so look for something that seems appropriate on your system.
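For instance, on an Ubuntu system with lightdm, the equivalent command would probably look like this (adjust the service name to whatever display manager your system actually runs):
bash$ sudo service lightdm stop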
After you close the X-server, run the installer scripts given to you by NVIDIA. After you are done with the installer script, you can use the command:
bash$ sudo /etc/init.d/gdm start
to restart the GUI desktop manager. Alternatively, just reboot the machine.
Another common requirement for building CUDA programs is that the location of the CUDA libraries needs to be made available to the linker and the runtime loader. This is done via the LD_LIBRARY_PATH variable. Assuming you have set the CUDA_HOME environment variable (as is necessary for Amber), you should put the following line somewhere in your .bashrc file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib:$CUDA_HOME/lib64
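If CUDA_HOME is not already set, you can define it in the same file, above the LD_LIBRARY_PATH line so that the variable expands correctly. The path below assumes the toolkit's default installation location; adjust it to match wherever your installer actually put CUDA:
export CUDA_HOME=/usr/local/cuda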