The University of Tennessee

Using MPI on Newton systems

The supported MPI implementation on Newton systems is OpenMPI. The currently installed version is a full MPI-2 implementation that is under active development, provides good support for the Grid Engine resource management (batch-queue) system, and automatically uses our high-performance InfiniBand network. A number of packages installed on Newton systems are compiled to use OpenMPI; these packages are always compiled against the newest installed version.

Configuring your account

To execute or compile code that uses MPI, your environment must be properly configured for the specific version of OpenMPI that you wish to use. The system automatically loads a default version of OpenMPI when you log in, using the Modules utility. You can use the command "module list" to see the currently loaded version of OpenMPI:

[user@newton1 ~]$ module list
Currently Loaded Modulefiles:
  1) intel-compilers/11.1.072   3) openmpi/1.4.2-intel
  2) matlab/R2010a              4) defaults

The currently loaded version is 1.4.2, compiled to use the Intel compilers. If you wish to use this version, no action is needed. If you prefer another version, use the "module avail openmpi" command to list the other available versions and the "module switch" command to change the loaded version. These commands and other uses of the modules system are described in detail in Managing Your Environment With Modules.
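For example, switching versions might look like the session below. The module names shown are illustrative; run "module avail openmpi" to see what is actually installed on Newton.

```shell
# List all installed OpenMPI versions (output varies by system)
module avail openmpi

# Swap the loaded OpenMPI for another version.
# "openmpi/1.4.2-intel" and "openmpi/1.4.2-gcc" are example names;
# use names from your own "module avail" output.
module switch openmpi/1.4.2-intel openmpi/1.4.2-gcc

# Confirm the change
module list
```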

Compiling your code

If you are planning to compile code that links against the MPI libraries, you will probably need to make some small configuration changes to files in the software source distribution. We recommend using the compiler wrapper scripts that are provided with OpenMPI. These scripts will call the compiler with the correct flags to use the MPI libraries that you have previously chosen. The wrapper scripts are mpicc (C compiler), mpicxx (C++ compiler), mpif77 (Fortran 77 compiler), and mpif90 (Fortran 90 compiler).
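As a quick check that the wrappers and your environment are set up correctly, you can compile a minimal MPI program. This is a generic sketch, not Newton-specific code; the file name hello_mpi.c is arbitrary.

```c
/* hello_mpi.c - minimal MPI program to verify the toolchain.
 * Compile with the wrapper:  mpicc -o hello_mpi hello_mpi.c
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime cleanly */
    return 0;
}
```

When run under mpirun across several processes, each rank prints its own line. On Newton, production runs of a program like this should be submitted through the Grid Engine as described below, rather than launched by hand.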

If you have a "configure" script in your software source directory, you can probably set up the software by adding an option similar to "CC=mpicc" to the configure script arguments. E.g.:

./configure --prefix=~/software CC=mpicc FC=mpif90

If your code uses a makefile, or if you compile the code by hand, simply replace the compiler name with the corresponding compiler wrapper. E.g.:

$ mpicc -o program -O3 program.c
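In a makefile, this usually means overriding the compiler variable. A minimal sketch, assuming a single-file program (the file names are hypothetical):

```make
# Makefile sketch: use the OpenMPI wrapper instead of the plain compiler.
CC = mpicc
CFLAGS = -O3

program: program.c
	$(CC) $(CFLAGS) -o program program.c
```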

Running MPI applications

Once you have an application binary, you will need to use the Grid Engine to schedule the job to run simultaneously on multiple nodes in the cluster. Please see Parallel Jobs for detailed information on running MPI jobs through the Grid Engine. In addition to using the Grid Engine to allocate a number of CPUs for your application, you will also need to use the "mpirun" utility to distribute your application to the various nodes where your application will execute. You should do this by creating a batch submit file similar to this:

#$ -N MPIjob
#$ -cwd
#$ -pe openmpi* 32

mpirun application
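Assuming the lines above are saved in a file named submit.sh (a hypothetical name), you would submit the job to the Grid Engine with qsub and monitor it with qstat:

```shell
# Submit the batch script to Grid Engine
qsub submit.sh

# Check the job's status in the queue
qstat
```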

This file uses the "-pe" option to request a 32-processor parallel environment in which to execute. The job calls the mpirun utility with the name of the application (binary file name) that you wish to execute. The mpirun utility runs on the "master" node of your job and queries the Grid Engine to determine how many CPUs were allocated and on which specific nodes it should execute the application; for this reason, you should not provide any options to mpirun. The mpirun utility then starts an ssh session to each allocated node and executes the application there. From that point, it is up to your application and OpenMPI to set up communication between the application processes and start your calculation.

OpenMPI is configured to automatically detect a high-performance network over which to communicate between processes. On Newton systems, this is the InfiniBand fabric. If you are using the supported OpenMPI installation on Newton, no further configuration is needed to use InfiniBand.