Compiling MPI Programs
using the MPICH libraries

 

The MPICH implementation of the MPI Standard version 1.1 has been installed for use on both church and maccs in the /usr/local/mpich-1.2.0/bin directory. Platform-dependent executables that were built during the MPICH installation (such as mpicc and mpirun) are located in /usr/local/mpich-1.2.0/bin/lib/PLATFORM/ch_p4 but may be accessed through /usr/local/mpich-1.2.0/bin on any machine.

Two utilities have been written to help you compile MPI programs.

Makefile template

This template uses pattern rules to append the appropriate architecture name to both the object files and the resulting executable, which is important when cross-compiling. Tips on using the template follow the listing:

# Makefile.template
#	- template to assist with the compilation of MPI programs
# As of December 2000, this template has only been tested on ritchie. If the
# installation directory changes, verify the MPI home directory before using it.
#
# Written by Dave Alpert, June 1997
# Modified from the file generated automatically by configure from Makefile.in 
ALL: default
# Begin MPICH configured options #
PATH        = /local/bin:/usr/local/bin:/usr/local/mpich-1.2.0/bin:/usr/bin:/usr/ucb
SHELL       = /bin/sh
# ARCH      = must be defined as an environment variable (e.g. solaris, sun4, ...)
COMM        = ch_p4
MPIR_HOME   = /usr/local/mpich-1.2.0/bin
CC          = /usr/local/mpich-1.2.0/bin/mpicc
CLINKER     = $(CC)
F77         = /usr/local/mpich-1.2.0/bin/mpif77
FLINKER     = $(F77)
CCC         = /usr/local/mpich-1.2.0/bin/mpiCC
CCLINKER    = $(CCC)
AR          = ar cr
RANLIB      = ranlib
PROFILING   = $(PMPILIB)
OPTFLAGS    = 
MPE_GRAPH   = -DMPE_GRAPHICS
# End MPICH configured options #
CFLAGS    = $(OPTFLAGS) 
CFLAGSMPE = $(CFLAGS) -I$(MPE_DIR) $(MPE_GRAPH)
CCFLAGS   = $(CFLAGS)
#FFLAGS   = '-qdpc=e' 
FFLAGS    = $(OPTFLAGS)

##### User configurable options #####
EXEC      = <your executable name>.$(ARCH)
USRLIBS   = -L<your library path> -l<your libraries>
SOURCES   = <your source files>
### End User configurable options ###
OBJS      = $(SOURCES:%.c=%_$(ARCH).o)

default: $(EXEC)

all: default

%_$(ARCH).o: %.c
	$(CC) -c $(CFLAGS) -o $@ $<

<your executable name>.$(ARCH): $(OBJS)
	$(CLINKER) $(OPTFLAGS) $(USRLIBS) -o $@ $(OBJS)

 

This makefile template has been created to simplify cross-compilation of MPI programs. All terms in angle brackets must be changed to reflect your project. To use this template, simply:

  1. edit the EXEC, SOURCES, and USRLIBS lines appropriately to reflect your project.
  2. change the target of the last dependency rule to the name on the EXEC line.
  3. run make once on each operating system that you would like to build for, each time with "ARCH=platform" as one of the make command line arguments.

e.g. church[109]: gmake ARCH=solaris

Note: platform must be one of the architecture codes recognized by MPICH.
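For example, a filled-in user-configurable section for a hypothetical project built from main.c and worker.c (the file and library names here are illustrative, not part of the template) might read:

```make
##### User configurable options #####
EXEC      = myprog.$(ARCH)
USRLIBS   = -L$(HOME)/lib -lm
SOURCES   = main.c worker.c
### End User configurable options ###
```

The target of the final dependency rule would then be changed to myprog.$(ARCH) to match the EXEC line.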

 

The mpimake command

In order to run an MPI program simultaneously on machines with different operating systems (i.e. church and maccs), you will need to build a separate binary for each system. These binaries must be named filename.architecture, where architecture is one of the architecture codes recognized by MPICH.

The above makefile template creates such files, and also embeds the architecture name in the intermediate object files, allowing you to maintain separate object files for each operating system in the same working directory.

The mpimake command was created to automate cross-compilation with makefiles based on the template given above. The command performs a remote shell on a machine running each operating system platform that you specify, switches to the directory from which it was called, and runs gmake with the ARCH variable set appropriately.

Architecture platforms are indicated by use of the "-T" switch, and once again must comply with the architecture codes recognized by MPICH.

mpimake -T solaris

for example, will remote shell to church and run

gmake ARCH=solaris

in the appropriate directory.

If no platforms are specified, mpimake defaults to run on each platform for which MPICH is configured.

The mpirun command

For all systems, the mpirun command is used to run an MPI program. It is found in the /usr/local/mpich-1.2.0/bin directory. You might want to add it to your path by adding the following line to the .cshrc file in your home directory.

set path=($path /usr/local/mpich-1.2.0/bin/)

Before running the program, check that the other machines are reachable. To do so, type:

rsh MachineName uptime

As of March 2001, wolf1 through wolf50, ritchie, and birkhoff are reachable, so we can execute programs on those workstations, whose OS is Solaris. Note that the file $HOME/.rhosts must include each machine that you want to use, with one entry per line in the form:

wolf1.cas.mcmaster.ca username
wolf30.cas.mcmaster.ca username
...

Use the command

mpirun -arch solaris -np 4 program.solaris

to run the program 'program.solaris' on a Solaris machine with four processes.
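As a quick check of the installation, a minimal MPI test program can be compiled with mpicc and launched as above. This is a standard "hello world" sketch (the file name hello.c is illustrative, not one of the local utilities); it uses only MPI 1.1 calls:

```c
/* hello.c - minimal MPI test program (illustrative).
 * Compile: mpicc -o hello.solaris hello.c
 * Run:     mpirun -arch solaris -np 4 hello.solaris
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);               /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's number */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    MPI_Get_processor_name(name, &len);   /* host running this process */

    printf("Hello from process %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Each of the four processes should print one line identifying its rank and host.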

Multiple architectures may be handled by giving multiple -arch and -np arguments. For example, to run a program on three solaris machines and two sun4 machines, with the local machine being a solaris, use

mpirun -arch solaris -np 3 -arch sun4 -np 2 program.%a

The string '%a' will be replaced with the architecture name. You must specify the architecture with -arch before specifying the number of processes with -np. Also, the first -arch must refer to the local machine.

To specify the group of workstations on which you want to execute the program, use

mpirun -arch solaris -np 3 -arch sun4 -np 2 -machinefile filename program.%a

It is recommended that the machines you specify in the machine file be included in $HOME/.rhosts, and that you execute programs on workstations wolf1 to wolf50, since they have the same hardware and software configuration. If you don't specify a machine file, the system will use the host birkhoff by default.
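A machine file is a plain text file listing one host per line; MPICH's ch_p4 device also accepts a hostname:n form to place several processes on one host. For example, a machine file restricted to a few of the wolf workstations might look like:

```
wolf1.cas.mcmaster.ca
wolf2.cas.mcmaster.ca
wolf3.cas.mcmaster.ca
```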

The complete list of options for mpirun can be viewed by

mpirun -help


Last updated Mar 2001 by Yu Wu
Written by Dave Alpert, Sept 1998