There are many MPI implementations available, such as https://www.open-mpi.org/[Open MPI] (the default MPI compiler in Fedora and the MPI compiler used in RHEL), https://www.mpich.org/[MPICH] (in Fedora and RHEL) and https://mvapich.cse.ohio-state.edu/[MVAPICH1 and MVAPICH2] (in RHEL but not yet in Fedora).

Packaging of MPI compilers

The runtime of MPI compilers (mpirun, the libraries, the manuals etc.) MUST be packaged into `+%{name}+`, and the development headers and libraries into `+%{name}-devel+`.

The module file MUST be installed under `+%{_sysconfdir}/modulefiles/mpi+`. This allows a user with only one MPI implementation installed to load the module with:

module load mpi

The module file MUST have the line:

conflict mpi

to prevent concurrent loading of multiple MPI modules.

The module file MUST prepend `+$MPI_BIN+` to the user's `+PATH+` and prepend `+$MPI_LIB+` to `+LD_LIBRARY_PATH+`. It MUST also set some helper variables (primarily for use in spec files); the files of MPI compilers MUST be installed in the directories these variables point to:
|Variable |Value |Explanation
|`+MPI_BIN+` |`+%{_libdir}/%{name}/bin+` |Binaries compiled against the MPI stack
|`+MPI_SYSCONFIG+` |`+%{_sysconfdir}/%{name}-%{_arch}+` |MPI stack specific configuration files
|`+MPI_FORTRAN_MOD_DIR+` |`+%{_fmoddir}/%{name}+` |MPI stack specific Fortran module directory
|`+MPI_INCLUDE+` |`+%{_includedir}/%{name}-%{_arch}+` |MPI stack specific headers
|`+MPI_LIB+` |`+%{_libdir}/%{name}/lib+` |Libraries compiled against the MPI stack
|`+MPI_MAN+` |`+%{_mandir}/%{name}-%{_arch}+` |MPI stack specific man pages
|`+MPI_PYTHON2_SITEARCH+` |`+%{python2_sitearch}/%{name}+` |MPI stack specific Python 2 modules
|`+MPI_PYTHON3_SITEARCH+` |`+%{python3_sitearch}/%{name}+` |MPI stack specific Python 3 modules
|`+MPI_COMPILER+` |`+%{name}-%{_arch}+` |Name of compiler package, for use in e.g. spec files
|`+MPI_SUFFIX+` |`+_%{name}+` |The suffix used for programs compiled against the MPI stack
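For illustration, a module file for a hypothetical `+openmpi-x86_64+` stack might look like the following sketch (the paths simply expand the values from the table for x86_64; the Python site-arch variables are omitted for brevity):

#%Module 1.0
conflict        mpi
prepend-path    PATH            /usr/lib64/openmpi/bin
prepend-path    LD_LIBRARY_PATH /usr/lib64/openmpi/lib
setenv          MPI_BIN         /usr/lib64/openmpi/bin
setenv          MPI_SYSCONFIG   /etc/openmpi-x86_64
setenv          MPI_FORTRAN_MOD_DIR /usr/lib64/gfortran/modules/openmpi
setenv          MPI_INCLUDE     /usr/include/openmpi-x86_64
setenv          MPI_LIB         /usr/lib64/openmpi/lib
setenv          MPI_MAN         /usr/share/man/openmpi-x86_64
setenv          MPI_COMPILER    openmpi-x86_64
setenv          MPI_SUFFIX      _openmpi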
Each MPI compiler package also provides rpm macros that load and unload its module and set the compilation flags, for use in spec files; for Open MPI these look like:

%_openmpi_load \
. /etc/profile.d/modules.sh; \
module load mpi/openmpi-%{_arch}; \
export CFLAGS="$CFLAGS %{optflags}";
%_openmpi_unload \
. /etc/profile.d/modules.sh; \
module unload mpi/openmpi-%{_arch};
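Once a module is loaded (interactively, or via the macros above), the helper variables are set in the environment; for example, on x86_64 with the Open MPI module the values from the table expand to:

$ module load mpi/openmpi-x86_64
$ echo $MPI_SUFFIX
_openmpi
$ echo $MPI_BIN
/usr/lib64/openmpi/bin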
Packaging of MPI software

Software that supports MPI MUST also be packaged in serial mode [i.e. no MPI], if upstream supports it (for instance: `+foo+`).

The MPI-enabled bits MUST be placed in a subpackage with a suffix denoting the MPI compiler used (for instance: `+foo-openmpi+` for Open MPI [the traditional MPI compiler in Fedora] or `+foo-mpich+` for MPICH). For directory ownership and to guarantee the pickup of the correct MPI runtime, the MPI subpackages MUST require the correct MPI compiler's runtime package.
The binaries MUST be suffixed with `+$MPI_SUFFIX+` (e.g. `+_openmpi+` for Open MPI, `+_mpich+` for MPICH and `+_mvapich2+` for MVAPICH2). This is for two reasons: the serial version of the program can still be run when an MPI module is loaded, and the user is always aware of which version they are running. This does not need to hurt the use of shell scripts:
#!/bin/sh
# Which MPI implementation do we use?
module load mpi/openmpi-x86_64
# Run preprocessor
foo -preprocess < foo.in
# Run calculation
mpirun -np 4 foo${MPI_SUFFIX}
# Run some processing
mpirun -np 4 bar${MPI_SUFFIX} -process
# Collect results
bar -collect
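Because the serial binary keeps its unsuffixed name, both builds remain usable side by side once a module is loaded, e.g.:

$ foo --version             # the serial build, found via the normal PATH
$ module load mpi/openmpi-x86_64
$ foo_openmpi --version     # the Open MPI build, found via $MPI_BIN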
For example, the relevant parts of a spec file for a package `+foo+` with Open MPI and MPICH subpackages:
%package openmpi
BuildRequires: openmpi-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: openmpi
Requires: %{name}-common = %{version}-%{release}

%package mpich
BuildRequires: mpich-devel
# Require explicitly for dir ownership and to guarantee the pickup of the right runtime
Requires: mpich
Requires: %{name}-common = %{version}-%{release}

%package common

%build
# To avoid replicated code define a build macro
# (dobuild uses dconfigure, assumed to be a spec-local variant of configure
# that runs ../configure, since each build happens in a subdirectory)
%define dobuild() \
mkdir $MPI_COMPILER; \
cd $MPI_COMPILER; \
%dconfigure --program-suffix=$MPI_SUFFIX ; \
make %{?_smp_mflags} ; \
cd ..
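The per-MPI builds can then be driven with the load/unload macros; a sketch (assuming `+%{_mpich_load}+` and `+%{_mpich_unload}+` exist analogously to the Open MPI macros shown earlier, and a serial build in its own subdirectory):

# Serial version
mkdir serial
cd serial
%dconfigure
make %{?_smp_mflags}
cd ..

# Open MPI version ($MPI_COMPILER and $MPI_SUFFIX come from the loaded module)
%{_openmpi_load}
%dobuild
%{_openmpi_unload}

# MPICH version
%{_mpich_load}
%dobuild
%{_mpich_unload}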