Message Passing Interface (MPI) is an API for parallelizing programs across multiple nodes and has been around since 1994 (https://en.wikipedia.org/wiki/Message_Passing_Interface[Wikipedia]). MPI can also be used for parallelization on SMP machines and is considered very efficient at that as well (close to 100% scaling on parallelizable code, compared to the roughly 80% commonly obtained with threads due to suboptimal memory allocation on NUMA machines). Before MPI, nearly every supercomputer manufacturer had its own programming language for writing programs; MPI made porting software easy.
There are many MPI implementations available, such as https://www.open-mpi.org/[Open MPI] (the default MPI compiler in Fedora and the MPI compiler used in RHEL), https://www.mpich.org/[MPICH] (in Fedora and RHEL) and https://mvapich.cse.ohio-state.edu/[MVAPICH1 and MVAPICH2] (in RHEL but not yet in Fedora).
As some MPI libraries work better on some hardware than others, and some software works best with a particular MPI library, the selection of the library to use must be made at the user level, on a session-specific basis. Also, people doing high performance computing may want to use more efficient compilers than the default one in Fedora (gcc), so it must be possible to have multiple versions of the MPI compiler, each built with a different compiler, installed at the same time. This must be taken into account when writing spec files.
== Packaging of MPI compilers
The files of MPI compilers MUST be installed in the following directories:
|===
|File type |Placement

|Binaries |`+%{_libdir}/%{name}/bin+`
|Libraries |`+%{_libdir}/%{name}/lib+`
|xref:Fortran.adoc[Fortran modules] |`+%{_fmoddir}/%{name}+`
|xref:Python.adoc[Python modules] |`+%{python2_sitearch}/%{name}+`, `+%{python3_sitearch}/%{name}+`
|Config files |`+%{_sysconfdir}/%{name}-%{_arch}+`
|===
As include files and manual pages are bound to overlap between different MPI implementations, they MUST also be placed outside the normal directories. Because some man pages or include files (either those of the MPI compiler itself or of MPI software installed in the compiler's directory) may be architecture specific (e.g. a definition on a 32-bit arch may differ from that on a 64-bit arch), the following directories MUST be used:
|===
|File type |Placement

|Man pages |`+%{_mandir}/%{name}-%{_arch}+`
|Include files |`+%{_includedir}/%{name}-%{_arch}+`
|===
Architecture independent parts (except headers which go into `+-devel+`) MUST be placed in a `+-common+` subpackage that is `+BuildArch: noarch+`.
The runtime of MPI compilers (mpirun, the libraries, the manuals, etc.) MUST be packaged into `+%{name}+`, and the development headers and libraries into `+%{name}-devel+`.
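For illustration, the split could look roughly like the following in a spec file. This is a minimal sketch, not a requirement: the module file name (`+%{name}-%{_arch}+`), the assignment of the Fortran module directory to `+-devel+`, and the concrete file lists are assumptions that will differ between MPI implementations.

----
# Sketch only: concrete file lists depend on the MPI implementation.

%files
# Runtime: mpirun, shared libraries, configuration, module file, man pages.
%dir %{_libdir}/%{name}
%dir %{_libdir}/%{name}/bin
%dir %{_libdir}/%{name}/lib
%{_libdir}/%{name}/bin/mpirun
%{_libdir}/%{name}/lib/*.so.*
%{_sysconfdir}/%{name}-%{_arch}/
# The module file name below is an assumption; only its location is mandated.
%{_sysconfdir}/modulefiles/mpi/%{name}-%{_arch}
%{_mandir}/%{name}-%{_arch}/

%files devel
# Development headers, Fortran modules and unversioned library symlinks.
%{_includedir}/%{name}-%{_arch}/
%{_fmoddir}/%{name}/
%{_libdir}/%{name}/lib/*.so

%files common
# Architecture-independent files (the subpackage is BuildArch: noarch).
%doc README
----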
As the compiler is installed outside `+PATH+`, one needs to load the relevant variables before being able to use the compiler or run MPI programs. This is done using xref:EnvironmentModules.adoc[environment modules].
The module file MUST be installed under `+%{_sysconfdir}/modulefiles/mpi+`. This allows a user with only one MPI implementation installed to load the module with:
module load mpi
The module file MUST have the line:
conflict mpi
to prevent concurrent loading of multiple mpi modules.
The module file MUST prepend `+$MPI_BIN+` to the user's `+PATH+` and prepend `+$MPI_LIB+` to `+LD_LIBRARY_PATH+`. The module file MUST also set some helper variables (primarily for use in spec files):
|===
|Variable |Value |Explanation

|`+MPI_BIN+` |`+%{_libdir}/%{name}/bin+` |Binaries compiled against the MPI stack
|`+MPI_SYSCONFIG+` |`+%{_sysconfdir}/%{name}-%{_arch}+` |MPI stack specific configuration files
|`+MPI_FORTRAN_MOD_DIR+` |`+%{_fmoddir}/%{name}+` |MPI stack specific Fortran module directory
|`+MPI_INCLUDE+` |`+%{_includedir}/%{name}-%{_arch}+` |MPI stack specific headers
|`+MPI_LIB+` |`+%{_libdir}/%{name}/lib+` |Libraries compiled against the MPI stack
|`+MPI_MAN+` |`+%{_mandir}/%{name}-%{_arch}+` |MPI stack specific man pages
|`+MPI_PYTHON2_SITEARCH+` |`+%{python2_sitearch}/%{name}+` |MPI stack specific Python 2 modules
|`+MPI_PYTHON3_SITEARCH+` |`+%{python3_sitearch}/%{name}+` |MPI stack specific Python 3 modules
|`+MPI_COMPILER+` |`+%{name}-%{_arch}+` |Name of compiler package, for use in e.g. spec files
|`+MPI_SUFFIX+` |`+_%{name}+` |The suffix used for programs compiled against the MPI stack
|===
As these directories may be used by software using the MPI stack, the MPI runtime package MUST own all of them.
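To illustrate the requirements above, here is a minimal sketch of what such a module file could look like. It is an example only: the RPM macros shown would be expanded when the packager generates the file at build time, and the exact layout is up to the packager.

----
#%Module 1.0
#
# Illustrative module file for an MPI stack named %{name}; the macros
# would be expanded when the file is generated during the package build.

conflict                mpi

prepend-path            PATH                  %{_libdir}/%{name}/bin
prepend-path            LD_LIBRARY_PATH       %{_libdir}/%{name}/lib

setenv                  MPI_BIN               %{_libdir}/%{name}/bin
setenv                  MPI_SYSCONFIG         %{_sysconfdir}/%{name}-%{_arch}
setenv                  MPI_FORTRAN_MOD_DIR   %{_fmoddir}/%{name}
setenv                  MPI_INCLUDE           %{_includedir}/%{name}-%{_arch}
setenv                  MPI_LIB               %{_libdir}/%{name}/lib
setenv                  MPI_MAN               %{_mandir}/%{name}-%{_arch}
setenv                  MPI_PYTHON2_SITEARCH  %{python2_sitearch}/%{name}
setenv                  MPI_PYTHON3_SITEARCH  %{python3_sitearch}/%{name}
setenv                  MPI_COMPILER          %{name}-%{_arch}
setenv                  MPI_SUFFIX            _%{name}
----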
MUST: By default, NO files are placed in `+/etc/ld.so.conf.d+`. If the packager wishes to provide alternatives support, it MUST be placed in a subpackage together with the ld.so.conf.d file, so that alternatives support does not need to be installed if it is not wanted.
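As an illustrative sketch only (the fragment's file name is an assumption, not mandated here), such an optional subpackage could ship an ld.so.conf.d fragment whose sole content is the MPI library directory:

----
# Hypothetical %{_sysconfdir}/ld.so.conf.d/%{name}-%{_arch}.conf, shipped
# only in the optional alternatives subpackage (macros expanded at build time):
%{_libdir}/%{name}/lib
----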