%files openmpi # All openmpi linked files
|File type |Placement
|Binaries |`+%{_libdir}/%{name}/bin+`
|Libraries |`+%{_libdir}/%{name}/lib+`
|[[PackagingDrafts/Fortran |Fortran modules]] |`+%{_fmoddir}/%{name}+`
|[[Packaging/Python |Python modules]] |`+%{python2_sitearch}/%{name}+`
|Config files |`+%{_sysconfdir}/%{name}-%{_arch}+`
|Man pages |`+%{_mandir}/%{name}-%{_arch}+`
|Include files |`+%{_includedir}/%{name}-%{_arch}+`
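Put together, a `+%files+` list following the table above might look roughly like this (a sketch only; `+%{name}+` is assumed to be the MPI implementation's package name, and the trailing slashes make RPM own the whole directories):

```
%files
%{_libdir}/%{name}/bin/
%{_libdir}/%{name}/lib/
%{_fmoddir}/%{name}/
%{python2_sitearch}/%{name}/
%{_sysconfdir}/%{name}-%{_arch}/
%{_mandir}/%{name}-%{_arch}/
%{_includedir}/%{name}-%{_arch}/
```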
If possible, the packager MUST package versions for each MPI compiler in Fedora (e.g. if something can only be built with mpich and mvapich2, then mvapich1 and openmpi packages do not need to be made).
If the environment module sets compiler flags such as `+CFLAGS+` (thus overriding the ones exported in `+%configure+`), the RPM macro MUST set them back to the Fedora optimization flags `+%{optflags}+` (as in the example above, in which the openmpi-%\{_arch} module sets CFLAGS).
In case the headers are the same regardless of the compilation method and architecture (e.g. 32-bit serial, 64-bit Open MPI, MPICH), they MUST be split into a separate `+-headers+` subpackage (e.g. 'foo-headers'). Fortran modules are architecture specific and as such are placed in the (MPI implementation specific) `+-devel+` package (foo-devel for the serial version and foo-openmpi-devel for the Open MPI version).
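A sketch of such a split in spec form (the package and dependency names are illustrative, not taken from the guideline):

```
%package headers
Summary: Header files for %{name}

%package openmpi-devel
Summary: Development files for %{name} built against Open MPI
Requires: %{name}-headers = %{version}-%{release}
Requires: %{name}-openmpi = %{version}-%{release}
```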
# Install serial version
make -C serial install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
# Install MPICH version
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
# Install OpenMPI version
make -C $MPI_COMPILER install DESTDIR=%{buildroot} INSTALL="install -p" CPPROG="cp -p"
Introduction
Loading and unloading the compiler in spec files is as easy as `+%{_openmpi_load}+` and `+%{_openmpi_unload}+`.
Message Passing Interface
Message Passing Interface (MPI) is an API for parallelizing programs across multiple nodes and has been around since 1994 https://en.wikipedia.org/wiki/Message_Passing_Interface[1]. MPI can also be used for parallelization on SMP machines and is considered very efficient at it (close to 100% scaling on parallelizable code, compared to the ~80% commonly obtained with threads due to suboptimal memory allocation on NUMA machines). Before MPI, nearly every supercomputer manufacturer had its own programming language for writing programs; MPI made porting software easy.
module load mpi
#module load mpi/mvapich2-i386
#module load mpi/openmpi-i386
module load mpi/mpich-i386
MPI implementation specific files MUST be installed in the directories used by the MPI compiler in use (`+$MPI_BIN+`, `+$MPI_LIB+` and so on).
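As a plain-shell sketch of what loading such a module provides: the variable names follow the guideline, but the values below are illustrative for Open MPI on x86_64, not taken from any actual module file.

```shell
# Sketch: environment an MPI module is expected to export after loading.
# All paths here are assumptions for illustration only.
MPI_HOME=/usr/lib64/openmpi                # assumed install prefix
export MPI_BIN=$MPI_HOME/bin               # mpicc, mpirun, ... live here
export MPI_LIB=$MPI_HOME/lib               # implementation's libraries
export MPI_SYSCONFIG=/etc/openmpi-x86_64   # hypothetical config dir
echo "$MPI_BIN"
```

A package built with the module loaded then installs its MPI-specific files into `+$MPI_BIN+` and `+$MPI_LIB+` rather than the serial `+%{_bindir}+` and `+%{_libdir}+`.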
MUST: By default, NO files are placed in `+/etc/ld.so.conf.d+`. If the packager wishes to provide alternatives support, it MUST be placed in a subpackage along with the ld.so.conf.d file, so that alternatives support need not be installed if it is not wanted.
MUST: If the maintainer wishes for the environment module to load automatically by use of a scriptlet in /etc/profile.d or by some other mechanism, this MUST be done in a subpackage.
MUST: The MPI compiler package MUST provide an RPM macro that makes loading and unloading the support easy in spec files, e.g. by placing the following in `+/etc/rpm/macros.openmpi+`
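The macro pair could be defined roughly as follows (a sketch; the exact module name and contents depend on the MPI implementation):

```
%_openmpi_load \
 . /etc/profile.d/modules.sh; \
 module load mpi/openmpi-%{_arch}; \
 export CFLAGS="$CFLAGS %{optflags}";
%_openmpi_unload \
 . /etc/profile.d/modules.sh; \
 module unload mpi/openmpi-%{_arch};
```

A spec file then expands `+%{_openmpi_load}+` before configuring and building the Open MPI flavor, and `+%{_openmpi_unload}+` afterwards.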
Name: foo
Requires: %{name}-common = %{version}-%{release}