In QUANTUM ESPRESSO several MPI parallelization levels are implemented, in which both calculations and data structures are distributed across processors. Processors are organized in a hierarchy of groups, identified by different MPI communicators: images, pools of k-points, task groups for the 3D FFT, and linear-algebra groups for the subspace diagonalization, as illustrated by the following example:
mpirun -np 4096 ./neb.x -ni 8 -nk 2 -nt 4 -nd 144 -i my.input

This executes a NEB calculation on 4096 processors, with 8 images (points in the configuration space, in this case) running at the same time, each distributed across 512 processors. k-points are distributed across 2 pools of 256 processors each, the 3D FFT is performed using 4 task groups (64 processors each, so the 3D real-space grid is cut into 64 slices), and the diagonalization of the subspace Hamiltonian is distributed to a square grid of 144 processors (12x12).
Default values are: -ni 1 -nk 1 -nt 1; nd is set to 1 if ScaLAPACK is not compiled, otherwise it is set to the largest square integer not exceeding half the number of processors of each pool.
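As a further illustration (the input file name pw.in is hypothetical), a smaller pw.x run that relies on the defaults for all levels except the k-point pools could be launched as:

mpirun -np 64 pw.x -nk 4 -i pw.in > pw.out

Here each of the 4 pools contains 16 processors; if ScaLAPACK is compiled in, nd then defaults to 4 (2x2), the largest square integer not exceeding half of 16.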
Since v.4.1, ScaLAPACK can be used to diagonalize block-distributed matrices, yielding better speed-up than the internal algorithms for large (> 1000 x 1000) matrices, when using a large number of processors (> 512). You need to have -D__SCALAPACK added to DFLAGS in make.sys, and LAPACK_LIBS set to something like:

LAPACK_LIBS = -lscalapack -lblacs -lblacsF77init -lblacs -llapack

The repeated -lblacs is not an error: it is needed! configure tries to find a ScaLAPACK library, unless configure -with-scalapack=no is specified. If configure does not find one, ask your system manager for the correct way to link it.
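A hedged sketch of how this might be set up at configure time, assuming (as is typical for autoconf-based scripts) that library variables can be passed on the command line; the actual library names and paths are system-dependent:

./configure LAPACK_LIBS="-lscalapack -lblacs -lblacsF77init -lblacs -llapack"

After configuration, check that -D__SCALAPACK appears in DFLAGS in make.sys; if it does not, add it, together with the libraries above, by hand.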
A further possibility to expand scalability, especially on machines like IBM BlueGene, is to use mixed MPI-OpenMP. The idea is to have one (or more) MPI process(es) per multicore node, with OpenMP parallelization within each node. This option is activated by configure -with-openmp, which adds the preprocessing flag -D__OPENMP and one of the following compiler options (a usage sketch follows the table):
ifort    -openmp
xlf      -qsmp=omp
PGI      -mp
ftn      -mp=nonuma
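As a usage sketch (the process and thread counts are illustrative, and the way processes are bound to nodes depends on your MPI launcher and batch system), the number of OpenMP threads per MPI process is set through the standard OpenMP environment variable and the code is launched as usual:

export OMP_NUM_THREADS=8
mpirun -np 64 pw.x -nk 4 -i pw.in > pw.out

This runs 64 MPI processes, each spawning 8 OpenMP threads, for a total of 512 cores.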
OpenMP parallelization is currently implemented and tested for the following combinations of FFTs and libraries:
internal FFTW copy    requires -D__FFTW
ESSL                  requires -D__ESSL or -D__LINUX_ESSL; link with -lesslsmp
Currently, ESSL (when available) is faster than the internal FFTW copy.
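As an illustration, a make.sys fragment for the ESSL case might contain lines like the ones below; the variable name FFT_LIBS and the remaining contents of DFLAGS are assumptions to be checked against the make.sys generated on your machine:

DFLAGS = -D__OPENMP -D__LINUX_ESSL ...
FFT_LIBS = -lesslsmp

With the internal FFTW copy, -D__FFTW would appear instead and no external FFT library needs to be linked.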
The ``distributed'' format is fast and simple, but the data so produced can be read only by a job running on the same number of processors, with the same type of parallelization, as the job that wrote the data, and only if all files are on a file system visible to all processors (i.e., you cannot use local scratch directories: there is presently no way to ensure that the distribution of processes across processors will follow the same pattern for different jobs).
Currently, CP uses the ``collected'' format; PWscf uses the ``distributed'' format, but has the option to write the final data file in ``collected'' format (input variable wf_collect) so that it can be easily read by CP and by other codes running on a different number of processors.
In addition to the above, other restrictions on file interoperability apply: e.g., CP can read only files produced by PWscf for the k = 0 case.
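A minimal sketch of the relevant fragment of a pw.x input (the prefix and outdir values are hypothetical):

&CONTROL
  calculation = 'scf'
  prefix      = 'mysystem'
  outdir      = './tmp'
  wf_collect  = .true.
/

With wf_collect=.true. the final data file is written in ``collected'' format, so it can be read by jobs running on a different number of processors (and, for the k = 0 case, by CP).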
The data directory is built from the input variables outdir and prefix (the former can also be specified via the environment variable ESPRESSO_TMPDIR): outdir/prefix.save. A copy of the pseudopotential files is also written there. If some processor cannot access the data directory, the pseudopotential files are read instead from the pseudopotential directory specified in the input data. Unpredictable results may follow if those files are not the same as those in the data directory!
IMPORTANT: Avoid I/O to network-mounted disks (via NFS) as much as you can! Ideally, the scratch directory outdir should reside on a modern parallel file system. If none is available, you can use local scratch disks (i.e. each node is physically connected to a disk and writes to it), but you may run into trouble anyway if you later need to access your files, which will be scattered in an unpredictable way across disks residing on different nodes.
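For instance (the path is hypothetical and site-specific), you can point the scratch area to a parallel file system once and for all through the environment variable mentioned above:

export ESPRESSO_TMPDIR=/scratch/$USER/espresso

which is used as the default for outdir when the latter is not set in the input.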
You can use the input variable disk_io to reduce the amount of I/O done by pw.x. Since v.5.1, the default value is disk_io='low', so the code stores wavefunctions in RAM rather than on disk during the calculation. Specify disk_io='medium' only if you have too many k-points and run into trouble with memory; choose disk_io='none' if you do not need to keep the final data files.
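A minimal sketch of the corresponding input fragment (only the relevant line of &CONTROL is shown):

&CONTROL
  disk_io = 'low'   ! default since v.5.1; use 'medium' if memory is tight, 'none' if final data files are not needed
/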
For very large cp.x runs, you may consider using wf_collect=.false., memory='small' and saverho=.false. to reduce I/O to the strict minimum.
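A hedged sketch of the corresponding cp.x input fragment; the placement of these variables in the namelists below is an assumption and should be checked against the INPUT_CP documentation:

&CONTROL
  wf_collect = .false.   ! do not write the final data file in collected format
  memory     = 'small'   ! reduce memory usage (assumed to belong to &CONTROL)
/
&ELECTRONS
  saverho    = .false.   ! do not save the charge density at each step (assumed to belong to &ELECTRONS)
/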