* ``-DCMAKE_C_COMPILER=xxx`` equal to the name of the C99 `Compiler`_ you wish to use (or the environment variable ``CC``)
* ``-DCMAKE_CXX_COMPILER=xxx`` equal to the name of the C++98 `compiler`_ you wish to use (or the environment variable ``CXX``)
* ``-DGMX_MPI=on`` to build using `MPI support`_ (generally good to combine with `building only mdrun`_)
* ``-DGMX_GPU=CUDA`` to build with NVIDIA `CUDA GPU acceleration`_ enabled.
* ``-DGMX_GPU=OpenCL`` to build with OpenCL_ support enabled.
* ``-DGMX_SIMD=xxx`` to specify the level of `SIMD support`_ of the node on which |Gromacs| will run
* ``-DGMX_BUILD_MDRUN_ONLY=on`` for `building only mdrun`_, e.g. for compute cluster back-end nodes
* ``-DGMX_DOUBLE=on`` to build |Gromacs| in double precision (slower, and not normally useful)
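For example, a sketch combining several of these options (the compiler names, SIMD level, and installation path are illustrative; adapt them to your machine)::

 CC=gcc CXX=g++ cmake .. -DGMX_MPI=on -DGMX_SIMD=AVX2_256 -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs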
|Gromacs| can be compiled for many operating systems and
architectures. These include any distribution of Linux, Mac OS X or
Windows, and architectures including x86, AMD64/x86-64, several
PowerPC including POWER8, ARM v8, and SPARC VIII.
Compiler
^^^^^^^^
|Gromacs| can be compiled on any platform with ANSI C99 and C++17
compilers, and their respective standard C/C++ libraries. Good
performance on an OS and architecture requires choosing a good
compiler. We recommend gcc, because it is free, widely available and
frequently provides the best performance.
You should strive to use the most recent version of your
compiler. Since we require full C++17 support the minimum supported
compiler versions are
* GNU (gcc/libstdc++) 7
* Intel (icc) 19.1
* LLVM (clang/libc++) 5
* Microsoft (MSVC) 2017 15.7
Other compilers may work (Cray, Pathscale, older clang) but do
not offer competitive performance. We recommend against PGI because
the performance with C++ is very bad. You may also need the most
recent version of other compiler toolchain components beside the
compiler itself (e.g. assembler or linker);
these are often shipped by your OS distribution's binutils package.
C++17 support requires adequate support in both the compiler and the
C++ library. The gcc and MSVC compilers include their own standard
libraries and require no further configuration. If your vendor's
compiler also manages the standard library via compiler flags,
no extra configuration should be needed.
On Linux, both the Intel and clang compilers use the libstdc++ which
comes with gcc as the default C++ library. For |Gromacs|, we require
the compiler to support libstdc++ version 7.1 or higher. To select a
particular libstdc++ library, provide the path to g++ with
``-DGMX_GPLUSPLUS_PATH=/path/to/g++``.
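For example, to point the Intel compiler at the libstdc++ from a newer gcc (both paths here are illustrative)::

 cmake .. -DCMAKE_CXX_COMPILER=icpc -DGMX_GPLUSPLUS_PATH=/opt/gcc-7/bin/g++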
On Windows with the Intel compiler, the MSVC standard library is used,
and at least MSVC 2017 15.7 is required. Load the environment variables with
vcvarsall.bat.
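For example, from a Windows command prompt (the Visual Studio installation path varies by edition and version)::

 call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64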
To build with clang and llvm's libcxx standard library, use
``-DCMAKE_CXX_FLAGS=-stdlib=libc++``.
been deprecated for years, it is not supported.
For example, depending on your actual MPI library, use ``cmake
-DMPI_C_COMPILER=mpicc -DGMX_MPI=on``.
CMake
512-wide AVX, including KNL, add ``--enable-avx512`` also.
FFTW will create a fat library with codelets for all different instruction sets,
and pick the fastest supported one at runtime.
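As a sketch, configuring such an FFTW build on x86 might look like this (the flag set is illustrative; enable only the instruction sets your toolchain supports, and ``--enable-float`` builds the single-precision library that |Gromacs| uses by default)::

 ./configure --enable-sse2 --enable-avx --enable-avx2 --enable-avx512 --enable-float
 make -j 4
 make install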
On ARM architectures with SIMD support and IBM Power8 and later, you
definitely want version 3.3.5 or later,
and to compile it with ``--enable-neon`` and ``--enable-vsx``, respectively, for
SIMD support. If you are using a Cray, there is a special modified
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Run-time detection of hardware capabilities can be improved by
  linking with hwloc. By default this is turned off since it might
  not be supported everywhere, but if you have hwloc installed it
  should work by just setting ``-DGMX_HWLOC=ON``.
* Hardware-optimized BLAS and LAPACK libraries are useful
for a few of the |Gromacs| utilities focused on normal modes and
matrix manipulation, but they do not provide any benefits for normal
``-DGMX_USE_LMFIT=none``.
* zlib is used by TNG for compressing some kinds of trajectory data
* Building the |Gromacs| documentation is optional, and requires
  ImageMagick, pdflatex, bibtex, doxygen, python 3.6, sphinx
|EXPECTED_SPHINX_VERSION|, and pygments.
* The |Gromacs| utility programs often write data files in formats
suitable for the Grace plotting tool, but it is straightforward to
::
 cmake .. -DGMX_GPU=CUDA -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs
can be used to build with CUDA GPUs, MPI and install in a custom
location. You can even save that in a shell script to make it even
12. ``IBM_VSX`` Power7, Power8, Power9 and later have this.
13. ``ARM_NEON`` 32-bit ARMv7 with NEON support.
14. ``ARM_NEON_ASIMD`` 64-bit ARMv8 and later.
15. ``ARM_SVE`` 64-bit ARMv8 and later with the Scalable Vector Extensions (SVE).
    The SVE vector length is fixed at CMake configure time. The default vector
    length is automatically detected, and this can be changed via the
    ``GMX_SIMD_ARM_SVE_LENGTH`` CMake variable.
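For example, a sketch of configuring for SVE with an explicit 512-bit vector length (the length shown is illustrative and must match the target hardware)::

 cmake .. -DGMX_SIMD=ARM_SVE -DGMX_SIMD_ARM_SVE_LENGTH=512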
The CMake configure system will check that the compiler you have
chosen can target the architecture you have chosen. mdrun will check
::
 cmake .. -DGMX_GPU=CUDA -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
(or whichever path has your installation). In some cases, you might
need to specify manually which of your C++ compilers should be used,
The GPU acceleration has been tested on AMD64/x86-64 platforms with
Linux, Mac OS X and Windows operating systems, but Linux is the
best-tested and supported of these. Linux running on POWER 8 and ARM v8
CPUs also works well.
Experimental support is available for compiling CUDA code, both for host and
::
 cmake .. -DGMX_GPU=OpenCL
To build with support for Intel integrated GPUs, it is required
to add ``-DGMX_OPENCL_NB_CLUSTER_SIZE=4`` to the cmake command line,
::
 cmake .. -DGMX_GPU=OpenCL -DclFFT_ROOT_DIR=/path/to/your/clFFT -DGMX_EXTERNAL_CLFFT=TRUE
Static linking
~~~~~~~~~~~~~~
configuration.
Often it is possible to ensure portability by choosing the least
common denominator of SIMD support, e.g. SSE2 for x86. In rare cases
of very old x86 machines, ensure that
you use ``cmake -DGMX_USE_RDTSCP=off`` if any of the target CPU
architectures does not support the ``RDTSCP`` instruction. However, we
discourage attempts to use a single |Gromacs| installation when the
execution environment is heterogeneous, such as a mix of AVX and
that a tolerance is just a tiny bit too tight. Check the output files
the script directs you to, and try a different or newer compiler if
the errors appear to be real. If you cannot get it to pass the
regression tests, you might try dropping a line to the
`|Gromacs| users forum <https://gromacs.bioexcel.eu/c/gromacs-user-forum>`__,
but then you should include a detailed description of
your hardware, and the output of ``gmx mdrun -version`` (which contains
valuable diagnostic information in the header).
Testing for MDRUN_ONLY executables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A build with ``-DGMX_BUILD_MDRUN_ONLY`` cannot be tested with
``make check`` from the build tree, because most of the tests
require a full build to run things like ``grompp``. To test such an
mkdir build-normal
cd build-normal
# First, build and install normally to allow full testing of the standalone simulator.
cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
make -j 4
make install
cd ..
mkdir build-mdrun-only
cd build-mdrun-only
# Next, build and install the GMX_BUILD_MDRUN_ONLY version (optional).
cmake .. -DGMX_MPI=ON -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
make -j 4
make install
cd /to/your/unpacked/regressiontests
source /your/installation/prefix/here/bin/GMXRC
./gmxtest.pl all -np 2
Non-standard suffix
~~~~~~~~~~~~~~~~~~~

If your mdrun program has been suffixed in a non-standard way, then
the ``./gmxtest.pl -mdrun`` option will let you specify that name to the
test machinery. You can use ``./gmxtest.pl -double`` to test the
be run. You can use ``./gmxtest.pl -mpirun srun`` if your command to
run an MPI program is called ``srun``.
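For example, a sketch combining these options (the ``gmx_mpi_d`` binary name and the process count are illustrative)::

 ./gmxtest.pl all -mdrun gmx_mpi_d -double -np 4 -mpirun srun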
Running MPI-enabled tests
~~~~~~~~~~~~~~~~~~~~~~~~~

The ``make check`` target also runs integration-style tests that may run
with MPI if ``GMX_MPI=ON`` was set. To make these work with various possible
MPI libraries, you may need to
it works because we have tested it.
Every commit in our git source code repository
is currently tested with a range of configuration options on x86 with
gcc versions 7 and 8,
clang versions 8 and 9,
and
a beta version of oneAPI containing Intel's compiler.
For this testing, we use the Ubuntu 18.04 or 20.04 operating systems.
Other compiler, library, and OS versions are tested less frequently.
For details, you can have a look at the
`continuous integration server used by GROMACS <gitlab>`_,
which uses GitLab runner on a local k8s x86 cluster with NVIDIA and
AMD GPU support.
We test irregularly on ARM v8, Cray, Power8, Power9,
Google Native Client and other environments, and
with other compilers and compiler versions, too.