3 Getting good performance from :ref:`mdrun <gmx mdrun>`
4 ======================================================
Here we give an overview of the parallelization and acceleration schemes employed by |Gromacs|.
The aim is to provide an understanding of the underlying mechanisms that make |Gromacs| one of the
fastest molecular dynamics packages. The information presented
should help you choose appropriate parallelization options, run configurations,
and acceleration options to achieve optimal simulation performance.
13 The |Gromacs| build system and the :ref:`gmx mdrun` tool have a lot of built-in
14 and configurable intelligence to detect your hardware and make pretty
15 effective use of it. For a lot of casual and serious use of
16 :ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
17 most from your hardware to maximize your scientific quality, read on!
19 Hardware background information
20 -------------------------------
21 Modern computer hardware is complex and heterogeneous, so we need to
22 discuss a little bit of background information and set up some
23 definitions. Experienced HPC users can skip this section.
28 A hardware compute unit that actually executes
29 instructions. There is normally more than one core in a
30 processor, often many more.
33 A special kind of memory local to core(s) that is much faster
34 to access than main memory, kind of like the top of a human's
35 desk, compared to their filing cabinet. There are often
36 several layers of caches associated with a core.
39 A group of cores that share some kind of locality, such as a
40 shared cache. This makes it more efficient to spread
41 computational work over cores within a socket than over cores
in different sockets. Modern processors often have more than
one socket.
46 A group of sockets that share coarser-level locality, such as
47 shared access to the same memory without requiring any network
48 hardware. A normal laptop or desktop computer is a node. A
49 node is often the smallest amount of a large compute cluster
50 that a user can request to use.
53 A stream of instructions for a core to execute. There are many
54 different programming abstractions that create and manage
55 spreading computation over multiple threads, such as OpenMP,
56 pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
57 hardware can map more than one software thread to a core; on
58 Intel x86 processors this is called "hyper-threading", while
59 the more general concept is often called SMT for
60 "simultaneous multi-threading". IBM Power8 can for instance use
61 up to 8 hardware threads per core.
62 This feature can usually be enabled or disabled either in
63 the hardware bios or through a setting in the Linux operating
64 system. |Gromacs| can typically make use of this, for a moderate
65 free performance boost. In most cases it will be
66 enabled by default e.g. on new x86 processors, but in some cases
67 the system administrators might have disabled it. If that is the
68 case, ask if they can re-enable it for you. If you are not sure
69 if it is enabled, check the output of the CPU information in
70 the log file and compare with CPU specifications you find online.
72 thread affinity (pinning)
73 By default, most operating systems allow software threads to migrate
74 between cores (or hardware threads) to help automatically balance
75 workload. However, the performance of :ref:`gmx mdrun` can deteriorate
76 if this is permitted and will degrade dramatically especially when
77 relying on multi-threading within a rank. To avoid this,
78 :ref:`gmx mdrun` will by default
79 set the affinity of its threads to individual cores/hardware threads,
80 unless the user or software environment has already done so
(or when the entire node is not used for the run, i.e. there is potential
for node sharing).
83 Setting thread affinity is sometimes called thread "pinning".
86 The dominant multi-node parallelization-scheme, which provides
87 a standardized language in which programs can be written that
88 work across more than one node.
91 In MPI, a rank is the smallest grouping of hardware used in
92 the multi-node parallelization scheme. That grouping can be
93 controlled by the user, and might correspond to a core, a
94 socket, a node, or a group of nodes. The best choice varies
95 with the hardware, software and compute task. Sometimes an MPI
96 rank is called an MPI process.
99 A graphics processing unit, which is often faster and more
100 efficient than conventional processors for particular kinds of
101 compute workloads. A GPU is always associated with a
particular node, and often a particular socket within that node.
106 A standardized technique supported by many compilers to share
107 a compute workload over multiple cores. Often combined with
108 MPI to achieve hybrid MPI/OpenMP parallelism.
111 A proprietary parallel computing framework and API developed by NVIDIA
112 that allows targeting their accelerator hardware.
113 |Gromacs| uses CUDA for GPU acceleration support with NVIDIA hardware.
116 An open standard-based parallel computing framework that consists
117 of a C99-based compiler and a programming API for targeting heterogeneous
118 and accelerator hardware. |Gromacs| uses OpenCL for GPU acceleration
119 on AMD devices (both GPUs and APUs) and Intel integrated GPUs; NVIDIA
120 hardware is also supported.
123 A type of CPU instruction by which modern CPU cores can execute multiple
124 floating-point instructions in a single cycle.
127 Work distribution by parallelization in |Gromacs|
128 -------------------------------------------------
130 The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
131 when choosing how to make good use of the hardware. For details,
132 see the :ref:`Reference Manual <gmx-reference-manual-rst>`. The most important of these are
134 .. _gmx-domain-decomp:
139 The domain decomposition (DD) algorithm decomposes the
140 (short-ranged) component of the non-bonded interactions into
141 domains that share spatial locality, which permits the use of
142 efficient algorithms. Each domain handles all of the
143 particle-particle (PP) interactions for its members, and is
144 mapped to a single MPI rank. Within a PP rank, OpenMP threads
145 can share the workload, and some work can be offloaded to a
146 GPU. The PP rank also handles any bonded interactions for the
147 members of its domain. A GPU may perform work for more than
148 one PP rank, but it is normally most efficient to use a single
149 PP rank per GPU and for that rank to have thousands of
150 particles. When the work of a PP rank is done on the CPU,
151 :ref:`mdrun <gmx mdrun>` will make extensive use of the SIMD
152 capabilities of the core. There are various
153 :ref:`command-line options <controlling-the-domain-decomposition-algorithm>`
154 to control the behaviour of the DD algorithm.
157 The particle-mesh Ewald (PME) algorithm treats the long-ranged
158 component of the non-bonded interactions (Coulomb and possibly also
159 Lennard-Jones). Either all, or just a subset of ranks may
160 participate in the work for computing the long-ranged component
161 (often inaccurately called simply the "PME"
162 component). Because the algorithm uses a 3D FFT that requires
163 global communication, its parallel efficiency gets worse as more ranks
164 participate, which can mean it is fastest to use just a subset
165 of ranks (e.g. one-quarter to one-half of the ranks). If
166 there are separate PME ranks, then the remaining ranks handle
167 the PP work. Otherwise, all ranks do both PP and PME work.
169 Parallelization schemes
170 -----------------------
172 |Gromacs|, being performance-oriented, has a strong focus on efficient parallelization.
173 There are multiple parallelization schemes available, therefore a simulation can be run on a
174 given hardware with different choices of run configuration.
176 .. _intra-core-parallelization:
178 Intra-core parallelization via SIMD: SSE, AVX, etc.
179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
181 One level of performance improvement available in |Gromacs| is through the use of
``Single Instruction Multiple Data (SIMD)`` instructions. Detailed information
about these can be found under :ref:`SIMD support <gmx-simd-support>` in the installation guide.
186 In |Gromacs|, SIMD instructions are used to parallelize the parts of the code with
187 the highest impact on performance (nonbonded and bonded force calculation,
188 PME and neighbour searching), through the use of hardware specific SIMD kernels.
189 Those form one of the three levels of non-bonded kernels that are available: reference or generic
190 kernels (slow but useful for producing reference values for testing),
191 optimized plain-C kernels (can be used cross-platform but still slow)
192 and SIMD intrinsics accelerated kernels.
194 The SIMD intrinsic code is compiled by the compiler.
195 Technically, it is possible to compile different levels of acceleration into one binary,
196 but this is difficult to manage with acceleration in many parts of the code.
197 Thus, you need to configure and compile |Gromacs| for the SIMD capabilities of the target CPU.
198 By default, the build system will detect the highest supported
acceleration of the host where the compilation is carried out. When cross-compiling for
a machine with a different highest SIMD instruction set, the target acceleration
can be set with the ``-DGMX_SIMD`` CMake option.
To use a single
installation on multiple different machines, it is convenient to compile the analysis tools with
204 the lowest common SIMD instruction set (as these rely little on SIMD acceleration), but for best
performance :ref:`mdrun <gmx mdrun>` should be compiled separately with the
206 highest (latest) ``native`` SIMD instruction set of the target architecture (supported by |Gromacs|).
208 Recent Intel CPU architectures bring tradeoffs between the maximum clock frequency of the
CPU (i.e. its speed) and the width of the SIMD instructions it executes (i.e. its throughput
210 at a given speed). In particular, the Intel ``Skylake`` and ``Cascade Lake`` processors
211 (e.g. Xeon SP Gold/Platinum), can offer better throughput when using narrower SIMD because
212 of the better clock frequency available. Consider building :ref:`mdrun <gmx mdrun>`
213 configured with ``GMX_SIMD=AVX2_256`` instead of ``GMX_SIMD=AVX512`` for better
214 performance in GPU accelerated or highly parallel MPI runs.
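As a sketch (the build directory layout and the chosen SIMD level are illustrative and must
match what your target CPU supports), such a build could be configured with::

    # target a specific SIMD level explicitly instead of the build host's highest one
    cmake .. -DGMX_SIMD=AVX2_256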
216 Process(-or) level parallelization via OpenMP
217 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
219 |Gromacs| :ref:`mdrun <gmx mdrun>` supports OpenMP multithreading for all parts
220 of the code. OpenMP is enabled by default and
221 can be turned on/off at configure time with the ``GMX_OPENMP`` CMake variable
222 and at run-time with the ``-ntomp`` option (or the ``OMP_NUM_THREADS`` environment variable).
223 The OpenMP implementation is quite efficient and scales well for up to 12-24 threads on
224 Intel and 6-8 threads on AMD CPUs.
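For example (the thread count is illustrative and should match your hardware)::

    # run with 8 OpenMP threads per rank
    gmx mdrun -ntomp 8

    # alternatively, control the thread count through the environment
    OMP_NUM_THREADS=8 gmx mdrun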
226 Node level parallelization via GPU offloading and thread-MPI
227 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
229 Multithreading with thread-MPI
230 ..............................
232 The thread-MPI library implements a subset of the MPI 1.1 specification,
233 based on the system threading support. Both POSIX pthreads and Windows threads are supported,
234 thus providing great portability to most UNIX/Linux and Windows operating systems.
235 Acting as a drop-in replacement for MPI, thread-MPI enables compiling and running :ref:`mdrun <gmx mdrun>`
236 on a single machine (i.e. not across a network) without MPI. Additionally, it not only provides a
237 convenient way to use computers with multicore CPU(s), but thread-MPI does in some
238 cases make :ref:`mdrun <gmx mdrun>` run slightly faster than with MPI.
240 Thread-MPI is included in the |Gromacs| source and it is the default parallelization since
241 version 4.5, practically rendering the serial :ref:`mdrun <gmx mdrun>` deprecated.
242 Compilation with thread-MPI is controlled by the ``GMX_THREAD_MPI`` CMake variable.
244 Thread-MPI is compatible with most :ref:`mdrun <gmx mdrun>` features and parallelization schemes,
including OpenMP and GPUs; it is not compatible with MPI or multi-simulation runs.
247 By default, the thread-MPI mdrun will use all available cores in the machine by starting
248 an appropriate number of ranks or OpenMP threads to occupy all of them. The number of
249 ranks can be controlled using the
250 ``-nt`` and ``-ntmpi`` options. ``-nt`` represents the total number of threads
to be used (which can be a mix of thread-MPI and OpenMP threads).
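For example (thread and rank counts are illustrative)::

    # let mdrun split 16 threads into thread-MPI ranks and OpenMP threads automatically
    gmx mdrun -nt 16

    # or request the split explicitly: 4 thread-MPI ranks with 4 OpenMP threads each
    gmx mdrun -ntmpi 4 -ntomp 4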
253 Hybrid/heterogeneous acceleration
254 .................................
256 Hybrid acceleration means distributing compute work between available CPUs and GPUs
257 to improve simulation performance. New non-bonded algorithms
258 have been developed with the aim of efficient acceleration both on CPUs and GPUs.
The most compute-intensive parts of simulations (the non-bonded force calculation, as well
as possibly the PME, bonded force calculation, and the update and constraints) can be
offloaded to GPUs and carried out simultaneously with the remaining CPU work.
Native GPU acceleration is supported for the most commonly used algorithms in |Gromacs|.
265 For more information about the GPU kernels, please see the :ref:`Installation guide <gmx-gpu-support>`.
267 The native GPU acceleration can be turned on or off, either at run-time using the
268 :ref:`mdrun <gmx mdrun>` ``-nb`` option, or at configuration time using the ``GMX_GPU`` CMake variable.
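For example, the offload of the short-range non-bonded work can be forced on or off at
run time (assuming a GPU-enabled build)::

    # offload short-range non-bonded interactions to a compatible GPU
    gmx mdrun -nb gpu

    # keep the short-range non-bonded work on the CPU
    gmx mdrun -nb cpu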
270 To efficiently use all compute resource available, CPU and GPU computation is done simultaneously.
271 Overlapping with the OpenMP multithreaded bonded force and PME long-range electrostatic calculations
272 on the CPU, non-bonded forces are calculated on the GPU. Multiple GPUs, both in a single node as
273 well as across multiple nodes, are supported using domain-decomposition. A single GPU is assigned
to the non-bonded workload of a domain; therefore, the number of GPUs used has to match the number
of MPI processes (or thread-MPI threads) the simulation is started with. The available
CPU cores are partitioned among the processes (or thread-MPI threads), and each set of cores
together with its GPU does the calculations for its respective domain.
279 With PME electrostatics, :ref:`mdrun <gmx mdrun>` supports automated CPU-GPU load-balancing by
280 shifting workload from the PME mesh calculations, done on the CPU, to the particle-particle
281 non-bonded calculations, done on the GPU. At startup a few iterations of tuning are executed
282 during the first 100 to 1000 MD steps. These iterations involve scaling the electrostatics cut-off
283 and PME grid spacing to determine the value that gives optimal CPU-GPU load balance. The cut-off
value provided with the :mdp:`rcoulomb` :ref:`mdp` option (equal to :mdp:`rvdw` in the input) represents the minimum
electrostatics cut-off the tuning starts with, and therefore should be chosen as small as
286 possible (but still reasonable for the physics simulated). The Lennard-Jones cut-off ``rvdw``
287 is kept fixed. We don't allow scaling to shorter cut-off as we don't want to change ``rvdw``
288 and there would be no performance gain.
290 While the automated CPU-GPU load balancing always attempts to find the optimal cut-off setting,
291 it might not always be possible to balance CPU and GPU workload. This happens when the CPU threads
finish calculating the bonded forces and PME faster than the GPU finishes the non-bonded force calculation,
293 even with the shortest possible cut-off. In such cases the CPU will wait for the GPU and this
time will show up as ``Wait GPU local`` in the cycle and timing summary table at the end
of the log file.
297 Parallelization over multiple nodes via MPI
298 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
300 At the heart of the MPI parallelization in |Gromacs| is the neutral-territory
301 :ref:`domain decomposition <gmx-domain-decomp>` with dynamic load balancing.
302 To parallelize simulations across multiple machines (e.g. nodes of a cluster)
303 :ref:`mdrun <gmx mdrun>` needs to be compiled with MPI which can be enabled using the ``GMX_MPI`` CMake variable.
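A minimal sketch of such a build and launch (build directory, rank count and the MPI
launcher invocation are illustrative and depend on your cluster)::

    # configure an MPI-enabled build; the resulting binary is gmx_mpi
    cmake .. -DGMX_MPI=ON

    # launch across nodes through the MPI runtime
    mpirun -np 32 gmx_mpi mdrun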
305 .. _controlling-the-domain-decomposition-algorithm:
307 Controlling the domain decomposition algorithm
308 ..............................................
This section lists options that affect how the domain
decomposition algorithm distributes the workload over the available
parallel hardware; an example combining them is sketched after the list.

``-rdd``
315 Can be used to set the required maximum distance for inter
316 charge-group bonded interactions. Communication for two-body
317 bonded interactions below the non-bonded cut-off distance always
318 comes for free with the non-bonded communication. Particles beyond
319 the non-bonded cut-off are only communicated when they have
320 missing bonded interactions; this means that the extra cost is
321 minor and nearly independent of the value of ``-rdd``. With dynamic
322 load balancing, option ``-rdd`` also sets the lower limit for the
323 domain decomposition cell sizes. By default ``-rdd`` is determined
324 by :ref:`gmx mdrun` based on the initial coordinates. The chosen value will
325 be a balance between interaction range and communication cost.

``-ddcheck``
On by default. When inter charge-group bonded interactions are
329 beyond the bonded cut-off distance, :ref:`gmx mdrun` terminates with an
330 error message. For pair interactions and tabulated bonds that do
331 not generate exclusions, this check can be turned off with the
332 option ``-noddcheck``.

``-rcon``
When constraints are present, option ``-rcon`` influences
336 the cell size limit as well.
337 Particles connected by NC constraints, where NC is the LINCS order
plus 1, should not be beyond the smallest cell size. An error
339 message is generated when this happens, and the user should change
340 the decomposition or decrease the LINCS order and increase the
341 number of LINCS iterations. By default :ref:`gmx mdrun` estimates the
342 minimum cell size required for P-LINCS in a conservative
343 fashion. For high parallelization, it can be useful to set the
344 distance required for P-LINCS with ``-rcon``.

``-dds``
Sets the minimum allowed x, y and/or z scaling of the cells with
348 dynamic load balancing. :ref:`gmx mdrun` will ensure that the cells can
349 scale down by at least this factor. This option is used for the
350 automated spatial decomposition (when not using ``-dd``) as well as
351 for determining the number of grid pulses, which in turn sets the
352 minimum allowed cell size. Under certain circumstances the value
353 of ``-dds`` might need to be adjusted to account for high or low
354 spatial inhomogeneity of the system.
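As an example of combining these options (all values are illustrative and should be
chosen based on the actual system and the log file diagnostics)::

    # require 1.4 nm for bonded communication, 0.8 nm for P-LINCS, and allow
    # cells to scale down to 0.9 of their nominal size under dynamic load balancing
    gmx mdrun -rdd 1.4 -rcon 0.8 -dds 0.9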
358 Multi-level parallelization: MPI and OpenMP
359 ...........................................
361 The multi-core trend in CPU development substantiates the need for multi-level parallelization.
362 Current multiprocessor machines can have 2-4 CPUs with a core count as high as 64. As the memory
363 and cache subsystem is lagging more and more behind the multicore evolution, this emphasizes
364 non-uniform memory access (NUMA) effects, which can become a performance bottleneck. At the same
365 time, all cores share a network interface. In a purely MPI-parallel scheme, all MPI processes
366 use the same network interface, and although MPI intra-node communication is generally efficient,
367 communication between nodes can become a limiting factor to parallelization. This is especially
368 pronounced in the case of highly parallel simulations with PME (which is very communication
intensive) and with "fat" nodes connected by a slow network. Multi-level parallelism aims
370 to address the NUMA and communication related issues by employing efficient
371 intra-node parallelism, typically multithreading.
373 Combining OpenMP with MPI creates an additional overhead
374 especially when running separate multi-threaded PME ranks. Depending on the architecture,
input system size, as well as other factors, MPI+OpenMP runs can be as fast as or faster than pure MPI runs
already at a small number of processes (e.g. multi-processor Intel Westmere or Sandy Bridge),
377 but can also be considerably slower (e.g. multi-processor AMD Interlagos machines). However,
378 there is a more pronounced benefit of multi-level parallelization in highly parallel runs.
383 On CPU ranks, particle-particle (PP) and PME calculations are done in the same process one after
384 another. As PME requires all-to-all global communication, this is most of the time the limiting
385 factor to scaling on a large number of cores. By designating a subset of ranks for PME
386 calculations only, performance of parallel runs can be greatly improved.
OpenMP multithreading in PME ranks is also possible.
Using multi-threading in PME can improve performance at high
390 parallelization. The reason for this is that with N>1 threads the number of processes
391 communicating, and therefore the number of messages, is reduced by a factor of N.
392 But note that modern communication networks can process several messages simultaneously,
393 such that it could be advantageous to have more processes communicating.
Separate PME ranks are not used at low parallelization; the switch at higher parallelization
happens automatically (at > 16 processes). The number of PME ranks is estimated by mdrun.
397 If the PME load is higher than the PP load, mdrun will automatically balance the load, but
398 this leads to additional (non-bonded) calculations. This avoids the idling of a large fraction
399 of the ranks; usually 3/4 of the ranks are PP ranks. But to ensure the best absolute performance
400 of highly parallel runs, it is advisable to tweak this number which is automated by
401 the :ref:`tune_pme <gmx tune_pme>` tool.
403 The number of PME ranks can be set manually on the :ref:`mdrun <gmx mdrun>` command line using the ``-npme``
404 option, the number of PME threads can be specified on the command line with ``-ntomp_pme`` or
405 alternatively using the ``GMX_PME_NUM_THREADS`` environment variable. The latter is especially
useful when running on compute nodes with different numbers of cores, as it enables
setting a different number of PME threads on different nodes.
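For example (rank and thread counts are illustrative and should be tuned, e.g. with
:ref:`gmx tune_pme`)::

    # 16 ranks in total, 4 of them dedicated PME ranks with 2 OpenMP threads each
    mpirun -np 16 gmx_mpi mdrun -npme 4 -ntomp_pme 2

    # alternatively, set the PME thread count per node through the environment
    export GMX_PME_NUM_THREADS=2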
409 Running :ref:`mdrun <gmx mdrun>` within a single node
410 -----------------------------------------------------
412 :ref:`gmx mdrun` can be configured and compiled in several different ways that
413 are efficient to use within a single :term:`node`. The default configuration
414 using a suitable compiler will deploy a multi-level hybrid parallelism
415 that uses CUDA, OpenMP and the threading platform native to the
416 hardware. For programming convenience, in |Gromacs|, those native
417 threads are used to implement on a single node the same MPI scheme as
418 would be used between nodes, but much more efficient; this is called
419 thread-MPI. From a user's perspective, real MPI and thread-MPI look
420 almost the same, and |Gromacs| refers to MPI ranks to mean either kind,
421 except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
422 a single node, but runs more slowly than the thread-MPI version.
424 By default, :ref:`gmx mdrun` will inspect the hardware available at run time
425 and do its best to make fairly efficient use of the whole node. The
426 log file, stdout and stderr are used to print diagnostics that
427 inform the user about the choices made and possible consequences.
A number of command-line parameters are available to modify the default behavior.

``-nt``
The total number of threads to use. The default, 0, will start as
many threads as available cores. Whether the threads are
thread-MPI ranks and/or OpenMP threads within such ranks depends on other settings.

``-ntmpi``
The total number of thread-MPI ranks to use. The default, 0,
will start one rank per GPU (if present), and otherwise one rank per core.

``-ntomp``
The total number of OpenMP threads per rank to start. The
445 default, 0, will start one thread on each available core.
446 Alternatively, :ref:`mdrun <gmx mdrun>` will honor the appropriate system
447 environment variable (e.g. ``OMP_NUM_THREADS``) if set.
448 Note that the maximum number of OpenMP threads (per rank) is,
449 for efficiency reasons, limited to 64. While it is rarely beneficial to use
450 a number of threads higher than this, the GMX_OPENMP_MAX_THREADS CMake variable
451 can be used to increase the limit.

``-npme``
The total number of ranks to dedicate to the long-ranged
455 component of PME, if used. The default, -1, will dedicate ranks
456 only if the total number of threads is at least 12, and will use
457 around a quarter of the ranks for the long-ranged component.

``-ntomp_pme``
When using PME with separate PME ranks,
461 the total number of OpenMP threads per separate PME rank.
462 The default, 0, copies the value from ``-ntomp``.

``-pin``
Can be set to "auto," "on" or "off" to control whether
466 :ref:`mdrun <gmx mdrun>` will attempt to set the affinity of threads to cores.
467 Defaults to "auto," which means that if :ref:`mdrun <gmx mdrun>` detects that all the
468 cores on the node are being used for :ref:`mdrun <gmx mdrun>`, then it should behave
469 like "on," and attempt to set the affinities (unless they are
470 already set by something else).

``-pinoffset``
If ``-pin on``, specifies the logical core number to
474 which :ref:`mdrun <gmx mdrun>` should pin the first thread. When running more than
one instance of :ref:`mdrun <gmx mdrun>` on a node, use this option to avoid
476 pinning threads from different :ref:`mdrun <gmx mdrun>` instances to the same core.

``-pinstride``
If ``-pin on``, specifies the stride in logical core
480 numbers for the cores to which :ref:`mdrun <gmx mdrun>` should pin its threads. When
481 running more than one instance of :ref:`mdrun <gmx mdrun>` on a node, use this option
482 to avoid pinning threads from different :ref:`mdrun <gmx mdrun>` instances to the
483 same core. Use the default, 0, to minimize the number of threads
484 per physical core - this lets :ref:`mdrun <gmx mdrun>` manage the hardware-, OS- and
configuration-specific details of how to map logical cores to
physical cores.

``-ddorder``
Can be set to "interleave," "pp_pme" or "cartesian."
490 Defaults to "interleave," which means that any separate PME ranks
491 will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
492 PME, etc. This generally makes the best use of the available
493 hardware. "pp_pme" maps all PP ranks first, then all PME
494 ranks. "cartesian" is a special-purpose mapping generally useful
495 only on special torus networks with accelerated global
496 communication for Cartesian communicators. Has no effect if there
497 are no separate PME ranks.

``-nb``
Used to set where to execute the short-range non-bonded interactions.
501 Can be set to "auto", "cpu", "gpu."
502 Defaults to "auto," which uses a compatible GPU if available.
503 Setting "cpu" requires that no GPU is used. Setting "gpu" requires
504 that a compatible GPU is available and will be used.

``-pme``
Used to set where to execute the long-range non-bonded interactions.
508 Can be set to "auto", "cpu", "gpu."
509 Defaults to "auto," which uses a compatible GPU if available.
510 Setting "gpu" requires that a compatible GPU is available.
511 Multiple PME ranks are not supported with PME on GPU, so if a GPU is used
512 for the PME calculation -npme must be set to 1.

``-bonded``
Used to set where to execute the bonded interactions that are part of the
516 PP workload for a domain.
517 Can be set to "auto", "cpu", "gpu."
518 Defaults to "auto," which uses a compatible CUDA GPU only when one
519 is available, a GPU is handling short-ranged interactions, and the
520 CPU is handling long-ranged interaction work (electrostatic or
521 LJ). The work for the bonded interactions takes place on the same
GPU as the short-ranged interactions, and cannot be independently assigned.
Setting "gpu" requires that a compatible GPU is available and will be used.

``-update``
Used to set where to execute update and constraints, when present.
529 Can be set to "auto", "cpu", "gpu."
530 Defaults to "auto," which currently always uses the CPU.
531 Setting "gpu" requires that a compatible CUDA GPU is available.
532 Update and constraints on a GPU is currently not supported
533 with free-energy, domain decomposition, virtual sites,
534 Ewald surface correction, replica exchange, the pull code,
535 orientation restraints and computational electrophysiology.

``-gpu_id``
A string that specifies the ID numbers of the GPUs that
539 are available to be used by ranks on each node. For example,
540 "12" specifies that the GPUs with IDs 1 and 2 (as reported
541 by the GPU runtime) can be used by :ref:`mdrun <gmx mdrun>`. This is useful
542 when sharing a node with other computations, or if a GPU that
543 is dedicated to a display should not be used by |Gromacs|.
544 Without specifying this parameter, :ref:`mdrun <gmx mdrun>`
545 will utilize all GPUs. When many GPUs are
546 present, a comma may be used to separate the IDs, so
547 "12,13" would make GPUs 12 and 13 available to :ref:`mdrun <gmx mdrun>`.
548 It could be necessary to use different GPUs on different
549 nodes of a simulation, in which case the environment
550 variable ``GMX_GPU_ID`` can be set differently for the ranks
551 on different nodes to achieve that result.
552 In |Gromacs| versions preceding 2018 this parameter used to
553 specify both GPU availability and GPU task assignment.
554 The latter is now done with the ``-gputasks`` parameter.

``-gputasks``
A string that specifies the ID numbers of the GPUs to be
558 used by corresponding GPU tasks on this node. For example,
559 "0011" specifies that the first two GPU tasks will use GPU 0,
560 and the other two use GPU 1. When using this option, the
561 number of ranks must be known to :ref:`mdrun <gmx mdrun>`, as well as where
562 tasks of different types should be run, such as by using
563 ``-nb gpu`` - only the tasks which are set to run on GPUs
564 count for parsing the mapping. See `Assigning tasks to GPUs`_
565 for more details. Note that ``-gpu_id`` and
566 ``-gputasks`` can not be used at the same time!
567 In |Gromacs| versions preceding 2018 only a single type
568 of GPU task ("PP") could be run on any rank. Now that there is some
569 support for running PME on GPUs, the number of GPU tasks
570 (and the number of GPU IDs expected in the ``-gputasks`` string)
571 can actually be 3 for a single-rank simulation. The IDs
572 still have to be the same in this case, as using multiple GPUs
573 per single rank is not yet implemented.
574 The order of GPU tasks per rank in the string is PP first,
575 PME second. The order of ranks with different kinds of GPU tasks
576 is the same by default, but can be influenced with the ``-ddorder``
577 option and gets quite complex when using multiple nodes.
578 Note that the bonded interactions for a PP task may
579 run on the same GPU as the short-ranged work, or on the CPU,
580 which can be controlled with the ``-bonded`` flag.
581 The GPU task assignment (whether manually set, or automated),
582 will be reported in the :ref:`mdrun <gmx mdrun>` output on
583 the first physical node of the simulation. For example:
587 gmx mdrun -gputasks 0001 -nb gpu -pme gpu -npme 1 -ntmpi 4
589 will produce the following output in the log file/terminal:
593 On host tcbl14 2 GPUs selected for this run.
594 Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
597 In this case, 3 ranks are set by user to compute PP work
598 on GPU 0, and 1 rank to compute PME on GPU 1.
599 The detailed indexing of the GPUs is also reported in the log file.
601 For more information about GPU tasks, please refer to
602 :ref:`Types of GPU tasks<gmx-gpu-tasks>`.

``-pmefft``
Allows choosing whether to execute the 3D FFT computation on a CPU or GPU.
Can be set to "auto", "cpu" or "gpu".
607 When PME is offloaded to a GPU ``-pmefft gpu`` is the default,
608 and the entire PME calculation is executed on the GPU. However,
609 in some cases, e.g. with a relatively slow or older generation GPU
610 combined with fast CPU cores in a run, moving some work off of the GPU
611 back to the CPU by computing FFTs on the CPU can improve performance.
613 .. _gmx-mdrun-single-node:
615 Examples for :ref:`mdrun <gmx mdrun>` on one node
616 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::

    gmx mdrun

Starts :ref:`mdrun <gmx mdrun>` using all the available resources. :ref:`mdrun <gmx mdrun>`
623 will automatically choose a fairly efficient division
624 into thread-MPI ranks, OpenMP threads and assign work
625 to compatible GPUs. Details will vary with hardware
626 and the kind of simulation being run.
::

    gmx mdrun -nt 8

Starts :ref:`mdrun <gmx mdrun>` using 8 threads, which might be thread-MPI
633 or OpenMP threads depending on hardware and the kind
634 of simulation being run.
638 gmx mdrun -ntmpi 2 -ntomp 4
640 Starts :ref:`mdrun <gmx mdrun>` using eight total threads, with two thread-MPI
641 ranks and four OpenMP threads per rank. You should only use
642 these options when seeking optimal performance, and
643 must take care that the ranks you create can have
644 all of their OpenMP threads run on the same socket.
645 The number of ranks should be a multiple of the number of
646 sockets, and the number of cores per node should be
647 a multiple of the number of threads per rank.
651 gmx mdrun -ntmpi 4 -nb gpu -pme cpu
653 Starts :ref:`mdrun <gmx mdrun>` using four thread-MPI ranks. The CPU
654 cores available will be split evenly between the ranks using OpenMP
655 threads. The long-range component of the forces are calculated on
656 CPUs. This may be optimal on hardware where the CPUs are relatively
657 powerful compared to the GPUs. The bonded part of force calculation
658 will automatically be assigned to the GPU, since the long-range
659 component of the forces are calculated on CPU(s).
663 gmx mdrun -ntmpi 1 -nb gpu -pme gpu -bonded gpu
665 Starts :ref:`mdrun <gmx mdrun>` using a single thread-MPI rank that
666 will use all available CPU cores. All interaction types that can run
667 on a GPU will do so. This may be optimal on hardware where the CPUs
668 are extremely weak compared to the GPUs.
672 gmx mdrun -ntmpi 4 -nb gpu -pme cpu -gputasks 0011
674 Starts :ref:`mdrun <gmx mdrun>` using four thread-MPI ranks, and maps them
675 to GPUs with IDs 0 and 1. The CPU cores available will be split evenly between
676 the ranks using OpenMP threads, with the first two ranks offloading short-range
677 nonbonded force calculations to GPU 0, and the last two ranks offloading to GPU 1.
678 The long-range component of the forces are calculated on CPUs. This may be optimal
679 on hardware where the CPUs are relatively powerful compared to the GPUs.
683 gmx mdrun -ntmpi 4 -nb gpu -pme gpu -npme 1 -gputasks 0001
685 Starts :ref:`mdrun <gmx mdrun>` using four thread-MPI ranks, one of which is
686 dedicated to the long-range PME calculation. The first 3 threads offload their
687 short-range non-bonded calculations to the GPU with ID 0, the 4th (PME) thread
688 offloads its calculations to the GPU with ID 1.
692 gmx mdrun -ntmpi 4 -nb gpu -pme gpu -npme 1 -gputasks 0011
694 Similar to the above example, with 3 ranks assigned to calculating short-range
695 non-bonded forces, and one rank assigned to calculate the long-range forces.
696 In this case, 2 of the 3 short-range ranks offload their nonbonded force
697 calculations to GPU 0. The GPU with ID 1 calculates the short-ranged forces of
698 the 3rd short-range rank, as well as the long-range forces of the PME-dedicated
699 rank. Whether this or the above example is optimal will depend on the capabilities
700 of the individual GPUs and the system composition.
::

    gmx mdrun -gpu_id 12

Starts :ref:`mdrun <gmx mdrun>` using GPUs with IDs 1 and 2 (e.g. because
707 GPU 0 is dedicated to running a display). This requires
708 two thread-MPI ranks, and will split the available
709 CPU cores between them using OpenMP threads.
713 gmx mdrun -nt 6 -pin on -pinoffset 0 -pinstride 1
714 gmx mdrun -nt 6 -pin on -pinoffset 6 -pinstride 1
716 Starts two :ref:`mdrun <gmx mdrun>` processes, each with six total threads
717 arranged so that the processes affect each other as little as possible by
718 being assigned to disjoint sets of physical cores.
719 Threads will have their affinities set to particular
720 logical cores, beginning from the first and 7th logical cores, respectively. The
721 above would work well on an Intel CPU with six physical cores and
722 hyper-threading enabled. Use this kind of setup only
723 if restricting :ref:`mdrun <gmx mdrun>` to a subset of cores to share a
724 node with other processes.
725 A word of caution: The mapping of logical CPUs/cores to physical
726 cores may differ between operating systems. On Linux,
727 ``cat /proc/cpuinfo`` can be examined to determine this mapping.
731 mpirun -np 2 gmx_mpi mdrun
733 When using an :ref:`gmx mdrun` compiled with external MPI,
734 this will start two ranks and as many OpenMP threads
735 as the hardware and MPI setup will permit. If the
736 MPI setup is restricted to one node, then the resulting
737 :ref:`gmx mdrun` will be local to that node.
739 .. _gmx-mdrun-multiple-nodes:
741 Running :ref:`mdrun <gmx mdrun>` on more than one node
742 ------------------------------------------------------
744 This requires configuring |Gromacs| to build with an external MPI
745 library. By default, this :ref:`mdrun <gmx mdrun>` executable is run with
746 :ref:`mdrun_mpi`. All of the considerations for running single-node
747 :ref:`mdrun <gmx mdrun>` still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the
MPI environment.
750 Settings such as ``-npme`` are much more important when
751 using multiple nodes. Configuring the MPI environment to
752 produce one rank per core is generally good until one
753 approaches the strong-scaling limit. At that point, using
754 OpenMP to spread the work of an MPI rank over more than one
755 core is needed to continue to improve absolute performance.
756 The location of the scaling limit depends on the processor,
757 presence of GPUs, network, and simulation algorithm, but
758 it is worth measuring at around ~200 particles/core if you
759 need maximum throughput.
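As a sketch (core, rank and thread counts are illustrative), near the scaling limit one
might trade MPI ranks for OpenMP threads like this::

    # on 1024 cores, use 512 ranks with 2 OpenMP threads each
    # instead of 1024 single-threaded ranks
    mpirun -np 512 gmx_mpi mdrun -ntomp 2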
There are further command-line parameters that are relevant in these cases.

``-tunepme``
Defaults to "on." If "on," a simulation will
766 optimize various aspects of the PME and DD algorithms, shifting
767 load between ranks and/or GPUs to maximize throughput. Some
:ref:`mdrun <gmx mdrun>` features are not compatible with this, and these ignore
this option.

``-dlb``
Can be set to "auto," "no," or "yes."
773 Defaults to "auto." Doing Dynamic Load Balancing between MPI ranks
774 is needed to maximize performance. This is particularly important
775 for molecular systems with heterogeneous particle or interaction
776 density. When a certain threshold for performance loss is
777 exceeded, DLB activates and shifts particles between ranks to improve
778 performance. If available, using ``-bonded gpu`` is expected
779 to improve the ability of DLB to maximize performance.
781 During the simulation :ref:`gmx mdrun` must communicate between all
782 PP ranks to compute quantities such as kinetic energy for log file
783 reporting, or perhaps temperature coupling. By default, this happens
784 whenever necessary to honor several :ref:`mdp options <mdp-general>`,
785 so that the period between communication phases is the least common
786 denominator of :mdp:`nstlist`, :mdp:`nstcalcenergy`,
787 :mdp:`nsttcouple`, and :mdp:`nstpcouple`.
789 Note that ``-tunepme`` has more effect when there is more than one
790 :term:`node`, because the cost of communication for the PP and PME
791 ranks differs. It still shifts load between PP and PME ranks, but does
792 not change the number of separate PME ranks in use.
794 Note also that ``-dlb`` and ``-tunepme`` can interfere with each other, so
795 if you experience performance variation that could result from this,
796 you may wish to tune PME separately, and run the result with ``mdrun
797 -notunepme -dlb yes``.
799 The :ref:`gmx tune_pme` utility is available to search a wider
800 range of parameter space, including making safe
801 modifications to the :ref:`tpr` file, and varying ``-npme``.
802 It is only aware of the number of ranks created by
803 the MPI environment, and does not explicitly manage
804 any aspect of OpenMP during the optimization.
806 Examples for :ref:`mdrun <gmx mdrun>` on more than one node
807 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The examples and explanations for single-node :ref:`mdrun <gmx mdrun>` are
810 still relevant, but ``-ntmpi`` is no longer the way
811 to choose the number of MPI ranks.
815 mpirun -np 16 gmx_mpi mdrun
817 Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
818 the hardware by the MPI library, e.g. as specified
819 in an MPI hostfile. The available cores will be
820 automatically split among ranks using OpenMP threads,
821 depending on the hardware and any environment settings
822 such as ``OMP_NUM_THREADS``.
826 mpirun -np 16 gmx_mpi mdrun -npme 5
Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME component.
834 mpirun -np 11 gmx_mpi mdrun -ntomp 2 -npme 6 -ntomp_pme 1
Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads each.
844 mpirun -np 4 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00
846 Starts :ref:`mdrun_mpi` on a machine with two nodes, using
847 four total ranks, each rank with six OpenMP threads,
848 and both ranks on a node sharing GPU with ID 0.
852 mpirun -np 8 gmx_mpi mdrun -ntomp 3 -gputasks 0000
Using the same or similar hardware as above,
855 starts :ref:`mdrun_mpi` on a machine with two nodes, using
856 eight total ranks, each rank with three OpenMP threads,
857 and all four ranks on a node sharing GPU with ID 0.
858 This may or may not be faster than the previous setup
859 on the same hardware.
863 mpirun -np 20 gmx_mpi mdrun -ntomp 4 -gputasks 00
Starts :ref:`mdrun_mpi` with 20 ranks, and assigns the CPU cores evenly
across ranks, with four OpenMP threads per rank. This setup is likely to be
867 suitable when there are ten nodes, each with one GPU, and each node
868 has two sockets each of four cores.
872 mpirun -np 10 gmx_mpi mdrun -gpu_id 1
Starts :ref:`mdrun_mpi` with 10 ranks, and assigns the CPU cores evenly
875 across ranks each to one OpenMP thread. This setup is likely to be
876 suitable when there are ten nodes, each with two GPUs, but another
877 job on each node is using GPU 0. The job scheduler should set the
878 affinity of threads of both jobs to their allocated cores, or the
879 performance of :ref:`mdrun <gmx mdrun>` will suffer greatly.
883 mpirun -np 20 gmx_mpi mdrun -gpu_id 01
885 Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
886 to be suitable when there are ten nodes, each with two
887 GPUs, but there is no need to specify ``-gpu_id`` for the
normal case where all the GPUs on the node are available for use.
891 Approaching the scaling limit
892 -----------------------------
894 There are several aspects of running a |Gromacs| simulation that are important as the number
895 of atoms per core approaches the current scaling limit of ~100 atoms/core.
897 One of these is that the use of ``constraints = all-bonds`` with P-LINCS
898 sets an artificial minimum on the size of domains. You should reconsider the use
899 of constraints to all bonds (and bear in mind possible consequences on the safe maximum for dt),
900 or change lincs_order and lincs_iter suitably.
902 Finding out how to run :ref:`mdrun <gmx mdrun>` better
903 ------------------------------------------------------
905 The Wallcycle module is used for runtime performance measurement of :ref:`gmx mdrun`.
906 At the end of the log file of each run, the "Real cycle and time accounting" section
provides a table with runtime statistics for different parts of the :ref:`gmx mdrun` code,
with one row per part.
The table contains columns indicating the number of ranks and threads that
executed the respective part of the run, and wall-time and cycle
count aggregates (across all threads and ranks) averaged over the entire run.
The last column also shows what percentage of the total runtime each row represents.
Note that the :ref:`gmx mdrun` timer resetting functionalities (``-resethway`` and ``-resetstep``)
914 reset the performance counters and therefore are useful to avoid startup overhead and
915 performance instability (e.g. due to load balancing) at the beginning of the run.
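For example (the step number is illustrative)::

    # reset the counters halfway through the run to exclude startup
    # and initial load-balancing overhead from the reported statistics
    gmx mdrun -resethway

    # or reset them at a chosen step
    gmx mdrun -resetstep 10000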
917 The performance counters are:
919 * Particle-particle during Particle mesh Ewald
920 * Domain decomposition
921 * Domain decomposition communication load
922 * Domain decomposition communication bounds
923 * Virtual site constraints
924 * Send X to Particle mesh Ewald
926 * Launch GPU operations
927 * Communication of coordinates
929 * Waiting + Communication of force
930 * Particle mesh Ewald
935 * PME 3D-FFT Communication
936 * PME solve Lennard-Jones
939 * PME wait for particle-particle
940 * Wait + Receive PME force
943 * Wait PME GPU spread
944 * Wait PME GPU gather
945 * Reduce PME GPU Force
946 * Non-bonded position/force buffer operations
947 * Virtual site spread
949 * AWH (accelerated weight histogram method)
953 * Communication of energies
955 * Add rotational forces
As performance data is collected for every run, it is essential for assessing
and tuning the performance of :ref:`gmx mdrun`. Therefore, it benefits
both code developers and users of the program.
962 The counters are an average of the time/cycles different parts of the simulation take,
963 hence can not directly reveal fluctuations during a single run (although comparisons across
964 multiple runs are still very useful).
966 Counters will appear in an MD log file only if the related parts of the code were
967 executed during the :ref:`gmx mdrun` run. There is also a special counter called "Rest" which
968 indicates the amount of time not accounted for by any of the counters above. Therefore,
a significant amount of "Rest" time (more than a few percent) will often be an indication of
parallelization inefficiency (e.g. serial code), and it is recommended to report it to the
developers.
973 An additional set of subcounters can offer more fine-grained inspection of performance. They are:
975 * Domain decomposition redistribution
976 * DD neighbor search grid + sort
977 * DD setup communication
979 * DD make constraints
981 * Neighbor search grid local
984 * NS search non-local
988 * Listed buffer operations
991 * Launch non-bonded GPU tasks
992 * Launch PME GPU tasks
993 * Ewald force correction
994 * Non-bonded position buffer operations
995 * Non-bonded force buffer operations
997 Subcounters are geared toward developers and have to be enabled during compilation. See
998 :doc:`/dev-manual/build-system` for more information.
1000 .. TODO In future patch:
1001 - red flags in log files, how to interpret wallcycle output
1002 - hints to devs how to extend wallcycles
1004 .. _gmx-mdrun-on-gpu:
1006 Running :ref:`mdrun <gmx mdrun>` with GPUs
1007 ------------------------------------------
1014 To better understand the later sections on different GPU use cases for
1015 calculation of :ref:`short range<gmx-gpu-pp>` and :ref:`PME <gmx-gpu-pme>`,
1016 we first introduce the concept of different GPU tasks. When thinking about
1017 running a simulation, several different kinds of interactions between the atoms
1018 have to be calculated (for more information please refer to the reference manual).
1019 The calculation can thus be split into several distinct parts that are largely independent
1020 of each other (hence can be calculated in any order, e.g. sequentially or concurrently),
1021 with the information from each of them combined at the end of
1022 time step to obtain the final forces on each atom and to propagate the system
1023 to the next time point. For a better understanding also please see the section
1024 on :ref:`domain decomposition <gmx-domain-decomp>`.
For all calculations required in an MD step,
|Gromacs| aims to optimize performance bottom-up,
starting from the lowest level (SIMD unit, cores, sockets, accelerators, etc.).
1029 Therefore many of the individual computation units are
1030 highly tuned for the lowest level of hardware parallelism: the SIMD units.
1031 Additionally, with GPU accelerators used as *co-processors*, some of the work
1032 can be *offloaded*, that is calculated simultaneously/concurrently with the CPU
1033 on the accelerator device, with the result being communicated to the CPU.
1034 Right now, |Gromacs| supports GPU accelerator offload of two tasks:
1035 the short-range :ref:`nonbonded interactions in real space <gmx-gpu-pp>`,
1036 and :ref:`PME <gmx-gpu-pme>`.
1038 **Please note that the solving of PME on GPU is still only the initial
1039 version supporting this behaviour, and comes with a set of limitations
1040 outlined further below.**
1042 Right now, we generally support short-range nonbonded offload with and
1043 without dynamic pruning on a wide range of GPU accelerators
1044 (both NVIDIA and AMD). This is compatible with the grand majority of
1045 the features and parallelization modes and can be used to scale to large machines.
1047 Simultaneously offloading both short-range nonbonded and long-range
PME work to GPU accelerators is a new feature that has some
1049 restrictions in terms of feature and parallelization
1050 compatibility (please see the :ref:`section below <gmx-pme-gpu-limitations>`).
1054 GPU computation of short range nonbonded interactions
1055 .....................................................
1057 .. TODO make this more elaborate and include figures
1059 Using the GPU for the short-ranged nonbonded interactions provides
1060 the majority of the available speed-up compared to run using only the CPU.
1061 Here, the GPU acts as an accelerator that can effectively parallelize
1062 this problem and thus reduce the calculation time.
1066 GPU accelerated calculation of PME
1067 ..................................
1069 .. TODO again, extend this and add some actual useful information concerning performance etc...
1071 |Gromacs| now allows the offloading of the PME calculation
1072 to the GPU, to further reduce the load on the CPU and improve usage overlap between
CPU and GPU. Here, PME is solved on the same GPU that computes the
short-range interactions, in addition to that short-range work.
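A minimal sketch of such a run (assuming a single compatible GPU and a thread-MPI build)::

    # run on a single rank with both short-range non-bonded and PME work offloaded
    gmx mdrun -ntmpi 1 -nb gpu -pme gpu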
1076 .. _gmx-pme-gpu-limitations:
1081 **Please note again the limitations outlined below!**
1083 - PME GPU offload is supported on NVIDIA hardware with CUDA and AMD hardware with OpenCL.
1085 - Only a PME order of 4 is supported on GPUs.
1087 - PME will run on a GPU only when exactly one rank has a
PME task, i.e. decompositions with multiple ranks doing PME are not supported.
1090 - Only single precision is supported.
1092 - Free energy calculations where charges are perturbed are not supported,
1093 because only single PME grids can be calculated.
- Only dynamical integrators are supported (i.e. leap-frog, Velocity Verlet,
1096 stochastic dynamics)
1098 - LJ PME is not supported on GPUs.
1100 GPU accelerated calculation of bonded interactions (CUDA only)
1101 ..............................................................
1103 .. TODO again, extend this and add some actual useful information concerning performance etc...
1105 |Gromacs| now allows the offloading of the bonded part of the PP
1106 workload to a CUDA-compatible GPU. This is treated as part of the PP
1107 work, and requires that the short-ranged non-bonded task also runs on
1108 a GPU. It is an advantage usually only when the CPU is relatively weak
1109 compared with the GPU, perhaps because its workload is too large for
the available cores. This would likely be the case for free-energy calculations.
1113 Assigning tasks to GPUs
1114 .......................
1116 Depending on which tasks should be performed on which hardware, different kinds of
1117 calculations can be combined on the same or different GPUs, according to the information
1118 provided for running :ref:`mdrun <gmx mdrun>`.
1120 It is possible to assign the calculation of the different computational tasks to the same GPU, meaning
that they will share the computational resources on the same device, or to different processing units
that will each perform one task.
An overview of the possible task assignments is given below:
1126 |Gromacs| version 2018:
1128 Two different types of assignable GPU accelerated tasks are available, NB and PME.
1129 Each PP rank has a NB task that can be offloaded to a GPU.
1130 If there is only one rank with a PME task (including if that rank is a
1131 PME-only rank), then that task can be offloaded to a GPU. Such a PME
1132 task can run wholly on the GPU, or have its latter stages run only on the CPU.
1134 Limitations are that PME on GPU does not support PME domain decomposition,
1135 so that only one PME task can be offloaded to a single GPU
1136 assigned to a separate PME rank, while NB can be decomposed and offloaded to multiple GPUs.
1138 |Gromacs| version 2019:
1140 No new assignable GPU tasks are available, but any bonded interactions
1141 may run on the same GPU as the short-ranged interactions for a PP task.
1142 This can be influenced with the ``-bonded`` flag.
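As a sketch of a |Gromacs| 2019 style assignment (assuming a single node with two GPUs;
rank counts and the task mapping are illustrative)::

    # 2 PP ranks offload short-range and bonded work to GPU 0,
    # 1 separate PME rank offloads PME to GPU 1 (task string: PP, PP, PME)
    gmx mdrun -ntmpi 3 -npme 1 -nb gpu -bonded gpu -pme gpu -gputasks 001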
1144 Performance considerations for GPU tasks
1145 ........................................
1147 #) The performance balance depends on the speed and number of CPU cores you
1148 have vs the speed and number of GPUs you have.
1150 #) With slow/old GPUs and/or fast/modern CPUs with many
1151 cores, it might make more sense to let the CPU do PME calculation,
1152 with the GPUs focused on the calculation of the NB.
1154 #) With fast/modern GPUs and/or slow/old CPUs with few cores,
1155 it generally helps to have the GPU do PME.
1157 #) Offloading bonded work to a GPU will often not improve simulation performance
1158 as efficient CPU-based kernels can complete the bonded computation
1159 before the GPU is done with other offloaded work. Therefore,
:ref:`gmx mdrun` will default to no bonded offload when PME is offloaded.
Typical cases where performance can be improved with bonded offload are:
1162 with significant bonded work (e.g. pure lipid or mostly polymer systems with little solvent),
1163 with very few and/or slow CPU cores per GPU, or when the CPU does
1164 other computation (e.g. PME, free energy).
#) It *is* possible to use multiple GPUs with PME offload, for example by letting
3 MPI ranks use one GPU each for short-range interactions,
1169 while a fourth rank does the PME on its GPU.
1171 #) The only way to know for sure what alternative is best for
1172 your machine is to test and check performance.
1174 .. TODO: we need to be more concrete here, i.e. what machine/software aspects to take into consideration, when will default run mode be using PME-GPU and when will it not, when/how should the user reason about testing different settings than the default.
1176 .. TODO someone who knows about the mixed mode should comment further.
1178 Reducing overheads in GPU accelerated runs
1179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1181 In order for CPU cores and GPU(s) to execute concurrently, tasks are
1182 launched and executed asynchronously on the GPU(s) while the CPU cores
1183 execute non-offloaded force computation (like long-range PME electrostatics).
Asynchronous task launches are handled by the GPU device driver and
require CPU involvement. Therefore, the work of scheduling
1186 GPU tasks will incur an overhead that can in some cases significantly
1187 delay or interfere with the CPU execution.
Delays in CPU execution are caused by the latency of launching GPU tasks,
an overhead that can become significant as the simulation ns/day increases
(i.e. with shorter wall-time per step).
The overhead is measured by :ref:`gmx mdrun` and reported in the performance
summary section of the log file ("Launch GPU ops" row).
A few percent of runtime spent in this category is normal,
but in fast-iterating and multi-GPU parallel runs overheads of 10% or more can be observed.
In general, a user can do little to avoid such overheads, but there
are a few cases where tweaks can give performance benefits.
In single-rank runs timing of GPU tasks is by default enabled and,
while in most cases its impact is small, in fast runs performance can be affected.
The performance impact will be most significant on NVIDIA GPUs with CUDA,
less on AMD and Intel with OpenCL.
In these cases, when more than a few percent of "Launch GPU ops" time is observed,
it is recommended to turn off timing by setting the ``GMX_DISABLE_GPU_TIMING``
environment variable.
In parallel runs with many ranks sharing a GPU,
launch overheads can also be reduced by starting fewer thread-MPI
or MPI ranks per GPU; e.g. starting one rank per hardware thread or per core is most often not optimal.
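Both tweaks are illustrated below for a single node with one GPU (a minimal sketch;
the rank and thread counts are placeholders that should be adapted to the hardware)::

    # turn off GPU task timing to reduce launch overhead in fast runs
    export GMX_DISABLE_GPU_TIMING=1
    # use one thread-MPI rank with several OpenMP threads instead of one rank per core
    gmx mdrun -ntmpi 1 -ntomp 8 -nb gpu -pme gpu
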
The second type of overhead, interference of the GPU driver with CPU computation,
is caused by the scheduling and coordination of GPU tasks.
A separate GPU driver thread can require CPU resources
which may clash with the concurrently running non-offloaded tasks,
potentially degrading the performance of PME or bonded force computation.
This effect is most pronounced when using AMD GPUs with OpenCL with
older driver releases (e.g. fglrx 12.15).
To minimize the overhead it is recommended to
leave a CPU hardware thread unused when launching :ref:`gmx mdrun`,
especially on CPUs with high core counts and/or HyperThreading enabled.
E.g. on a machine with a 4-core CPU and eight hardware threads (via HyperThreading) and an AMD GPU,
try ``gmx mdrun -ntomp 7 -pin on``.
This will leave free CPU resources for the GPU task scheduling,
reducing interference with the CPU computation.
Note that assigning fewer resources to :ref:`gmx mdrun` CPU computation
involves a tradeoff which may outweigh the benefits of reduced GPU driver overhead,
in particular without HyperThreading and with few CPU cores.
.. TODO In future patch: any tips not covered above
Running the OpenCL version of mdrun
-----------------------------------
Currently supported hardware architectures are:

- GCN-based AMD GPUs;
- NVIDIA GPUs (with at least OpenCL 1.2 support);
- Intel integrated GPUs.
Make sure that you have the latest drivers installed. For AMD GPUs,
the compute-oriented `ROCm <https://rocm.github.io/>`_ stack is recommended;
alternatively, the AMDGPU-PRO stack is also compatible; using the outdated
and unsupported ``fglrx`` proprietary driver and runtime is not recommended (but
for certain older hardware that may be the only way to obtain support).
In addition, Mesa version 17.0 or newer with LLVM 4.0 or newer is also supported.
For NVIDIA GPUs, using the proprietary driver is
required, as the open source nouveau driver (available in Mesa) does not
provide OpenCL support.
For Intel integrated GPUs, the `Neo driver <https://github.com/intel/compute-runtime/releases>`_ is
recommended.

.. TODO: add more Intel driver recommendations

The minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
also the :ref:`known limitations <opencl-known-limitations>`.
Devices from the AMD GCN architectures (all series) are compatible
and regularly tested; NVIDIA Kepler and later devices (compute capability 3.0 and higher)
are known to work, but before doing production runs always make sure that the |Gromacs| tests
pass successfully on the hardware.
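One way to do this is to run the test suite on the machine that hosts the GPU
(a sketch assuming a CMake build directory with the tests enabled)::

    # from the build directory, after compiling
    make check
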
The OpenCL GPU kernels are compiled at run time. Hence,
building the OpenCL program can take a few seconds, introducing a slight
delay in the :ref:`gmx mdrun` startup. This is not normally a
problem for long production MD, but you might prefer to run jobs
that take only very few steps entirely on the CPU (e.g. see ``-nb`` above).
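For instance, a short test run could be kept on the CPU to avoid the kernel
compilation delay (the step count is only illustrative)::

    gmx mdrun -nb cpu -pme cpu -nsteps 500
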
The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
used to select CUDA devices, or to define a mapping of GPUs to PP
ranks, is used for OpenCL devices.
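For example, to restrict a run to the compatible device with ID 1 (the ID is
illustrative and depends on how devices are enumerated on your machine)::

    gmx mdrun -gpu_id 1
    # equivalent selection via the environment variable
    GMX_GPU_ID=1 gmx mdrun
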
Some other :ref:`OpenCL management <opencl-management>` environment
variables may be of interest to developers.

.. _opencl-known-limitations:

Known limitations of the OpenCL support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Limitations in the current OpenCL support of interest to |Gromacs| users:

- Intel integrated GPUs are supported. Intel CPUs and Xeon Phi are not supported.
- Due to blocking behavior of some asynchronous task enqueuing functions
  in the NVIDIA OpenCL runtime, with the affected driver versions there is
  almost no performance gain when using NVIDIA GPUs.
  The issue affects NVIDIA driver versions up to the 349 series, but it is
  known to be fixed in the 352 series and later driver releases.
- On NVIDIA GPUs the OpenCL kernels achieve much lower performance
  than the equivalent CUDA kernels due to limitations of the NVIDIA OpenCL
  compiler.
Limitations of interest to |Gromacs| developers:

- The current implementation requires a minimum execution width of 16; kernels
  compiled for a narrower execution width (be it due to hardware requirements or
  compiler choice) will not be suitable and will trigger a runtime error.
Performance checklist
---------------------
There are many different aspects that affect the performance of simulations in
|Gromacs|. Most simulations require a lot of computational resources, so
it can be worthwhile to optimize the use of those resources. Several issues
mentioned in the list below could lead to a performance difference of a factor
of 2, so it can be useful to go through the checklist.
|Gromacs| configuration
^^^^^^^^^^^^^^^^^^^^^^^
* Don't use double precision unless you're absolutely sure you need it.
* Compile the FFTW library (yourself) with the correct flags on x86 (in most
  cases, the correct flags are automatically configured).
* On x86, use gcc or icc as the compiler (not pgi or the Cray compiler).
* On POWER, use gcc instead of IBM's xlc.
* Use a new compiler version, especially for gcc (e.g. from version 5 to 6
  the performance of the compiled code improved a lot).
* MPI library: OpenMPI usually has good performance and causes little trouble.
* Make sure your compiler supports OpenMP (some versions of Clang don't).
* If you have GPUs that support either CUDA or OpenCL, use them.

  * Configure with ``-DGMX_GPU=ON`` (add ``-DGMX_USE_OPENCL=ON`` for OpenCL).
  * For CUDA, use the newest CUDA available for your GPU to take advantage of the
    latest performance enhancements.
  * Use a recent GPU driver.

* Make sure you use an :ref:`gmx mdrun` with ``GMX_SIMD`` appropriate for the CPU
  architecture; the log file will contain a warning note if a suboptimal setting is used.
  However, prefer ``AVX2`` over ``AVX512`` in GPU or highly parallel MPI runs (for more
  information see the :ref:`intra-core parallelization information <intra-core-parallelization>`).
  A configuration sketch is shown after this list.
* If compiling on a cluster head node, make sure that ``GMX_SIMD``
  is appropriate for the compute nodes.
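For instance, a CUDA build targeting AVX2-capable CPUs might be configured as follows
(the SIMD value is a placeholder; pick the one matching your compute nodes)::

    cmake .. -DGMX_GPU=ON -DGMX_SIMD=AVX2_256
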
Run setup
^^^^^^^^^

* For an approximately spherical solute, use a rhombic dodecahedron unit cell.
* When using a time-step of 2 fs, use :mdp-value:`constraints=h-bonds`
  (and not :mdp-value:`constraints=all-bonds`), since this is faster, especially with GPUs,
  and most force fields have been parametrized with only bonds involving
  hydrogens constrained.
* You can increase the time-step to 4 or 5 fs when using virtual interaction
  sites (``gmx pdb2gmx -vsite h``).
* For massively parallel runs with PME, you might need to try different numbers
  of PME ranks (``gmx mdrun -npme ???``) to achieve best performance;
  :ref:`gmx tune_pme` can help automate this search (a manual comparison is
  sketched after this list).
* For massively parallel runs (also ``gmx mdrun -multidir``), or with a slow
  network, global communication can become a bottleneck and you can reduce it
  by choosing larger periods for algorithms such as temperature and
  pressure coupling.
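As an illustration of the PME-rank point above, a few settings can be compared by hand
(a sketch only; the rank counts, file names, and the MPI-enabled ``gmx_mpi`` binary
are placeholders to adapt to your cluster)::

    mpirun -np 16 gmx_mpi mdrun -npme 2 -s topol.tpr -deffnm npme2
    mpirun -np 16 gmx_mpi mdrun -npme 4 -s topol.tpr -deffnm npme4
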
Checking and improving performance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Look at the end of the ``md.log`` file to see the performance and the cycle
  counters and wall-clock time for different parts of the MD calculation. The
  PP/PME load ratio is also printed, with a warning when a lot of performance is
  lost due to imbalance.
* Adjust the number of PME ranks and/or the cut-off and PME grid-spacing when
  there is a large PP/PME imbalance. Note that even with a small reported
  imbalance, the automated PME-tuning might have reduced the initial imbalance.
  You could still gain performance by changing the mdp parameters or increasing
  the number of PME ranks.
* If the neighbor searching takes a lot of time, increase nstlist. If a Verlet
  buffer tolerance is used, this is done automatically by :ref:`gmx mdrun`
  and the pair-list buffer is increased to keep the energy drift constant.
* If ``Comm. energies`` takes a lot of time (a note will be printed in the log
  file), increase nstcalcenergy.
* If all communication takes a lot of time, you might be running on too many
  cores, or you could try running combined MPI/OpenMP parallelization with 2
  or 4 OpenMP threads per MPI process; see the sketch after this list.
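A minimal sketch of such a hybrid run (the rank and thread counts and the
MPI-enabled ``gmx_mpi`` binary name are placeholders to adapt to your cluster)::

    # 16 MPI ranks with 4 OpenMP threads each, i.e. 64 cores in total
    mpirun -np 16 gmx_mpi mdrun -ntomp 4 -pin on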