3 Getting good performance from :ref:`mdrun <gmx mdrun>`
4 ======================================================
The |Gromacs| build system and the :ref:`gmx mdrun` tool have a lot of built-in
and configurable intelligence to detect your hardware and make pretty
effective use of it. For a lot of casual and serious use of
:ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
most from your hardware and maximize your scientific output, read on!
11 Hardware background information
12 -------------------------------
13 Modern computer hardware is complex and heterogeneous, so we need to
14 discuss a little bit of background information and set up some
15 definitions. Experienced HPC users can skip this section.
20 A hardware compute unit that actually executes
21 instructions. There is normally more than one core in a
22 processor, often many more.
25 A special kind of memory local to core(s) that is much faster
26 to access than main memory, kind of like the top of a human's
27 desk, compared to their filing cabinet. There are often
28 several layers of caches associated with a core.
A group of cores that share some kind of locality, such as a
shared cache. This makes it more efficient to spread
computational work over cores within a socket than over cores
in different sockets. Modern processors often have more than
one socket.
38 A group of sockets that share coarser-level locality, such as
39 shared access to the same memory without requiring any network
40 hardware. A normal laptop or desktop computer is a node. A
41 node is often the smallest amount of a large compute cluster
42 that a user can request to use.
45 A stream of instructions for a core to execute. There are many
46 different programming abstractions that create and manage
47 spreading computation over multiple threads, such as OpenMP,
48 pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
49 hardware can map more than one software thread to a core; on
50 Intel x86 processors this is called "hyper-threading", while
51 the more general concept is often called SMT for
52 "simultaneous multi-threading". IBM Power8 can for instance use
53 up to 8 hardware threads per core.
54 This feature can usually be enabled or disabled either in
55 the hardware bios or through a setting in the Linux operating
56 system. |Gromacs| can typically make use of this, for a moderate
57 free performance boost. In most cases it will be
58 enabled by default e.g. on new x86 processors, but in some cases
59 the system administrators might have disabled it. If that is the
60 case, ask if they can re-enable it for you. If you are not sure
61 if it is enabled, check the output of the CPU information in
62 the log file and compare with CPU specifications you find online.
64 thread affinity (pinning)
By default, most operating systems allow software threads to migrate
between cores (or hardware threads) to help automatically balance the
workload. However, the performance of :ref:`gmx mdrun` can deteriorate
if this is permitted, and can degrade dramatically, especially when
relying on multi-threading within a rank. To avoid this,
:ref:`gmx mdrun` will by default
set the affinity of its threads to individual cores/hardware threads,
unless the user or software environment has already done so
(or unless less than the entire node is used for the run, i.e. there is
potential for node sharing).
75 Setting thread affinity is sometimes called thread "pinning".
78 The dominant multi-node parallelization-scheme, which provides
79 a standardized language in which programs can be written that
80 work across more than one node.
83 In MPI, a rank is the smallest grouping of hardware used in
84 the multi-node parallelization scheme. That grouping can be
85 controlled by the user, and might correspond to a core, a
86 socket, a node, or a group of nodes. The best choice varies
87 with the hardware, software and compute task. Sometimes an MPI
88 rank is called an MPI process.
91 A graphics processing unit, which is often faster and more
92 efficient than conventional processors for particular kinds of
93 compute workloads. A GPU is always associated with a
particular node, and often a particular socket within that node.
98 A standardized technique supported by many compilers to share
99 a compute workload over multiple cores. Often combined with
100 MPI to achieve hybrid MPI/OpenMP parallelism.
103 A proprietary parallel computing framework and API developed by NVIDIA
104 that allows targeting their accelerator hardware.
105 |Gromacs| uses CUDA for GPU acceleration support with NVIDIA hardware.
108 An open standard-based parallel computing framework that consists
109 of a C99-based compiler and a programming API for targeting heterogeneous
110 and accelerator hardware. |Gromacs| uses OpenCL for GPU acceleration
111 on AMD devices (both GPUs and APUs) and Intel integrated GPUs; NVIDIA
112 hardware is also supported.
A type of CPU instruction by which modern CPU cores can execute
multiple floating-point operations in a single cycle.
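As an aside to the *thread* entry above: one simple way to check whether
SMT/hyper-threading is enabled on a Linux node is to compare the number of
hardware threads per core reported by the operating system; a value of 2 or
more means SMT is enabled. This is just an illustration, not the only way::

    lscpu | grep -i "thread(s) per core"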
119 |Gromacs| background information
120 --------------------------------
121 The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
122 when choosing how to make good use of the hardware. For details,
123 see the Reference Manual. The most important of these are
125 .. _gmx-domain-decomp:
130 The domain decomposition (DD) algorithm decomposes the
131 (short-ranged) component of the non-bonded interactions into
132 domains that share spatial locality, which permits the use of
133 efficient algorithms. Each domain handles all of the
134 particle-particle (PP) interactions for its members, and is
135 mapped to a single MPI rank. Within a PP rank, OpenMP threads
136 can share the workload, and some work can be offloaded to a
137 GPU. The PP rank also handles any bonded interactions for the
138 members of its domain. A GPU may perform work for more than
139 one PP rank, but it is normally most efficient to use a single
140 PP rank per GPU and for that rank to have thousands of
141 particles. When the work of a PP rank is done on the CPU,
142 :ref:`mdrun <gmx mdrun>` will make extensive use of the SIMD
143 capabilities of the core. There are various
144 :ref:`command-line options <controlling-the-domain-decomposition-algorithm>`
145 to control the behaviour of the DD algorithm.
148 The particle-mesh Ewald (PME) algorithm treats the long-ranged
149 component of the non-bonded interactions (Coulomb and/or
150 Lennard-Jones). Either all, or just a subset of ranks may
151 participate in the work for computing the long-ranged component
152 (often inaccurately called simply the "PME"
153 component). Because the algorithm uses a 3D FFT that requires
154 global communication, its performance gets worse as more ranks
155 participate, which can mean it is fastest to use just a subset
156 of ranks (e.g. one-quarter to one-half of the ranks). If
157 there are separate PME ranks, then the remaining ranks handle
158 the PP work. Otherwise, all ranks do both PP and PME work.
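For example, on a run spread over many ranks it can pay off to dedicate
roughly a quarter of the ranks to the long-ranged PME work. A minimal sketch,
assuming an MPI-enabled build launched as ``gmx_mpi`` (adjust the counts to
your machine)::

    mpirun -np 16 gmx_mpi mdrun -npme 4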
160 Running :ref:`mdrun <gmx mdrun>` within a single node
161 -----------------------------------------------------
163 :ref:`gmx mdrun` can be configured and compiled in several different ways that
164 are efficient to use within a single :term:`node`. The default configuration
165 using a suitable compiler will deploy a multi-level hybrid parallelism
166 that uses CUDA, OpenMP and the threading platform native to the
167 hardware. For programming convenience, in |Gromacs|, those native
168 threads are used to implement on a single node the same MPI scheme as
169 would be used between nodes, but much more efficient; this is called
170 thread-MPI. From a user's perspective, real MPI and thread-MPI look
171 almost the same, and |Gromacs| refers to MPI ranks to mean either kind,
172 except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
173 a single node, but runs more slowly than the thread-MPI version.
175 By default, :ref:`gmx mdrun` will inspect the hardware available at run time
176 and do its best to make fairly efficient use of the whole node. The
177 log file, stdout and stderr are used to print diagnostics that
178 inform the user about the choices made and possible consequences.
A number of command-line parameters are available to modify the default
behavior.
184 The total number of threads to use. The default, 0, will start as
185 many threads as available cores. Whether the threads are
thread-MPI ranks, and/or OpenMP threads within such ranks depends on
other settings.
190 The total number of thread-MPI ranks to use. The default, 0,
will start one rank per GPU (if present), and otherwise one rank
per core.
195 The total number of OpenMP threads per rank to start. The
196 default, 0, will start one thread on each available core.
197 Alternatively, :ref:`mdrun <gmx mdrun>` will honor the appropriate system
198 environment variable (e.g. ``OMP_NUM_THREADS``) if set.
201 The total number of ranks to dedicate to the long-ranged
202 component of PME, if used. The default, -1, will dedicate ranks
203 only if the total number of threads is at least 12, and will use
204 around a quarter of the ranks for the long-ranged component.
207 When using PME with separate PME ranks,
208 the total number of OpenMP threads per separate PME ranks.
209 The default, 0, copies the value from ``-ntomp``.
212 Can be set to "auto," "on" or "off" to control whether
213 :ref:`mdrun <gmx mdrun>` will attempt to set the affinity of threads to cores.
214 Defaults to "auto," which means that if :ref:`mdrun <gmx mdrun>` detects that all the
215 cores on the node are being used for :ref:`mdrun <gmx mdrun>`, then it should behave
216 like "on," and attempt to set the affinities (unless they are
217 already set by something else).
220 If ``-pin on``, specifies the logical core number to
221 which :ref:`mdrun <gmx mdrun>` should pin the first thread. When running more than
one instance of :ref:`mdrun <gmx mdrun>` on a node, use this option to avoid
223 pinning threads from different :ref:`mdrun <gmx mdrun>` instances to the same core.
226 If ``-pin on``, specifies the stride in logical core
227 numbers for the cores to which :ref:`mdrun <gmx mdrun>` should pin its threads. When
228 running more than one instance of :ref:`mdrun <gmx mdrun>` on a node, use this option
to avoid pinning threads from different :ref:`mdrun <gmx mdrun>` instances to the
same core. Use the default, 0, to minimize the number of threads
per physical core - this lets :ref:`mdrun <gmx mdrun>` manage the hardware-, OS- and
configuration-specific details of how to map logical cores to
physical cores.
236 Can be set to "interleave," "pp_pme" or "cartesian."
237 Defaults to "interleave," which means that any separate PME ranks
238 will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
239 PME, ... etc. This generally makes the best use of the available
240 hardware. "pp_pme" maps all PP ranks first, then all PME
241 ranks. "cartesian" is a special-purpose mapping generally useful
242 only on special torus networks with accelerated global
243 communication for Cartesian communicators. Has no effect if there
244 are no separate PME ranks.
247 Used to set where to execute the short-range non-bonded interactions.
248 Can be set to "auto", "cpu", "gpu."
249 Defaults to "auto," which uses a compatible GPU if available.
250 Setting "cpu" requires that no GPU is used. Setting "gpu" requires
251 that a compatible GPU is available and will be used.
254 Used to set where to execute the long-range non-bonded interactions.
255 Can be set to "auto", "cpu", "gpu."
256 Defaults to "auto," which uses a compatible GPU if available.
257 Setting "gpu" requires that a compatible GPU is available and will be used.
Multiple PME ranks are not supported with PME on GPU, so if a GPU is used
for the PME calculation ``-npme`` must be set to 1.
262 Used to set where to execute the bonded interactions that are part of the
263 PP workload for a domain.
264 Can be set to "auto", "cpu", "gpu."
265 Defaults to "auto," which uses a compatible CUDA GPU only when one
266 is available, a GPU is handling short-ranged interactions, and the
267 CPU is handling long-ranged interaction work (electrostatic or
268 LJ). The work for the bonded interactions takes place on the same
GPU as the short-ranged interactions, and cannot be independently
assigned.
Setting "gpu" requires that a compatible GPU is available and will
be used.
275 A string that specifies the ID numbers of the GPUs that
276 are available to be used by ranks on this node. For example,
277 "12" specifies that the GPUs with IDs 1 and 2 (as reported
278 by the GPU runtime) can be used by :ref:`mdrun <gmx mdrun>`. This is useful
279 when sharing a node with other computations, or if a GPU
280 is best used to support a display. Without specifying this
281 parameter, :ref:`mdrun <gmx mdrun>` will utilize all GPUs. When many GPUs are
282 present, a comma may be used to separate the IDs, so
283 "12,13" would make GPUs 12 and 13 available to :ref:`mdrun <gmx mdrun>`.
284 It could be necessary to use different GPUs on different
285 nodes of a simulation, in which case the environment
286 variable ``GMX_GPU_ID`` can be set differently for the ranks
287 on different nodes to achieve that result.
288 In |Gromacs| versions preceding 2018 this parameter used to
289 specify both GPU availability and GPU task assignment.
290 The latter is now done with the ``-gputasks`` parameter.
293 A string that specifies the ID numbers of the GPUs to be
294 used by corresponding GPU tasks on this node. For example,
295 "0011" specifies that the first two GPU tasks will use GPU 0,
296 and the other two use GPU 1. When using this option, the
297 number of ranks must be known to :ref:`mdrun <gmx mdrun>`, as well as where
298 tasks of different types should be run, such as by using
299 ``-nb gpu`` - only the tasks which are set to run on GPUs
count for parsing the mapping. See `Assigning tasks to GPUs`_
for more details.
303 In |Gromacs| versions preceding 2018 only a single type
304 of GPU task ("PP") could be run on any rank. Now that there is some
305 support for running PME on GPUs, the number of GPU tasks
306 (and the number of GPU IDs expected in the ``-gputasks`` string)
307 can actually be 2 for a single-rank simulation. The IDs
308 still have to be the same in this case, as using multiple GPUs
309 per single rank is not yet implemented.
310 The order of GPU tasks per rank in the string is PP first,
311 PME second. The order of ranks with different kinds of GPU tasks
312 is the same by default, but can be influenced with the ``-ddorder``
313 option and gets quite complex when using multiple nodes.
314 Note that the bonded interactions for a PP task may
315 run on the same GPU as the short-ranged work, or on the CPU,
316 which can be controlled with the ``-bonded`` flag.
317 The GPU task assignment (whether manually set, or automated),
318 will be reported in the :ref:`mdrun <gmx mdrun>` output on
319 the first physical node of the simulation. For example:
323 gmx mdrun -gputasks 0001 -nb gpu -pme gpu -npme 1 -ntmpi 4
325 will produce the following output in the log file/terminal:
329 On host tcbl14 2 GPUs user-selected for this run.
330 Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
In this case, 3 ranks are set by the user to compute PP work
334 on GPU 0, and 1 rank to compute PME on GPU 1.
335 The detailed indexing of the GPUs is also reported in the log file.
337 For more information about GPU tasks, please refer to
338 :ref:`Types of GPU tasks<gmx-gpu-tasks>`.
341 Allows choosing whether to execute the 3D FFT computation on a CPU or GPU.
Can be set to "auto", "cpu" or "gpu".
343 When PME is offloaded to a GPU ``-pmefft gpu`` is the default,
344 and the entire PME calculation is executed on the GPU. However,
345 in some cases, e.g. with a relatively slow or older generation GPU
346 combined with fast CPU cores in a run, moving some work off of the GPU
347 back to the CPU by computing FFTs on the CPU can improve performance.
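As a sketch of this mixed mode (whether it helps is entirely
hardware-dependent, so treat the flags as a starting point for your own
benchmarking)::

    gmx mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -pmefft cpu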
349 Examples for :ref:`mdrun <gmx mdrun>` on one node
350 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
356 Starts :ref:`mdrun <gmx mdrun>` using all the available resources. :ref:`mdrun <gmx mdrun>`
357 will automatically choose a fairly efficient division
358 into thread-MPI ranks, OpenMP threads and assign work
359 to compatible GPUs. Details will vary with hardware
360 and the kind of simulation being run.
366 Starts :ref:`mdrun <gmx mdrun>` using 8 threads, which might be thread-MPI
367 or OpenMP threads depending on hardware and the kind
368 of simulation being run.
372 gmx mdrun -ntmpi 2 -ntomp 4
374 Starts :ref:`mdrun <gmx mdrun>` using eight total threads, with two thread-MPI
375 ranks and four OpenMP threads per rank. You should only use
376 these options when seeking optimal performance, and
377 must take care that the ranks you create can have
378 all of their OpenMP threads run on the same socket.
379 The number of ranks must be a multiple of the number of
380 sockets, and the number of cores per node must be
381 a multiple of the number of threads per rank.
385 gmx mdrun -ntmpi 4 -nb gpu -pme cpu
387 Starts :ref:`mdrun <gmx mdrun>` using four thread-MPI ranks. The CPU
388 cores available will be split evenly between the ranks using OpenMP
threads. The long-range component of the forces is calculated on
CPUs. This may be optimal on hardware where the CPUs are relatively
powerful compared to the GPUs. The bonded part of the force calculation
will automatically be assigned to the GPU, since the long-range
component of the forces is calculated on the CPU(s).
397 gmx mdrun -ntmpi 1 -nb gpu -pme gpu -bonded gpu
399 Starts :ref:`mdrun <gmx mdrun>` using a single thread-MPI rank that
400 will use all available CPU cores. All interaction types that can run
401 on a GPU will do so. This may be optimal on hardware where the CPUs
402 are extremely weak compared to the GPUs.
406 gmx mdrun -ntmpi 4 -nb gpu -pme cpu -gputasks 0011
408 Starts :ref:`mdrun <gmx mdrun>` using four thread-MPI ranks, and maps them
409 to GPUs with IDs 0 and 1. The CPU cores available will be split evenly between
410 the ranks using OpenMP threads, with the first two ranks offloading short-range
411 nonbonded force calculations to GPU 0, and the last two ranks offloading to GPU 1.
The long-range component of the forces is calculated on CPUs. This may be optimal
413 on hardware where the CPUs are relatively powerful compared to the GPUs.
417 gmx mdrun -ntmpi 4 -nb gpu -pme gpu -npme 1 -gputasks 0001
419 Starts :ref:`mdrun <gmx mdrun>` using four thread-MPI ranks, one of which is
dedicated to the long-range PME calculation. The first 3 ranks offload their
short-range non-bonded calculations to the GPU with ID 0, while the 4th (PME) rank
offloads its calculations to the GPU with ID 1.
426 gmx mdrun -ntmpi 4 -nb gpu -pme gpu -npme 1 -gputasks 0011
428 Similar to the above example, with 3 ranks assigned to calculating short-range
429 non-bonded forces, and one rank assigned to calculate the long-range forces.
430 In this case, 2 of the 3 short-range ranks offload their nonbonded force
431 calculations to GPU 0. The GPU with ID 1 calculates the short-ranged forces of
432 the 3rd short-range rank, as well as the long-range forces of the PME-dedicated
433 rank. Whether this or the above example is optimal will depend on the capabilities
434 of the individual GPUs and the system composition.
440 Starts :ref:`mdrun <gmx mdrun>` using GPUs with IDs 1 and 2 (e.g. because
441 GPU 0 is dedicated to running a display). This requires
442 two thread-MPI ranks, and will split the available
443 CPU cores between them using OpenMP threads.
447 gmx mdrun -nt 6 -pin on -pinoffset 0 -pinstride 1
448 gmx mdrun -nt 6 -pin on -pinoffset 6 -pinstride 1
450 Starts two :ref:`mdrun <gmx mdrun>` processes, each with six total threads
451 arranged so that the processes affect each other as little as possible by
452 being assigned to disjoint sets of physical cores.
453 Threads will have their affinities set to particular
454 logical cores, beginning from the first and 7th logical cores, respectively. The
455 above would work well on an Intel CPU with six physical cores and
456 hyper-threading enabled. Use this kind of setup only
457 if restricting :ref:`mdrun <gmx mdrun>` to a subset of cores to share a
458 node with other processes.
462 mpirun -np 2 gmx_mpi mdrun
464 When using an :ref:`gmx mdrun` compiled with external MPI,
465 this will start two ranks and as many OpenMP threads
466 as the hardware and MPI setup will permit. If the
467 MPI setup is restricted to one node, then the resulting
468 :ref:`gmx mdrun` will be local to that node.
470 Running :ref:`mdrun <gmx mdrun>` on more than one node
471 ------------------------------------------------------
472 This requires configuring |Gromacs| to build with an external MPI
473 library. By default, this :ref:`mdrun <gmx mdrun>` executable is run with
474 :ref:`mdrun_mpi`. All of the considerations for running single-node
475 :ref:`mdrun <gmx mdrun>` still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the
MPI environment.
478 Settings such as ``-npme`` are much more important when
479 using multiple nodes. Configuring the MPI environment to
480 produce one rank per core is generally good until one
481 approaches the strong-scaling limit. At that point, using
482 OpenMP to spread the work of an MPI rank over more than one
483 core is needed to continue to improve absolute performance.
484 The location of the scaling limit depends on the processor,
485 presence of GPUs, network, and simulation algorithm, but
it is worth measuring at around 200 particles/core if you
487 need maximum throughput.
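As a rough illustration with hypothetical numbers: a system of about
80,000 particles divided by ~200 particles/core suggests roughly 400 cores;
near that limit it may be better to use two OpenMP threads per rank rather
than one rank per core, e.g.::

    mpirun -np 200 gmx_mpi mdrun -ntomp 2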
There are further command-line parameters that are relevant in these
cases.
493 Defaults to "on." If "on," a Verlet-scheme simulation will
494 optimize various aspects of the PME and DD algorithms, shifting
495 load between ranks and/or GPUs to maximize throughput. Some
:ref:`mdrun <gmx mdrun>` features are not compatible with this, and these ignore
the setting.
500 Can be set to "auto," "no," or "yes."
501 Defaults to "auto." Doing Dynamic Load Balancing between MPI ranks
502 is needed to maximize performance. This is particularly important
503 for molecular systems with heterogeneous particle or interaction
504 density. When a certain threshold for performance loss is
505 exceeded, DLB activates and shifts particles between ranks to improve
506 performance. If available, using ``-bonded gpu`` is expected
507 to improve the ability of DLB to maximize performance.
510 During the simulation :ref:`gmx mdrun` must communicate between all ranks to
511 compute quantities such as kinetic energy. By default, this
512 happens whenever plausible, and is influenced by a lot of
513 :ref:`mdp options. <mdp-general>` The period between communication phases
514 must be a multiple of :mdp:`nstlist`, and defaults to
515 the minimum of :mdp:`nstcalcenergy` and :mdp:`nstlist`.
516 ``mdrun -gcom`` sets the number of steps that must elapse between
517 such communication phases, which can improve performance when
running on a lot of ranks. Note that this means that, e.g.,
519 temperature coupling algorithms will
520 effectively remain at constant energy until the next
521 communication phase. :ref:`gmx mdrun` will always honor the
522 setting of ``mdrun -gcom``, by changing :mdp:`nstcalcenergy`,
523 :mdp:`nstenergy`, :mdp:`nstlog`, :mdp:`nsttcouple` and/or
524 :mdp:`nstpcouple` if necessary.
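For example, a heavily parallel run might reduce the frequency of global
communication to once every 100 steps (the value that actually pays off is
system- and machine-dependent)::

    mpirun -np 512 gmx_mpi mdrun -gcom 100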
526 Note that ``-tunepme`` has more effect when there is more than one
527 :term:`node`, because the cost of communication for the PP and PME
528 ranks differs. It still shifts load between PP and PME ranks, but does
529 not change the number of separate PME ranks in use.
531 Note also that ``-dlb`` and ``-tunepme`` can interfere with each other, so
532 if you experience performance variation that could result from this,
533 you may wish to tune PME separately, and run the result with ``mdrun
534 -notunepme -dlb yes``.
536 The :ref:`gmx tune_pme` utility is available to search a wider
537 range of parameter space, including making safe
538 modifications to the :ref:`tpr` file, and varying ``-npme``.
539 It is only aware of the number of ranks created by
540 the MPI environment, and does not explicitly manage
541 any aspect of OpenMP during the optimization.
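A minimal sketch of such a search, assuming an MPI build and that
:ref:`gmx tune_pme` picks up the launcher and mdrun command from the
``MPIRUN`` and ``MDRUN`` environment variables (check ``gmx tune_pme -h``
for the exact options and variables of your installation)::

    export MPIRUN="mpirun"
    export MDRUN="gmx_mpi mdrun"
    gmx tune_pme -np 64 -s topol.tpr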
543 Examples for :ref:`mdrun <gmx mdrun>` on more than one node
544 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The examples and explanations for single-node :ref:`mdrun <gmx mdrun>` are
546 still relevant, but ``-ntmpi`` is no longer the way
547 to choose the number of MPI ranks.
551 mpirun -np 16 gmx_mpi mdrun
553 Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
554 the hardware by the MPI library, e.g. as specified
555 in an MPI hostfile. The available cores will be
556 automatically split among ranks using OpenMP threads,
557 depending on the hardware and any environment settings
558 such as ``OMP_NUM_THREADS``.
562 mpirun -np 16 gmx_mpi mdrun -npme 5
Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME
component.
570 mpirun -np 11 gmx_mpi mdrun -ntomp 2 -npme 6 -ntomp_pme 1
Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads
each.
580 mpirun -np 4 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00
582 Starts :ref:`mdrun_mpi` on a machine with two nodes, using
583 four total ranks, each rank with six OpenMP threads,
584 and both ranks on a node sharing GPU with ID 0.
588 mpirun -np 8 gmx_mpi mdrun -ntomp 3 -gputasks 0000
Using the same/similar hardware as above,
591 starts :ref:`mdrun_mpi` on a machine with two nodes, using
592 eight total ranks, each rank with three OpenMP threads,
593 and all four ranks on a node sharing GPU with ID 0.
594 This may or may not be faster than the previous setup
595 on the same hardware.
599 mpirun -np 20 gmx_mpi mdrun -ntomp 4 -gputasks 00
Starts :ref:`mdrun_mpi` with 20 ranks, each with four OpenMP threads, so
that the CPU cores are split evenly across the ranks. This setup is likely to be
suitable when there are ten nodes, each with one GPU, and each node
has two sockets each of four cores.
608 mpirun -np 10 gmx_mpi mdrun -gpu_id 1
Starts :ref:`mdrun_mpi` with 10 ranks, one per node, and assigns the CPU cores
available to each rank as OpenMP threads. This setup is likely to be
612 suitable when there are ten nodes, each with two GPUs, but another
613 job on each node is using GPU 0. The job scheduler should set the
614 affinity of threads of both jobs to their allocated cores, or the
615 performance of :ref:`mdrun <gmx mdrun>` will suffer greatly.
619 mpirun -np 20 gmx_mpi mdrun -gpu_id 01
621 Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
622 to be suitable when there are ten nodes, each with two
623 GPUs, but there is no need to specify ``-gpu_id`` for the
normal case where all the GPUs on the node are available
for use.
627 .. _controlling-the-domain-decomposition-algorithm:
629 Controlling the domain decomposition algorithm
630 ----------------------------------------------
631 This section lists all the options that affect how the domain
decomposition algorithm decomposes the workload to the available
parallel hardware.
636 Can be used to set the required maximum distance for inter
637 charge-group bonded interactions. Communication for two-body
638 bonded interactions below the non-bonded cut-off distance always
639 comes for free with the non-bonded communication. Particles beyond
640 the non-bonded cut-off are only communicated when they have
641 missing bonded interactions; this means that the extra cost is
642 minor and nearly independent of the value of ``-rdd``. With dynamic
643 load balancing, option ``-rdd`` also sets the lower limit for the
644 domain decomposition cell sizes. By default ``-rdd`` is determined
645 by :ref:`gmx mdrun` based on the initial coordinates. The chosen value will
646 be a balance between interaction range and communication cost.
649 On by default. When inter charge-group bonded interactions are
650 beyond the bonded cut-off distance, :ref:`gmx mdrun` terminates with an
651 error message. For pair interactions and tabulated bonds that do
652 not generate exclusions, this check can be turned off with the
653 option ``-noddcheck``.
656 When constraints are present, option ``-rcon`` influences
657 the cell size limit as well.
658 Particles connected by NC constraints, where NC is the LINCS order
plus 1, should not be beyond the smallest cell size. An error
message is generated when this happens, and the user should change
661 the decomposition or decrease the LINCS order and increase the
662 number of LINCS iterations. By default :ref:`gmx mdrun` estimates the
663 minimum cell size required for P-LINCS in a conservative
664 fashion. For high parallelization, it can be useful to set the
665 distance required for P-LINCS with ``-rcon``.
668 Sets the minimum allowed x, y and/or z scaling of the cells with
669 dynamic load balancing. :ref:`gmx mdrun` will ensure that the cells can
670 scale down by at least this factor. This option is used for the
671 automated spatial decomposition (when not using ``-dd``) as well as
672 for determining the number of grid pulses, which in turn sets the
673 minimum allowed cell size. Under certain circumstances the value
674 of ``-dds`` might need to be adjusted to account for high or low
675 spatial inhomogeneity of the system.
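These options are all passed directly to :ref:`gmx mdrun`; for example, to
request a larger bonded communication distance, a specific P-LINCS distance
and a minimum cell scaling (the values here are purely illustrative)::

    gmx mdrun -rdd 2.0 -rcon 1.2 -dds 0.8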
677 Finding out how to run :ref:`mdrun <gmx mdrun>` better
678 ------------------------------------------------------
680 The Wallcycle module is used for runtime performance measurement of :ref:`gmx mdrun`.
681 At the end of the log file of each run, the "Real cycle and time accounting" section
682 provides a table with runtime statistics for different parts of the :ref:`gmx mdrun` code
683 in rows of the table.
The table contains columns indicating the number of ranks and threads that
executed the respective part of the run, and wall-time and cycle
count aggregates (across all threads and ranks) averaged over the entire run.
The last column also shows what percentage of the total runtime each row represents.
Note that the :ref:`gmx mdrun` timer resetting functionalities (``-resethway`` and ``-resetstep``)
689 reset the performance counters and therefore are useful to avoid startup overhead and
690 performance instability (e.g. due to load balancing) at the beginning of the run.
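For example, a short benchmark run that discards the counters from the first
half of the run (so that load balancing and initialization do not skew the
statistics) could look like this::

    gmx mdrun -nsteps 20000 -resethway -noconfout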
692 The performance counters are:
694 * Particle-particle during Particle mesh Ewald
695 * Domain decomposition
696 * Domain decomposition communication load
697 * Domain decomposition communication bounds
698 * Virtual site constraints
699 * Send X to Particle mesh Ewald
701 * Launch GPU operations
702 * Communication of coordinates
704 * Waiting + Communication of force
705 * Particle mesh Ewald
710 * PME 3D-FFT Communication
711 * PME solve Lennard-Jones
714 * PME wait for particle-particle
715 * Wait + Receive PME force
718 * Wait PME GPU spread
719 * Wait PME GPU gather
720 * Reduce PME GPU Force
721 * Non-bonded position/force buffer operations
722 * Virtual site spread
724 * AWH (accelerated weight histogram method)
728 * Communication of energies
730 * Add rotational forces
As performance data are collected for every run, they are essential to assessing
and tuning the performance of :ref:`gmx mdrun`. Therefore, they benefit
both code developers and users of the program.
The counters are an average of the time/cycles different parts of the simulation take,
hence cannot directly reveal fluctuations during a single run (although comparisons across
739 multiple runs are still very useful).
741 Counters will appear in an MD log file only if the related parts of the code were
742 executed during the :ref:`gmx mdrun` run. There is also a special counter called "Rest" which
indicates the amount of time not accounted for by any of the counters above. Therefore,
a significant amount of "Rest" time (more than a few percent) will often be an indication of
parallelization inefficiency (e.g. serial code) and it is recommended to be reported to the
developers.
748 An additional set of subcounters can offer more fine-grained inspection of performance. They are:
750 * Domain decomposition redistribution
751 * DD neighbor search grid + sort
752 * DD setup communication
754 * DD make constraints
756 * Neighbor search grid local
759 * NS search non-local
763 * Listed buffer operations
766 * Launch non-bonded GPU tasks
767 * Launch PME GPU tasks
768 * Ewald force correction
769 * Non-bonded position buffer operations
770 * Non-bonded force buffer operations
772 Subcounters are geared toward developers and have to be enabled during compilation. See
773 :doc:`/dev-manual/build-system` for more information.
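As a sketch, subcounters can be enabled at configure time with the
corresponding CMake option (named ``GMX_CYCLE_SUBCOUNTERS`` in recent
versions; check the build-system documentation for your release)::

    cmake .. -DGMX_CYCLE_SUBCOUNTERS=ON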
775 TODO In future patch:
776 - red flags in log files, how to interpret wallcycle output
777 - hints to devs how to extend wallcycles
779 .. _gmx-mdrun-on-gpu:
781 Running :ref:`mdrun <gmx mdrun>` with GPUs
782 ------------------------------------------
789 To better understand the later sections on different GPU use cases for
790 calculation of :ref:`short range<gmx-gpu-pp>` and :ref:`PME <gmx-gpu-pme>`,
791 we first introduce the concept of different GPU tasks. When thinking about
792 running a simulation, several different kinds of interactions between the atoms
793 have to be calculated (for more information please refer to the reference manual).
794 The calculation can thus be split into several distinct parts that are largely independent
795 of each other (hence can be calculated in any order, e.g. sequentially or concurrently),
with the information from each of them combined at the end of
the time step to obtain the final forces on each atom and to propagate the system
798 to the next time point. For a better understanding also please see the section
799 on :ref:`domain decomposition <gmx-domain-decomp>`.
Of all the calculations required for an MD step,
|Gromacs| aims to optimize performance bottom-up
from the lowest level (SIMD unit, cores, sockets, accelerators, etc.).
804 Therefore many of the individual computation units are
805 highly tuned for the lowest level of hardware parallelism: the SIMD units.
806 Additionally, with GPU accelerators used as *co-processors*, some of the work
807 can be *offloaded*, that is calculated simultaneously/concurrently with the CPU
808 on the accelerator device, with the result being communicated to the CPU.
809 Right now, |Gromacs| supports GPU accelerator offload of two tasks:
810 the short-range :ref:`nonbonded interactions in real space <gmx-gpu-pp>`,
811 and :ref:`PME <gmx-gpu-pme>`.
**Please note that the PME GPU implementation is still an initial
version and comes with a set of limitations
outlined further below.**
817 Right now, we generally support short-range nonbonded offload with and
818 without dynamic pruning on a wide range of GPU accelerators
(both NVIDIA and AMD). This is compatible with the great majority of
820 the features and parallelization modes and can be used to scale to large machines.
822 Simultaneously offloading both short-range nonbonded and long-range
PME work to GPU accelerators is a new feature that has some
824 restrictions in terms of feature and parallelization
825 compatibility (please see the :ref:`section below <gmx-pme-gpu-limitations>`).
829 GPU computation of short range nonbonded interactions
830 .....................................................
832 .. TODO make this more elaborate and include figures
834 Using the GPU for the short-ranged nonbonded interactions provides
the majority of the available speed-up compared to a run using only the CPU.
836 Here, the GPU acts as an accelerator that can effectively parallelize
837 this problem and thus reduce the calculation time.
841 GPU accelerated calculation of PME
842 ..................................
844 .. TODO again, extend this and add some actual useful information concerning performance etc...
846 |Gromacs| now allows the offloading of the PME calculation
847 to the GPU, to further reduce the load on the CPU and improve usage overlap between
CPU and GPU. Here, PME is solved on the same GPU as the short-range
interactions, in addition to the calculation of those interactions.
851 .. _gmx-pme-gpu-limitations:
856 **Please note again the limitations outlined below!**
858 - Only compilation with CUDA is supported.
860 - Only a PME order of 4 is supported on GPUs.
862 - PME will run on a GPU only when exactly one rank has a
PME task, i.e. decompositions with multiple ranks doing PME are not supported.
865 - Only single precision is supported.
867 - Free energy calculations where charges are perturbed are not supported,
868 because only single PME grids can be calculated.
- Only dynamical integrators are supported (i.e. leap-frog, Velocity Verlet, etc.).
873 - LJ PME is not supported on GPUs.
875 GPU accelerated calculation of bonded interactions (CUDA only)
876 ..............................................................
878 .. TODO again, extend this and add some actual useful information concerning performance etc...
880 |Gromacs| now allows the offloading of the bonded part of the PP
881 workload to a CUDA-compatible GPU. This is treated as part of the PP
882 work, and requires that the short-ranged non-bonded task also runs on
883 a GPU. It is an advantage usually only when the CPU is relatively weak
884 compared with the GPU, perhaps because its workload is too large for
the available cores. This would likely be the case for free-energy
calculations.
888 Assigning tasks to GPUs
889 .......................
891 Depending on which tasks should be performed on which hardware, different kinds of
892 calculations can be combined on the same or different GPUs, according to the information
893 provided for running :ref:`mdrun <gmx mdrun>`.
895 It is possible to assign the calculation of the different computational tasks to the same GPU, meaning
that they will share the computational resources on the same device, or to different processing units
that will each perform one task.
An overview of the possible task assignments is given below:
901 |Gromacs| version 2018:
903 Two different types of assignable GPU accelerated tasks are available, NB and PME.
Each PP rank has an NB task that can be offloaded to a GPU.
905 If there is only one rank with a PME task (including if that rank is a
906 PME-only rank), then that task can be offloaded to a GPU. Such a PME
907 task can run wholly on the GPU, or have its latter stages run only on the CPU.
909 Limitations are that PME on GPU does not support PME domain decomposition,
910 so that only one PME task can be offloaded to a single GPU
911 assigned to a separate PME rank, while NB can be decomposed and offloaded to multiple GPUs.
913 |Gromacs| version 2019:
915 No new assignable GPU tasks are available, but any bonded interactions
916 may run on the same GPU as the short-ranged interactions for a PP task.
917 This can be influenced with the ``-bonded`` flag.
919 Performance considerations for GPU tasks
920 ........................................
922 #) The performance balance depends on the speed and number of CPU cores you
923 have vs the speed and number of GPUs you have.
925 #) With slow/old GPUs and/or fast/modern CPUs with many
926 cores, it might make more sense to let the CPU do PME calculation,
927 with the GPUs focused on the calculation of the NB.
929 #) With fast/modern GPUs and/or slow/old CPUs with few cores,
930 it generally helps to have the GPU do PME. With very few/weak
931 cores, it can help to have the GPU do bonded interactions also.
#) It *is* possible to use multiple GPUs with PME offload by letting e.g.
3 MPI ranks use one GPU each for short-range interactions,
while a fourth rank does the PME on its GPU.
938 #) The only way to know for sure what alternative is best for
939 your machine is to test and check performance.
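A simple way to do this is to run short, counter-reset benchmarks of the
candidate setups and compare the performance summaries in their log files
(``bench.tpr`` and the log file names are placeholders)::

    gmx mdrun -ntmpi 1 -s bench.tpr -nsteps 10000 -resethway -nb gpu -pme cpu -g pme_cpu.log
    gmx mdrun -ntmpi 1 -s bench.tpr -nsteps 10000 -resethway -nb gpu -pme gpu -g pme_gpu.log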
941 .. TODO: we need to be more concrete here, i.e. what machine/software aspects to take into consideration, when will default run mode be using PME-GPU and when will it not, when/how should the user reason about testing different settings than the default.
943 .. TODO someone who knows about the mixed mode should comment further.
945 Reducing overheads in GPU accelerated runs
946 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
948 In order for CPU cores and GPU(s) to execute concurrently, tasks are
949 launched and executed asynchronously on the GPU(s) while the CPU cores
950 execute non-offloaded force computation (like long-range PME electrostatics).
Asynchronous task launches are handled by the GPU device driver and
952 require CPU involvement. Therefore, the work of scheduling
953 GPU tasks will incur an overhead that can in some cases significantly
954 delay or interfere with the CPU execution.
956 Delays in CPU execution are caused by the latency of launching GPU tasks,
957 an overhead that can become significant as simulation ns/day increases
958 (i.e. with shorter wall-time per step).
959 The overhead is measured by :ref:`gmx mdrun` and reported in the performance
960 summary section of the log file ("Launch GPU ops" row).
961 A few percent of runtime spent in this category is normal,
962 but in fast-iterating and multi-GPU parallel runs 10% or larger overheads can be observed.
963 In general, a user can do little to avoid such overheads, but there
964 are a few cases where tweaks can give performance benefits.
965 In single-rank runs timing of GPU tasks is by default enabled and,
966 while in most cases its impact is small, in fast runs performance can be affected.
967 The performance impact will be most significant on NVIDIA GPUs with CUDA,
968 less on AMD and Intel with OpenCL.
969 In these cases, when more than a few percent of "Launch GPU ops" time is observed,
970 it is recommended to turn off timing by setting the ``GMX_DISABLE_GPU_TIMING``
971 environment variable.
972 In parallel runs with many ranks sharing a GPU,
973 launch overheads can also be reduced by starting fewer thread-MPI
974 or MPI ranks per GPU; e.g. most often one rank per thread or core is not optimal.
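For example, if the "Launch GPU ops" row of a fast single-rank run grows to
more than a few percent, GPU task timing can be switched off for the next
run like this (the environment variable is read by :ref:`gmx mdrun` at
startup)::

    export GMX_DISABLE_GPU_TIMING=1
    gmx mdrun -ntmpi 1 -nb gpu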
976 The second type of overhead, interference of the GPU driver with CPU computation,
977 is caused by the scheduling and coordination of GPU tasks.
978 A separate GPU driver thread can require CPU resources
979 which may clash with the concurrently running non-offloaded tasks,
980 potentially degrading the performance of PME or bonded force computation.
981 This effect is most pronounced when using AMD GPUs with OpenCL with
982 older driver releases (e.g. fglrx 12.15).
983 To minimize the overhead it is recommended to
984 leave a CPU hardware thread unused when launching :ref:`gmx mdrun`,
985 especially on CPUs with high core counts and/or HyperThreading enabled.
986 E.g. on a machine with a 4-core CPU and eight threads (via HyperThreading) and an AMD GPU,
987 try ``gmx mdrun -ntomp 7 -pin on``.
988 This will leave free CPU resources for the GPU task scheduling
989 reducing interference with CPU computation.
990 Note that assigning fewer resources to :ref:`gmx mdrun` CPU computation
991 involves a tradeoff which may outweigh the benefits of reduced GPU driver overhead,
992 in particular without HyperThreading and with few CPU cores.
994 TODO In future patch: any tips not covered above
996 Running the OpenCL version of mdrun
997 -----------------------------------
Currently supported hardware architectures are:

- GCN-based AMD GPUs;
- NVIDIA GPUs (with at least OpenCL 1.2 support);
- Intel integrated GPUs.
1003 Make sure that you have the latest drivers installed. For AMD GPUs,
1004 the compute-oriented `ROCm <https://rocm.github.io/>`_ stack is recommended;
alternatively, the AMDGPU-PRO stack is also compatible; using the outdated
and unsupported ``fglrx`` proprietary driver and runtime is not recommended (but
for certain older hardware that may be the only way to obtain support).
In addition, Mesa version 17.0 or newer with LLVM 4.0 or newer is also supported.
1009 For NVIDIA GPUs, using the proprietary driver is
1010 required as the open source nouveau driver (available in Mesa) does not
provide OpenCL support.
For Intel integrated GPUs, the `Neo driver <https://github.com/intel/compute-runtime/releases>`_ is
recommended.
1014 TODO: add more Intel driver recommendations
1015 The minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
1016 also the :ref:`known limitations <opencl-known-limitations>`.
1018 Devices from the AMD GCN architectures (all series) are compatible
1019 and regularly tested; NVIDIA Fermi and later (compute capability 2.0)
1020 are known to work, but before doing production runs always make sure that the |Gromacs| tests
1021 pass successfully on the hardware.
1023 The OpenCL GPU kernels are compiled at run time. Hence,
1024 building the OpenCL program can take a few seconds, introducing a slight
1025 delay in the :ref:`gmx mdrun` startup. This is not normally a
1026 problem for long production MD, but you might prefer to do some kinds
1027 of work, e.g. that runs very few steps, on just the CPU (e.g. see ``-nb`` above).
1029 The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
1030 used to select CUDA devices, or to define a mapping of GPUs to PP
1031 ranks, is used for OpenCL devices.
1033 Some other :ref:`OpenCL management <opencl-management>` environment
1034 variables may be of interest to developers.
1036 .. _opencl-known-limitations:
1038 Known limitations of the OpenCL support
1039 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1041 Limitations in the current OpenCL support of interest to |Gromacs| users:
1043 - Intel integrated GPUs are supported. Intel CPUs and Xeon Phi are not supported.
1044 - Due to blocking behavior of some asynchronous task enqueuing functions
1045 in the NVIDIA OpenCL runtime, with the affected driver versions there is
1046 almost no performance gain when using NVIDIA GPUs.
The issue affects NVIDIA driver versions up to 349 series, but it is
known to be fixed in 352 and later driver releases.
1049 - On NVIDIA GPUs the OpenCL kernels achieve much lower performance
than the equivalent CUDA kernels due to limitations of the NVIDIA OpenCL
compiler.
1052 - PME is currently only supported on AMD devices, because of known
issues with devices from other vendors.
1055 Limitations of interest to |Gromacs| developers:
1057 - The current implementation is not compatible with OpenCL devices that are
not using warp/wavefronts or for which the warp/wavefront size is not a
multiple of 32.
1061 Performance checklist
1062 ---------------------
1064 There are many different aspects that affect the performance of simulations in
1065 |Gromacs|. Most simulations require a lot of computational resources, therefore
1066 it can be worthwhile to optimize the use of those resources. Several issues
1067 mentioned in the list below could lead to a performance difference of a factor
of 2, so it can be useful to go through the checklist.
1070 |Gromacs| configuration
1071 ^^^^^^^^^^^^^^^^^^^^^^^
* Don't use double precision unless you're absolutely sure you need it.
1074 * Compile the FFTW library (yourself) with the correct flags on x86 (in most
1075 cases, the correct flags are automatically configured).
1076 * On x86, use gcc or icc as the compiler (not pgi or the Cray compiler).
1077 * On POWER, use gcc instead of IBM's xlc.
1078 * Use a new compiler version, especially for gcc (e.g. from version 5 to 6
1079 the performance of the compiled code improved a lot).
1080 * MPI library: OpenMPI usually has good performance and causes little trouble.
1081 * Make sure your compiler supports OpenMP (some versions of Clang don't).
1082 * If you have GPUs that support either CUDA or OpenCL, use them.
1084 * Configure with ``-DGMX_GPU=ON`` (add ``-DGMX_USE_OPENCL=ON`` for OpenCL).
* For CUDA, use the newest CUDA available for your GPU to take advantage of the
1086 latest performance enhancements.
1087 * Use a recent GPU driver.
1088 * If compiling on a cluster head node, make sure that ``GMX_SIMD``
1089 is appropriate for the compute nodes.
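A configuration along these lines might look as follows (a sketch only;
adjust the compilers, SIMD level and GPU options to your machine)::

    cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
             -DGMX_GPU=ON -DGMX_SIMD=AVX2_256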
Run setup
^^^^^^^^^

* For an approximately spherical solute, use a rhombic dodecahedron unit cell.
1095 * When using a time-step of 2 fs, use :mdp-value:`constraints=h-bonds`
1096 (and not :mdp-value:`constraints=all-bonds`), since this is faster, especially with GPUs,
1097 and most force fields have been parametrized with only bonds involving
1098 hydrogens constrained.
1099 * You can increase the time-step to 4 or 5 fs when using virtual interaction
1100 sites (``gmx pdb2gmx -vsite h``).
1101 * For massively parallel runs with PME, you might need to try different numbers
1102 of PME ranks (``gmx mdrun -npme ???``) to achieve best performance;
1103 :ref:`gmx tune_pme` can help automate this search.
1104 * For massively parallel runs (also ``gmx mdrun -multidir``), or with a slow
1105 network, global communication can become a bottleneck and you can reduce it
1106 with ``gmx mdrun -gcom`` (note that this does affect the frequency of
1107 temperature and pressure coupling).
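As an example of the virtual-interaction-site setup mentioned above (the
input file name is a placeholder), prepare the topology with virtual sites
and then raise the time step in the ``.mdp`` file::

    gmx pdb2gmx -f protein.pdb -vsite h
    # then, in the .mdp file:
    #   dt          = 0.004
    #   constraints = h-bonds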
1109 Checking and improving performance
1110 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1112 * Look at the end of the ``md.log`` file to see the performance and the cycle
1113 counters and wall-clock time for different parts of the MD calculation. The
1114 PP/PME load ratio is also printed, with a warning when a lot of performance is
1115 lost due to imbalance.
1116 * Adjust the number of PME ranks and/or the cut-off and PME grid-spacing when
1117 there is a large PP/PME imbalance. Note that even with a small reported
1118 imbalance, the automated PME-tuning might have reduced the initial imbalance.
1119 You could still gain performance by changing the mdp parameters or increasing
1120 the number of PME ranks.
1121 * If the neighbor searching takes a lot of time, increase nstlist (with the
1122 Verlet cut-off scheme, this automatically adjusts the size of the neighbour
1123 list to do more non-bonded computation to keep energy drift constant).
1125 * If ``Comm. energies`` takes a lot of time (a note will be printed in the log
1126 file), increase nstcalcenergy or use ``mdrun -gcom``.
1127 * If all communication takes a lot of time, you might be running on too many
1128 cores, or you could try running combined MPI/OpenMP parallelization with 2
1129 or 4 OpenMP threads per MPI process.