Getting good performance from mdrun
===================================

The GROMACS build system and the :ref:`gmx mdrun` tool have a lot of built-in
and configurable intelligence to detect your hardware and make pretty
effective use of it. For a lot of casual and serious use of
:ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
most from your hardware and maximise your scientific quality, read on!

Hardware background information
-------------------------------

Modern computer hardware is complex and heterogeneous, so we need to
discuss a little bit of background information and set up some
definitions. Experienced HPC users can skip this section.

.. glossary::

    core
        A hardware compute unit that actually executes
        instructions. There is normally more than one core in a
        processor, often many more.

    cache
        A special kind of memory local to core(s) that is much faster
        to access than main memory, kind of like the top of a human's
        desk, compared to their filing cabinet. There are often
        several layers of caches associated with a core.

    socket
        A group of cores that share some kind of locality, such as a
        shared cache. This makes it more efficient to spread
        computational work over cores within a socket than over cores
        in different sockets. Modern processors often have more than
        one socket.

    node
        A group of sockets that share coarser-level locality, such as
        shared access to the same memory without requiring any network
        hardware. A normal laptop or desktop computer is a node. A
        node is often the smallest amount of a large compute cluster
        that a user can request to use.

    thread
        A stream of instructions for a core to execute. There are many
        different programming abstractions that create and manage
        spreading computation over multiple threads, such as OpenMP,
        pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
        hardware can map more than one software thread to a core; on
        Intel x86 processors this is called "hyper-threading."
        Normally, :ref:`gmx mdrun` will not benefit from such mapping.

    affinity
        On some kinds of hardware, software threads can migrate
        between cores to help automatically balance
        workload. Normally, the performance of :ref:`gmx mdrun` will degrade
        dramatically if this is permitted, so :ref:`gmx mdrun` will by default
        set the affinity of its threads to their cores, unless the
        user or software environment has already done so. Setting
        thread affinity is sometimes called "pinning" threads to
        cores.

    MPI
        The dominant multi-node parallelization scheme, which provides
        a standardized language in which programs can be written that
        work across more than one node.

    rank
        In MPI, a rank is the smallest grouping of hardware used in
        the multi-node parallelization scheme. That grouping can be
        controlled by the user, and might correspond to a core, a
        socket, a node, or a group of nodes. The best choice varies
        with the hardware, software and compute task. Sometimes an MPI
        rank is called an MPI process.

    GPU
        A graphics processing unit, which is often faster and more
        efficient than conventional processors for particular kinds of
        compute workloads. A GPU is always associated with a
        particular node, and often a particular socket within that
        node.

    OpenMP
        A standardized technique supported by many compilers to share
        a compute workload over multiple cores. Often combined with
        MPI to achieve hybrid MPI/OpenMP parallelism.

    CUDA
        A programming-language extension developed by Nvidia
        for use in writing code for their GPUs.

    SIMD
        Modern CPU cores have instructions that can execute large
        numbers of floating-point operations in a single cycle.

GROMACS background information
------------------------------

The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
when choosing how to make good use of the hardware. For details,
see the Reference Manual. The most important of these are

Domain Decomposition
    The domain decomposition (DD) algorithm decomposes the
    (short-ranged) component of the non-bonded interactions into
    domains that share spatial locality, which permits efficient
    code to be written. Each domain handles all of the
    particle-particle (PP) interactions for its members, and is
    mapped to a single rank. Within a PP rank, OpenMP threads can
    share the workload, or the work can be off-loaded to a
    GPU. The PP rank also handles any bonded interactions for the
    members of its domain. A GPU may perform work for more than
    one PP rank, but it is normally most efficient to use a single
    PP rank per GPU and for that rank to have thousands of
    particles. When the work of a PP rank is done on the CPU, mdrun
    will make extensive use of the SIMD capabilities of the
    core. There are various :ref:`command-line options
    <controlling-the-domain-decomposition-algorithm>` to control
    the behaviour of the DD algorithm.

Particle-mesh Ewald
    The particle-mesh Ewald (PME) algorithm treats the long-ranged
    components of the non-bonded interactions (Coulomb and/or
    Lennard-Jones). Either all, or just a subset of ranks may
    participate in the work for computing the long-ranged component
    (often inaccurately called simply the "PME"
    component). Because the algorithm uses a 3D FFT that requires
    global communication, its performance gets worse as more ranks
    participate, which can mean it is fastest to use just a subset
    of ranks (e.g. one-quarter to one-half of the ranks). If
    there are separate PME ranks, then the remaining ranks handle
    the PP work. Otherwise, all ranks do both PP and PME work.

Running mdrun within a single node
----------------------------------

:ref:`gmx mdrun` can be configured and compiled in several different ways that
are efficient to use within a single :term:`node`. The default configuration
using a suitable compiler will deploy a multi-level hybrid parallelism
that uses CUDA, OpenMP and the threading platform native to the
hardware. For programming convenience, in GROMACS, those native
threads are used to implement on a single node the same MPI scheme as
would be used between nodes, but much more efficiently; this is called
thread-MPI. From a user's perspective, real MPI and thread-MPI look
almost the same, and GROMACS refers to MPI ranks to mean either kind,
except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
a single node, but runs more slowly than the thread-MPI version.
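
For illustration, here is a minimal sketch of two roughly equivalent
single-node launches, assuming a hypothetical 8-core workstation and an
external-MPI build named ``mdrun_mpi``::

    # thread-MPI build: 4 thread-MPI ranks with 2 OpenMP threads each
    mdrun -ntmpi 4 -ntomp 2

    # external MPI build: same decomposition, usually somewhat slower on one node
    mpirun -np 4 mdrun_mpi -ntomp 2
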
By default, :ref:`gmx mdrun` will inspect the hardware available at run time
and do its best to make fairly efficient use of the whole node. The
log file, stdout and stderr are used to print diagnostics that
inform the user about the choices made and possible consequences.

A number of command-line parameters are available to vary the default
behaviour.

``-nt``
    The total number of threads to use. The default, 0, will start as
    many threads as available cores. Whether the threads are
    thread-MPI ranks, or OpenMP threads within such ranks, depends on
    other settings.

``-ntmpi``
    The total number of thread-MPI ranks to use. The default, 0,
    will start one rank per GPU (if present), and otherwise one rank
    per core.

``-ntomp``
    The total number of OpenMP threads per rank to start. The
    default, 0, will start one thread on each available core.
    Alternatively, mdrun will honour the appropriate system
    environment variable (e.g. ``OMP_NUM_THREADS``) if set; see the
    sketch after this list.

``-npme``
    The total number of ranks to dedicate to the long-ranged
    component of PME, if used. The default, -1, will dedicate ranks
    only if the total number of threads is at least 12, and will use
    around one-third of the ranks for the long-ranged component.

``-ntomp_pme``
    When using PME with separate PME ranks,
    the total number of OpenMP threads per separate PME rank.
    The default, 0, copies the value from ``-ntomp``.

``-gpu_id``
    A string that specifies the ID numbers of the GPUs to be
    used by corresponding PP ranks on this node. For example,
    "0011" specifies that the lowest two PP ranks use GPU 0,
    and the other two use GPU 1.

``-pin``
    Can be set to "auto," "on" or "off" to control whether
    mdrun will attempt to set the affinity of threads to cores.
    Defaults to "auto," which means that if mdrun detects that all the
    cores on the node are being used for mdrun, then it should behave
    like "on," and attempt to set the affinities (unless they are
    already set by something else).

``-pinoffset``
    If ``-pin on``, specifies the logical core number to
    which mdrun should pin the first thread. When running more than
    one instance of mdrun on a node, use this option to avoid
    pinning threads from different mdrun instances to the same core.

``-pinstride``
    If ``-pin on``, specifies the stride in logical core
    numbers for the cores to which mdrun should pin its threads. When
    running more than one instance of mdrun on a node, use this option
    to avoid pinning threads from different mdrun instances to the
    same core. Use the default, 0, to minimize the number of threads
    per physical core - this lets mdrun manage the hardware-, OS- and
    configuration-specific details of how to map logical cores to
    physical cores.

``-ddorder``
    Can be set to "interleave," "pp_pme" or "cartesian."
    Defaults to "interleave," which means that any separate PME ranks
    will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
    PME, ... etc. This generally makes the best use of the available
    hardware. "pp_pme" maps all PP ranks first, then all PME
    ranks. "cartesian" is a special-purpose mapping generally useful
    only on special torus networks with accelerated global
    communication for Cartesian communicators. Has no effect if there
    are no separate PME ranks.

``-nb``
    Can be set to "auto", "cpu", "gpu" or "cpu_gpu".
    Defaults to "auto," which uses a compatible GPU if available.
    Setting "cpu" requires that no GPU is used. Setting "gpu" requires
    that a compatible GPU be available and will be used. Setting
    "cpu_gpu" permits the CPU to execute a GPU-like code path, which
    will run slowly on the CPU and should only be used for debugging.

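
As a brief illustration of the environment-variable behaviour mentioned
for ``-ntomp``, the following sketch (assuming a bash-like shell) starts
two thread-MPI ranks and lets the OpenMP runtime pick up the thread
count from the environment::

    # OMP_NUM_THREADS is honoured when -ntomp is not given explicitly
    export OMP_NUM_THREADS=4
    mdrun -ntmpi 2
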
Examples for mdrun on one node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

    mdrun

Starts mdrun using all the available resources. mdrun
will automatically choose a fairly efficient division
into thread-MPI ranks and OpenMP threads, and assign work
to compatible GPUs. Details will vary with hardware
and the kind of simulation being run.

::

    mdrun -nt 8

Starts mdrun using 8 threads, which might be thread-MPI
or OpenMP threads depending on hardware and the kind
of simulation being run.

::

    mdrun -ntmpi 2 -ntomp 4

Starts mdrun using eight total threads, with two thread-MPI
ranks and four OpenMP threads per rank. You should only use
these options when seeking optimal performance, and
must take care that the ranks you create can have
all of their OpenMP threads run on the same socket.
The number of ranks must be a multiple of the number of
sockets, and the number of cores per node must be
a multiple of the number of threads per rank.

::

    mdrun -gpu_id 12

Starts mdrun using GPUs with IDs 1 and 2 (e.g. because
GPU 0 is dedicated to running a display). This requires
two thread-MPI ranks, and will split the available
CPU cores between them using OpenMP threads.

::

    mdrun -ntmpi 4 -gpu_id "1122"

Starts mdrun using four thread-MPI ranks, and maps them
to GPUs with IDs 1 and 2. The CPU cores available will
be split evenly between the ranks using OpenMP threads.

::

    mdrun -nt 6 -pin on -pinoffset 0
    mdrun -nt 6 -pin on -pinoffset 3

Starts two mdrun processes, each with six total threads.
Threads will have their affinities set to particular
logical cores, beginning from logical core 0 or 3,
respectively. The above would work
well on an Intel CPU with six physical cores and
hyper-threading enabled. Use this kind of setup only
when restricting mdrun to a subset of cores in order to
share a node with other processes.

::

    mpirun -np 2 mdrun_mpi

When using an :ref:`gmx mdrun` compiled with external MPI,
this will start two ranks and as many OpenMP threads
as the hardware and MPI setup will permit. If the
MPI setup is restricted to one node, then the resulting
:ref:`gmx mdrun` will be local to that node.

Running mdrun on more than one node
-----------------------------------

This requires configuring GROMACS to build with an external MPI
library. By default, this mdrun executable will be named
:ref:`mdrun_mpi`. All of the considerations for running single-node
mdrun still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the
MPI environment.

Settings such as ``-npme`` are much more important when
using multiple nodes. Configuring the MPI environment to
produce one rank per core is generally good until one
approaches the strong-scaling limit. At that point, using
OpenMP to spread the work of an MPI rank over more than one
core is needed to continue to improve absolute performance.
The location of the scaling limit depends on the processor,
presence of GPUs, network, and simulation algorithm, but
it is worth measuring at around 200 particles/core if you
need maximum throughput.
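
As a rough worked example (the numbers are illustrative, not measured):
for a hypothetical 96,000-particle system, the guideline of roughly 200
particles per core puts the strong-scaling limit near 96,000 / 200 = 480
cores, so on 24-core nodes a benchmark series around 20 nodes would be a
sensible place to start, e.g.::

    # illustrative only: 480 single-core ranks, near the assumed scaling limit
    mpirun -np 480 mdrun_mpi -ntomp 1
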
There are further command-line parameters that are relevant in these
cases.

``-tunepme``
    Defaults to "on." If "on," will optimize various aspects of the
    PME and DD algorithms, shifting load between ranks and/or GPUs to
    maximize throughput.

``-dlb``
    Can be set to "auto," "no," or "yes."
    Defaults to "auto." Doing Dynamic Load Balancing between MPI ranks
    is needed to maximize performance. This is particularly important
    for molecular systems with heterogeneous particle or interaction
    density. When a certain threshold for performance loss is
    exceeded, DLB activates and shifts particles between ranks to improve
    the load balance.

``-gcom``
    During the simulation, :ref:`gmx mdrun` must communicate between all ranks to
    compute quantities such as kinetic energy. By default, this
    happens whenever plausible, and is influenced by a number of
    .mdp options. The period between communication phases
    must be a multiple of :mdp:`nstlist`, and defaults to
    the minimum of :mdp:`nstcalcenergy` and :mdp:`nstlist`.
    ``mdrun -gcom`` sets the number of steps that must elapse between
    such communication phases, which can improve performance when
    running on a lot of nodes. Note that this means that e.g.
    temperature coupling algorithms will
    effectively remain at constant energy until the next global
    communication phase.

Note that ``-tunepme`` has more effect when there is more than one
:term:`node`, because the cost of communication for the PP and PME
ranks differs. It still shifts load between PP and PME ranks, but does
not change the number of separate PME ranks in use.

Note also that ``-dlb`` and ``-tunepme`` can interfere with each other, so
if you experience performance variation that could result from this,
you may wish to tune PME separately, and run the result with ``mdrun
-notunepme -dlb yes``.

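For example (a hedged sketch; the rank counts are arbitrary), once a
good PME setup has been found you might fix the PME rank count, disable
PME tuning and force dynamic load balancing on::

    # illustrative: fixed PME rank count, no PME tuning, DLB forced on
    mpirun -np 32 mdrun_mpi -npme 8 -notunepme -dlb yes
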
The :ref:`gmx tune_pme` utility is available to search a wider
range of parameter space, including making safe
modifications to the :ref:`tpr` file, and varying ``-npme``.
It is only aware of the number of ranks created by
the MPI environment, and does not explicitly manage
any aspect of OpenMP during the optimization.

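A minimal sketch of such a search (assuming an external-MPI build and an
input file named ``topol.tpr``; :ref:`gmx tune_pme` launches the
benchmark runs itself)::

    # illustrative: benchmark different numbers of separate PME ranks for a 64-rank run
    gmx tune_pme -np 64 -s topol.tpr
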
Examples for mdrun on more than one node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The examples and explanations for single-node mdrun are
still relevant, but ``-nt`` is no longer the way
to choose the number of MPI ranks.

::

    mpirun -np 16 mdrun_mpi

Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
the hardware by the MPI library, e.g. as specified
in an MPI hostfile. The available cores will be
automatically split among ranks using OpenMP threads,
depending on the hardware and any environment settings
such as ``OMP_NUM_THREADS``.

::

    mpirun -np 16 mdrun_mpi -npme 5

Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME
component.

::

    mpirun -np 11 mdrun_mpi -ntomp 2 -npme 6 -ntomp_pme 1

Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads
each.

::

    mpirun -np 4 mdrun_mpi -ntomp 6 -gpu_id 00

Starts :ref:`mdrun_mpi` on a machine with two nodes, using
four total ranks, each rank with six OpenMP threads,
and both ranks on a node sharing the GPU with ID 0.

::

    mpirun -np 8 mdrun_mpi -ntomp 3 -gpu_id 0000

Starts :ref:`mdrun_mpi` on a machine with two nodes, using
eight total ranks, each rank with three OpenMP threads,
and all four ranks on a node sharing the GPU with ID 0.
This may or may not be faster than the previous setup
on the same hardware.

::

    mpirun -np 20 mdrun_mpi -ntomp 4 -gpu_id 0

Starts :ref:`mdrun_mpi` with 20 ranks, each with four OpenMP
threads, and assigns the CPU cores evenly across the ranks. This setup
is likely to be suitable when there are ten nodes, each with one GPU,
and each node has two sockets.

::

    mpirun -np 20 mdrun_mpi -gpu_id 00

Starts :ref:`mdrun_mpi` with 20 ranks, and assigns the CPU cores evenly
across ranks, one OpenMP thread each. This setup is likely to be
suitable when there are ten nodes, each with one GPU, and each node
has two sockets.

::

    mpirun -np 20 mdrun_mpi -gpu_id 01

Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
GPUs.

::

    mpirun -np 40 mdrun_mpi -gpu_id 0011

Starts :ref:`mdrun_mpi` with 40 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
GPUs, and OpenMP performs poorly on the hardware.

Controlling the domain decomposition algorithm
----------------------------------------------

This section lists all the options that affect how the domain
decomposition algorithm decomposes the workload over the available
parallel hardware.

``-rdd``
    Can be used to set the required maximum distance for inter
    charge-group bonded interactions. Communication for two-body
    bonded interactions below the non-bonded cut-off distance always
    comes for free with the non-bonded communication. Particles beyond
    the non-bonded cut-off are only communicated when they have
    missing bonded interactions; this means that the extra cost is
    minor and nearly independent of the value of ``-rdd``. With dynamic
    load balancing, option ``-rdd`` also sets the lower limit for the
    domain decomposition cell sizes. By default ``-rdd`` is determined
    by :ref:`gmx mdrun` based on the initial coordinates. The chosen value will
    be a balance between interaction range and communication cost.

``-ddcheck``
    On by default. When inter charge-group bonded interactions are
    beyond the bonded cut-off distance, :ref:`gmx mdrun` terminates with an
    error message. For pair interactions and tabulated bonds that do
    not generate exclusions, this check can be turned off with the
    option ``-noddcheck``.

``-rcon``
    When constraints are present, option ``-rcon`` influences
    the cell size limit as well.
    Particles connected by NC constraints, where NC is the LINCS order
    plus 1, should not be beyond the smallest cell size. An error
    message is generated when this happens, and the user should change
    the decomposition or decrease the LINCS order and increase the
    number of LINCS iterations. By default :ref:`gmx mdrun` estimates the
    minimum cell size required for P-LINCS in a conservative
    fashion. For high parallelization, it can be useful to set the
    distance required for P-LINCS with ``-rcon``.

``-dds``
    Sets the minimum allowed x, y and/or z scaling of the cells with
    dynamic load balancing. :ref:`gmx mdrun` will ensure that the cells can
    scale down by at least this factor. This option is used for the
    automated spatial decomposition (when not using ``-dd``) as well as
    for determining the number of grid pulses, which in turn sets the
    minimum allowed cell size. Under certain circumstances the value
    of ``-dds`` might need to be adjusted to account for high or low
    spatial inhomogeneity of the system.

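
As a hedged illustration (the distances are arbitrary and
system-dependent), these options can be combined on the command line
when the automatic estimates prove too restrictive at high
parallelization::

    # illustrative: require a 1.4 nm bonded communication distance and a
    # 0.8 nm P-LINCS distance for the domain decomposition
    mpirun -np 64 mdrun_mpi -rdd 1.4 -rcon 0.8
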
Finding out how to run mdrun better
-----------------------------------

TODO In future patch: red flags in log files, how to interpret wallcycle output

TODO In future patch: import wiki page stuff on performance checklist; maybe here,

Running mdrun with GPUs
-----------------------

TODO In future patch: any tips not covered above

Running the OpenCL version of mdrun
-----------------------------------

The current version works with GCN-based AMD GPUs and NVIDIA CUDA
GPUs. Make sure that you have the latest drivers installed. The
minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
also the :ref:`known limitations <opencl-known-limitations>`.

The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
used to select CUDA devices, or to define a mapping of GPUs to PP
ranks, is used for OpenCL devices.
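
For example (a sketch assuming a bash-like shell on a node where the
device with ID 1 should be used), the two forms below are equivalent
ways of selecting that device for a single PP rank::

    # select OpenCL device 1 via the command line, or via the environment
    mdrun -ntmpi 1 -gpu_id 1
    GMX_GPU_ID=1 mdrun -ntmpi 1
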
The following devices are known to work correctly:

- AMD: FirePro W5100, HD 7950, FirePro W9100, Radeon R7 240,
  Radeon R7 M260, Radeon R9 290
- NVIDIA: GeForce GTX 660M, GeForce GTX 660Ti, GeForce GTX 750Ti,
  GeForce GTX 780, GTX Titan

Building an OpenCL program can take a significant amount of
time. NVIDIA implements a mechanism to cache the result of the
build. As a consequence, only the first run will take longer (because
of the kernel builds), and the following runs will be very fast. AMD
drivers, on the other hand, implement no caching, and the initial phase
of running an OpenCL program can be very slow. This is not normally a
problem for long production MD, but you might prefer to do some kinds
of work on just the CPU (e.g. see ``-nb`` above).
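
For instance, for a short test run on hardware with slow OpenCL kernel
compilation, forcing the CPU code path avoids the kernel build entirely
(illustrative; ``-nb`` is described above and ``-nsteps`` overrides the
number of steps in the :ref:`tpr` file)::

    # quick CPU-only test run that skips the OpenCL kernel build
    mdrun -nb cpu -nsteps 1000
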
Some other :ref:`OpenCL management <opencl-management>` environment
variables may be of interest to developers.

.. _opencl-known-limitations:

Known limitations of the OpenCL support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Limitations in the current OpenCL support of interest to |Gromacs| users:

- Using more than one GPU on a node is not supported
- Sharing a GPU between multiple PP ranks is not supported
- No Intel devices (CPUs, GPUs or Xeon Phi) are supported
- Due to blocking behavior of clEnqueue functions in the NVIDIA driver, there is
  almost no performance gain when using NVIDIA GPUs. A bug report has already
  been filed about this issue. A possible workaround would be to have a
  separate thread for issuing GPU commands. However this hasn't been implemented
  yet.

Limitations of interest to |Gromacs| developers:

- The current implementation is not compatible with OpenCL devices that are
  not using warp/wavefronts or for which the warp/wavefront size is not a
  multiple of 32
- Some Ewald tabulated kernels are known to produce incorrect results, so
  (correct) analytical kernels are used instead.