1 # Getting good performance from `mdrun` #
The GROMACS build system and the `mdrun` tool have a lot of built-in
and configurable intelligence to detect your hardware and make pretty
effective use of that hardware. For a lot of casual and serious use of
`mdrun`, the automatic machinery works well enough. But to get the
most from your hardware and maximise the quality of your science, read on!
9 ## Hardware background information ##
11 Modern computer hardware is complex and heterogeneous, so we need to
12 discuss a little bit of background information and set up some
13 definitions. Experienced HPC users can skip this section.
core
: A hardware compute unit that actually executes instructions. There
17 is normally more than one core in a processor, often many more.
cache
: A special kind of memory local to core(s) that is much faster to
21 access than main memory, kind of like the top of a human's desk,
22 compared to their filing cabinet. There are often several layers
23 of caches associated with a core.
socket
: A group of cores that share some kind of locality, such as a shared
27 cache. This makes it more efficient to spread computational work
28 over cores within a socket than over cores in different
sockets. Modern nodes often have more than one socket.
node
: A group of sockets that share coarser-level locality, such as shared
33 access to the same memory without requiring any network
34 hardware. A normal laptop or desktop computer is a node. A node
35 is often the smallest amount of a large compute cluster that a
36 user can request to use.
thread
: A stream of instructions for a core to execute. There are
40 many different programming abstractions that create and manage
41 spreading computation over multiple threads, such as OpenMP,
42 pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
43 hardware can map more than one software thread to a core; on Intel
44 x86 processors this is called "hyper-threading." Normally,
45 `mdrun` will not benefit from such mapping.
affinity
: On some kinds of hardware, software threads can migrate
49 between cores to help automatically balance workload. Normally,
50 the performance of `mdrun` will degrade dramatically if this is
51 permitted, so `mdrun` will by default set the affinity of its
52 threads to their cores, unless the user or software environment
53 has already done so. Setting thread affinity is sometimes called
54 "pinning" threads to cores.
MPI
: The dominant multi-node parallelization scheme, which
58 provides a standardized language in which programs can be
59 written that work across more than one node.
rank
: In MPI, a rank is the smallest grouping of hardware
63 used in the multi-node parallelization scheme. That grouping can
64 be controlled by the user, and might correspond to a core, a
65 socket, a node, or a group of nodes. The best choice varies with
66 the hardware, software and compute task. Sometimes an MPI rank is
67 called an MPI process.
GPU
: A graphics processing unit, which is often faster
71 and more efficient than conventional processors for particular
72 kinds of compute workloads. A GPU is always associated with a
73 particular node, and often a particular socket within that node.
OpenMP
: A standardized technique supported by many compilers
77 to share a compute workload over multiple cores. Often
78 combined with MPI to achieve hybrid MPI/OpenMP parallelism.
CUDA
: A programming-language extension developed by Nvidia
82 for use in writing code for their GPUs.
SIMD
: Modern CPU cores have instructions that can execute
large numbers of floating-point instructions in a single
cycle.
90 ## GROMACS background information ##
92 The algorithms in `mdrun` and their implementations are most relevant
93 when choosing how to make good use of the hardware. For details,
94 see the Reference Manual. The most important of these are
96 Domain Decomposition (DD)
97 : This algorithm decomposes the (short-ranged) component of the
98 non-bonded interactions into domains that share spatial locality,
99 which permits efficient code to be written. Each domain handles
100 all of the particle-particle (PP) interactions for its members,
101 and is mapped to a single rank. Within a PP rank, OpenMP threads
102 can share the workload, or the work can be off-loaded to a
103 GPU. The PP rank also handles any bonded interactions for the
104 members of its domain. A GPU may perform work for more than one PP
105 rank, but it is normally most efficient to use a single PP rank
106 per GPU and for that rank to have thousands of atoms. When the
107 work of a PP rank is done on the CPU, mdrun will make extensive
108 use of the SIMD capabilities of the core. There are various
109 [command-line options](#controlling-the-domain-decomposition-algorithm)
110 to control the behaviour of the DD algorithm.
112 Particle-mesh Ewald (PME)
113 : This algorithm treats the long-ranged components of the non-bonded
114 interactions (Coulomb and/or Lennard-Jones). Either all, or just
a subset of ranks may participate in the work of computing the
long-ranged component (often inaccurately called simply the "PME"
117 component). Because the algorithm uses a 3D FFT that requires
118 global communication, its performance gets worse as more ranks
119 participate, which can mean it is fastest to use just a subset of
120 ranks (e.g. one-quarter to one-half of the ranks). If there are
121 separate PME ranks, then the remaining ranks handle the PP
122 work. Otherwise, all ranks do both PP and PME work.
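As a rough sketch of how these pieces combine (the numbers here are
only illustrative, and the options used are explained in the sections
that follow), a run can dedicate a subset of its ranks to the
long-ranged PME work while the remaining PP ranks each handle one
domain:

    # 16 thread-MPI ranks in total; 4 of them do only the
    # long-ranged (PME) work, the other 12 are PP ranks
    mdrun -ntmpi 16 -npme 4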
124 ## Running mdrun within a single node ##
126 `mdrun` can be configured and compiled in several different ways that
127 are efficient to use within a single node. The default configuration
128 using a suitable compiler will deploy a multi-level hybrid parallelism
129 that uses CUDA, OpenMP and the threading platform native to the
130 hardware. For programming convenience, in GROMACS, those native
131 threads are used to implement on a single node the same MPI scheme as
would be used between nodes, but much more efficiently; this is called
133 thread-MPI. From a user's perspective, real MPI and thread-MPI look
134 almost the same, and GROMACS refers to MPI ranks to mean either kind,
135 except where noted. A real external MPI can be used for `mdrun` within
136 a single node, but runs more slowly than the thread-MPI version.
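For example, the two command lines below request the same
decomposition on one node, namely four ranks with two OpenMP threads
each; the first uses the built-in thread-MPI, the second a real
external MPI library (the thread counts are illustrative and assume a
node with at least eight cores):

    mdrun -ntmpi 4 -ntomp 2
    mpirun -np 4 mdrun_mpi -ntomp 2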
138 By default, `mdrun` will inspect the hardware available at run time
139 and do its best to make fairly efficient use of the whole node. The
140 log file, stdout and stderr are used to print diagnostics that
141 inform the user about the choices made and possible consequences.
A number of command-line parameters are available to vary the default
behavior.
`-nt`
147 : The total number of threads to use. The default, 0, will start as
148 many threads as available cores. Whether the threads are
thread-MPI ranks, or OpenMP threads within such ranks depends on
other settings.
`-ntmpi`
153 : The total number of thread-MPI ranks to use. The default, 0,
will start one rank per GPU (if present), and otherwise one rank
per core.
`-ntomp`
158 : The total number of OpenMP threads per rank to start. The
159 default, 0, will start one thread on each available core.
160 Alternatively, mdrun will honour the appropriate system
161 environment variable (e.g. `OMP_NUM_THREADS`) if set.
`-npme`
: The total number of ranks to dedicate to the long-ranged
165 component of PME, if used. The default, -1, will dedicate ranks
166 only if the total number of threads is at least 12, and will use
167 around one-third of the ranks for the long-ranged component.
`-ntomp_pme`
: When using PME with separate PME ranks,
the total number of OpenMP threads per separate PME rank.
172 The default, 0, copies the value from `-ntomp`.
`-gpu_id`
: A string that specifies the ID numbers of the GPUs to be
176 used by corresponding PP ranks on this node. For example,
177 "0011" specifies that the lowest two PP ranks use GPU 0,
178 and the other two use GPU 1.
181 : Can be set to "auto," "on" or "off" to control whether
182 mdrun will attempt to set the affinity of threads to cores.
183 Defaults to "auto," which means that if mdrun detects that all the
184 cores on the node are being used for mdrun, then it should behave
185 like "on," and attempt to set the affinities (unless they are
186 already set by something else).
`-pinoffset`
: If `-pin on`, specifies the logical core number to
190 which mdrun should pin the first thread. When running more than
one instance of mdrun on a node, use this option to avoid
192 pinning threads from different mdrun instances to the same core.
`-pinstride`
: If `-pin on`, specifies the stride in logical core
196 numbers for the cores to which mdrun should pin its threads. When
197 running more than one instance of mdrun on a node, use this option
to avoid pinning threads from different mdrun instances to the
199 same core. Use the default, 0, to minimize the number of threads
200 per physical core - this lets mdrun manage the hardware-, OS- and
configuration-specific details of how to map logical cores to
physical cores.
`-ddorder`
205 : Can be set to "interleave," "pp_pme" or "cartesian."
206 Defaults to "interleave," which means that any separate PME ranks
207 will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
208 PME, ... etc. This generally makes the best use of the available
209 hardware. "pp_pme" maps all PP ranks first, then all PME
210 ranks. "cartesian" is a special-purpose mapping generally useful
211 only on special torus networks with accelerated global
212 communication for Cartesian communicators. Has no effect if there
213 are no separate PME ranks.
216 : Can be set to "auto", "cpu", "gpu", "cpu_gpu."
217 Defaults to "auto," which uses a compatible GPU if available.
218 Setting "cpu" requires that no GPU is used. Setting "gpu" requires
219 that a compatible GPU be available and will be used. Setting
220 "cpu_gpu" permits the CPU to execute a GPU-like code path, which
221 will run slowly on the CPU and should only be used for debugging.
223 ### Examples for mdrun on one node
mdrun
Starts mdrun using all the available resources. mdrun
227 will automatically choose a fairly efficient division
228 into thread-MPI ranks, OpenMP threads and assign work
229 to compatible GPUs. Details will vary with hardware
230 and the kind of simulation being run.
mdrun -nt 8
Starts mdrun using 8 threads, which might be thread-MPI
234 or OpenMP threads depending on hardware and the kind
235 of simulation being run.
237 mdrun -ntmpi 2 -ntomp 4
Starts mdrun using eight total threads, with two thread-MPI
ranks and four OpenMP threads per rank. You should only use
240 these options when seeking optimal performance, and
241 must take care that the ranks you create can have
242 all of their OpenMP threads run on the same socket.
243 The number of ranks must be a multiple of the number of
244 sockets, and the number of cores per node must be
245 a multiple of the number of threads per rank.
mdrun -gpu_id 12
Starts mdrun using GPUs with IDs 1 and 2 (e.g. because
249 GPU 0 is dedicated to running a display). This requires
250 two thread-MPI ranks, and will split the available
251 CPU cores between them using OpenMP threads.
253 mdrun -ntmpi 4 -gpu_id "1122"
254 Starts mdrun using four thread-MPI ranks, and maps them
255 to GPUs with IDs 1 and 2. The CPU cores available will
256 be split evenly between the ranks using OpenMP threads.
258 mdrun -nt 6 -pin on -pinoffset 0
259 mdrun -nt 6 -pin on -pinoffset 3
260 Starts two mdrun processes, each with six total threads.
261 Threads will have their affinities set to particular
logical cores, beginning from logical core 0 or 3,
respectively. The above would work
264 well on an Intel CPU with six physical cores and
265 hyper-threading enabled. Use this kind of setup only
266 if restricting mdrun to a subset of cores to share a
267 node with other processes.
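If you need to control the pinning layout more explicitly,
`-pinstride` can be combined with `-pinoffset`. For example, something
like the following might give each of two mdrun instances a contiguous
block of hardware threads on a CPU with six physical cores and
hyper-threading; this is a sketch only, the offsets assume twelve
logical cores and should be adapted to your machine.

    mdrun -nt 6 -pin on -pinoffset 0 -pinstride 1
    mdrun -nt 6 -pin on -pinoffset 6 -pinstride 1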
mpirun -np 2 mdrun_mpi
When using an `mdrun_mpi` compiled with external MPI,
271 this will start two ranks and as many OpenMP threads
272 as the hardware and MPI setup will permit. If the
273 MPI setup is restricted to one node, then the resulting
274 `mdrun_mpi` will be local to that node.
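A couple of further single-node variations can also be useful. These
are sketches with illustrative numbers, not recommendations.

    mdrun -nb cpu
Directs mdrun not to use a GPU for the short-ranged non-bonded work
even when a compatible one is present, which can be useful for
comparison or troubleshooting.

    mdrun -ntmpi 6 -npme 2 -ntomp 2
Starts six thread-MPI ranks, dedicates two of them to the long-ranged
PME component, and uses two OpenMP threads per rank; whether this
beats the automatic choice depends on the hardware.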
276 ## Running mdrun on more than one node ##
278 This requires configuring GROMACS to build with an external MPI
279 library. By default, this mdrun executable will be named
280 `mdrun_mpi`. All of the considerations for running single-node
281 mdrun still apply, except that `-ntmpi` and `-nt` cause a fatal
error, and instead the number of ranks is controlled by the
MPI environment.
284 Settings such as `-npme` are much more important when
285 using multiple nodes. Configuring the MPI environment to
286 produce one rank per core is generally good until one
287 approaches the strong-scaling limit. At that point, using
288 OpenMP to spread the work of an MPI rank over more than one
289 core is needed to continue to improve absolute performance.
290 The location of the scaling limit depends on the processor,
291 presence of GPUs, network, and simulation algorithm, but
it is worth measuring at around 200 atoms/core if you
293 need maximum throughput.
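For example, taking that figure at face value, a simulation of
300,000 atoms would be worth benchmarking on up to roughly 1500 cores
before you should expect to hit the strong-scaling limit; treat such
numbers only as a starting point for your own measurements.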
There are further command-line parameters that are relevant in these
cases.
`-tunepme`
299 : If "on," will optimize various aspects of the PME
300 and DD algorithms, shifting load between ranks and/or
301 GPUs to maximize throughput
`-gcom`
: Can be used to limit global communication to every n steps. This can
305 improve performance for highly parallel simulations where this global
306 communication step becomes the bottleneck. For a global thermostat
307 and/or barostat, the temperature and/or pressure will also only be
308 updated every `-gcom` steps. By default, it is set to the
309 minimum of `nstcalcenergy` and `nstlist`.
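As a sketch (the rank count and interval here are arbitrary and should
be chosen by measurement), a highly parallel run might reduce the
frequency of global communication like this:

    mpirun -np 1024 mdrun_mpi -gcom 100

Remember that, as noted above, this also reduces how often a global
thermostat or barostat sees updated temperature or pressure.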
311 The [gmx tune_pme] utility is available to search a wider
312 range of parameter space, including making safe
313 modifications to the [.tpr] file, and varying `-npme`.
314 It is only aware of the number of ranks created by
315 the MPI environment, and does not explicitly manage
316 any aspect of OpenMP during the optimization.
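For example, a tuning run might look something like the following; the
file name and repeat count are placeholders, and `gmx tune_pme -h`
lists the full set of options:

    gmx tune_pme -np 64 -s topol.tpr -r 2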
### Examples for mdrun on more than one node
The examples and explanations for single-node mdrun are
321 still relevant, but `-nt` is no longer the way
322 to choose the number of MPI ranks.
324 mpirun -np 16 mdrun_mpi
325 Starts `mdrun_mpi` with 16 ranks, which are mapped to
326 the hardware by the MPI library, e.g. as specified
327 in an MPI hostfile. The available cores will be
328 automatically split among ranks using OpenMP threads,
329 depending on the hardware and any environment settings
330 such as `OMP_NUM_THREADS`.
332 mpirun -np 16 mdrun_mpi -npme 5
Starts `mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME
component.
337 mpirun -np 11 mdrun_mpi -ntomp 2 -npme 6 -ntomp_pme 1
338 Starts `mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads
each.
mpirun -np 4 mdrun_mpi -ntomp 6 -gpu_id 00
345 Starts `mdrun_mpi` on a machine with two nodes, using
346 four total ranks, each rank with six OpenMP threads,
347 and both ranks on a node sharing GPU with ID 0.
mpirun -np 8 mdrun_mpi -ntomp 3 -gpu_id 0000
350 Starts `mdrun_mpi` on a machine with two nodes, using
351 eight total ranks, each rank with three OpenMP threads,
352 and all four ranks on a node sharing GPU with ID 0.
353 This may or may not be faster than the previous setup
354 on the same hardware.
356 mpirun -np 20 mdrun_mpi -ntomp 4 -gpu_id 0
Starts `mdrun_mpi` with 20 ranks, each rank with four
OpenMP threads, and maps the ranks on each node to the
GPU with ID 0. This setup is likely to be suitable when
there are ten nodes, each with one GPU, and each node
has two sockets.
362 mpirun -np 20 mdrun_mpi -gpu_id 00
Starts `mdrun_mpi` with 20 ranks, and splits the available
CPU cores evenly across the ranks using OpenMP threads.
This setup is likely to be suitable when there are ten
nodes, each with one GPU, and each node has two sockets.
368 mpirun -np 20 mdrun_mpi -gpu_id 01
369 Starts `mdrun_mpi` with 20 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
GPUs.
373 mpirun -np 40 mdrun_mpi -gpu_id 0011
374 Starts `mdrun_mpi` with 40 ranks. This setup is likely
375 to be suitable when there are ten nodes, each with two
376 GPUs, and OpenMP performs poorly on the hardware.
378 ## Controlling the domain decomposition algorithm
380 This section lists all the options that affect how the domain
decomposition algorithm decomposes the workload to the available
parallel hardware.
`-rdd`
385 : Can be used to set the required maximum distance for inter
386 charge-group bonded interactions. Communication for two-body
387 bonded interactions below the non-bonded cut-off distance always
388 comes for free with the non-bonded communication. Atoms beyond
389 the non-bonded cut-off are only communicated when they have
390 missing bonded interactions; this means that the extra cost is
minor and nearly independent of the value of `-rdd`. With dynamic
392 load balancing, option `-rdd` also sets the lower limit for the
393 domain decomposition cell sizes. By default `-rdd` is determined
394 by [mdrun] based on the initial coordinates. The chosen value will
395 be a balance between interaction range and communication cost.
`-ddcheck`
: On by default. When inter charge-group bonded interactions are
399 beyond the bonded cut-off distance, [mdrun] terminates with an
400 error message. For pair interactions and tabulated bonds that do
not generate exclusions, this check can be turned off with the
option `-noddcheck`.
`-rcon`
: When constraints are present, option `-rcon` influences
406 the cell size limit as well.
407 Atoms connected by NC constraints, where NC is the LINCS order
plus 1, should not be beyond the smallest cell size. An error
409 message is generated when this happens, and the user should change
410 the decomposition or decrease the LINCS order and increase the
411 number of LINCS iterations. By default [mdrun] estimates the
412 minimum cell size required for P-LINCS in a conservative
413 fashion. For high parallelization, it can be useful to set the
414 distance required for P-LINCS with `-rcon`.
`-dds`
: Sets the minimum allowed x, y and/or z scaling of the cells with
418 dynamic load balancing. [mdrun] will ensure that the cells can
419 scale down by at least this factor. This option is used for the
420 automated spatial decomposition (when not using `-dd`) as well as
421 for determining the number of grid pulses, which in turn sets the
422 minimum allowed cell size. Under certain circumstances the value
423 of `-dds` might need to be adjusted to account for high or low
424 spatial inhomogeneity of the system.
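As an illustration only (the distances are in nm and all values here
are arbitrary, not recommendations), several of these options can be
combined on one command line:

    mpirun -np 128 mdrun_mpi -rdd 1.4 -rcon 1.0 -dds 0.9

Here `-rdd` and `-rcon` set explicit distances for bonded interactions
and P-LINCS respectively, and `-dds` changes how far the cells may
scale down under dynamic load balancing.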
426 ## Finding out how to run mdrun better
428 TODO In future patch: red flags in log files, how to interpret wallcycle output
430 TODO In future patch: import wiki page stuff on performance checklist; maybe here,
433 ## Running mdrun with GPUs
435 TODO In future patch: any tips not covered above