|Gromacs| programs may be influenced by the use of
environment variables. First of all, the variables set in
the ``GMXRC`` file are essential for running and
compiling |Gromacs|. Some other useful environment variables are
listed in the following sections. Most environment variables function
by being set in your shell to any non-NULL value. Specific
requirements are described below if other values need to be set. You
should consult the documentation for your shell for instructions on
how to set environment variables in the current shell, or in configuration
files for future shells. Note that requirements for exporting
environment variables to jobs run under batch control systems vary and
you should consult your local documentation for details.
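
As a concrete illustration (bash/zsh syntax shown; csh and batch systems
differ, and the ``gmx`` invocation in the comment requires a |Gromacs|
install), a variable can be enabled for a whole session or scoped to a
single command:

```shell
# Enable a boolean-style variable for the current shell session;
# any non-NULL value activates it.
export GMX_NO_QUOTES=1

# Scope a variable to a single command instead of the whole session
# (illustrative only; requires a GROMACS installation):
#   GMX_MAXBACKUP=25 gmx mdrun -deffnm topol

# Verify what a launched tool will see:
printenv GMX_NO_QUOTES
```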
Output Control
--------------

``GMX_CONSTRAINTVIR``
    print constraint virial and force virial energy terms.

``GMX_MAXBACKUP``
    |Gromacs| automatically backs up old
    copies of files when trying to write a new file of the same
    name, and this variable controls the maximum number of
    backups that will be made, default 99. If set to 0, it fails to
    run if any output file already exists, and if set to -1, it
    overwrites any output file without making a backup.

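
The three documented modes map onto shell settings like the following
sketch (values other than the active ``export`` are shown as comments):

```shell
# Keep at most 25 numbered backups (e.g. #topol.log.1#, #topol.log.2# ...).
export GMX_MAXBACKUP=25
# export GMX_MAXBACKUP=0   # refuse to run if any output file already exists
# export GMX_MAXBACKUP=-1  # overwrite existing output files, no backups
printenv GMX_MAXBACKUP
```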
``GMX_NO_QUOTES``
    if this is explicitly set, no cool quotes
    will be printed at the end of a program.

``GMX_SUPPRESS_DUMP``
    prevent dumping of step files during
    (for example) blowing up during failure of constraint
    algorithms.

``GMX_TPI_DUMP``
    dump all configurations to a :ref:`pdb`
    file that have an interaction energy less than the value set
    in this environment variable.

``GMX_VIEW_XPM``, ``GMX_VIEW_XVG``, ``GMX_VIEW_EPS`` and ``GMX_VIEW_PDB``
    commands used to
    automatically view :ref:`xpm`, :ref:`xvg`, :ref:`eps`
    and :ref:`pdb` file types, respectively; they default to ``xv``, ``xmgrace``,
    ``ghostview`` and ``rasmol``. Set to empty to disable
    automatic viewing of a particular file type. The command will
    be forked off and run in the background at the same priority
    as the |Gromacs| tool (which might not be what you want).
    Be careful not to use a command which blocks the terminal
    (e.g. ``vi``), since multiple instances might be run.

``GMX_VIRIAL_TEMPERATURE``
    print virial temperature energy term.

``GMX_LOG_BUFFER``
    the size of the buffer for file I/O. When set
    to 0, all file I/O will be unbuffered and therefore very slow.
    This can be handy for debugging purposes, because it ensures
    that all files are always totally up-to-date.

``GMX_LOGO_COLOR``
    set display color for logo in :ref:`gmx view`.

``GMX_PRINT_LONGFORMAT``
    use long float format when printing
    decimal values.

``GMX_COMPELDUMP``
    applies to computational electrophysiology setups
    only (see reference manual). The initial structure gets dumped to a
    :ref:`pdb` file, which allows checking whether multimeric channels have
    the correct PBC representation.

``GMX_TRAJECTORY_IO_VERBOSITY``
    Defaults to 1, which prints frame count e.g. when reading trajectory
    files. Set to 0 for quiet operation.

``GMX_ENABLE_GPU_TIMING``
    Enables GPU timings in the log file for CUDA. Note that CUDA timings
    are incorrect with multiple streams, as happens with domain
    decomposition or with both non-bondeds and PME on the GPU (this is
    also the main reason why they are not turned on by default).

``GMX_DISABLE_GPU_TIMING``
    Disables GPU timings in the log file for OpenCL.

Debugging
---------

``GMX_PRINT_DEBUG_LINES``
    when set, print debugging info on line numbers.

``GMX_DD_NST_DUMP``
    number of steps that elapse between dumping
    the current DD to a PDB file (default 0). This only takes effect
    during domain decomposition, so it should typically be
    0 (never), 1 (every DD phase) or a multiple of :mdp:`nstlist`.

``GMX_DD_NST_DUMP_GRID``
    number of steps that elapse between dumping
    the current DD grid to a PDB file (default 0). This only takes effect
    during domain decomposition, so it should typically be
    0 (never), 1 (every DD phase) or a multiple of :mdp:`nstlist`.

``GMX_DD_DEBUG``
    general debugging trigger for every domain
    decomposition (default 0, meaning off). Currently only checks
    global-local atom index mapping for consistency.

``GMX_DD_NPULSE``
    override the number of DD pulses used
    (default 0, meaning no override). Normally 1 or 2.

There are a number of extra environment variables like these
that are used in debugging - check the code!

Performance and Run Control
---------------------------

``GMX_DO_GALACTIC_DYNAMICS``
    planetary simulations are made possible (just for fun) by setting
    this environment variable, which allows setting :mdp:`epsilon-r` to -1 in the :ref:`mdp`
    file. Normally, :mdp:`epsilon-r` must be greater than zero to prevent a fatal error.
    See webpage_ for example input files for a planetary simulation.

``GMX_ALLOW_CPT_MISMATCH``
    when set, runs will not exit if the
    ensemble set in the :ref:`tpr` file does not match that of the
    :ref:`cpt` file.

``GMX_CUDA_NB_EWALD_TWINCUT``
    force the use of twin-range cutoff kernel even if :mdp:`rvdw` equals
    :mdp:`rcoulomb` after PP-PME load balancing. The switch to twin-range kernels is automated,
    so this variable should be used only for benchmarking.

``GMX_CUDA_NB_ANA_EWALD``
    force the use of analytical Ewald kernels. Should be used only for benchmarking.

``GMX_CUDA_NB_TAB_EWALD``
    force the use of tabulated Ewald kernels. Should be used only for benchmarking.

``GMX_CUDA_STREAMSYNC``
    force the use of cudaStreamSynchronize on ECC-enabled GPUs, which leads
    to performance loss due to a known CUDA driver bug present in API v5.0 NVIDIA drivers (pre-30x.xx).
    Cannot be set simultaneously with ``GMX_NO_CUDA_STREAMSYNC``.

``GMX_DISABLE_CUDALAUNCH``
    disable the use of the lower-latency cudaLaunchKernel API even when supported (CUDA >= v7.0).
    Should only be used for benchmarking purposes.

``GMX_DISABLE_CUDA_TIMING``
    Disables GPU timing of CUDA tasks; synonymous with ``GMX_DISABLE_GPU_TIMING``.

``GMX_CYCLE_ALL``
    times all code during runs. Incompatible with threads.

``GMX_CYCLE_BARRIER``
    calls MPI_Barrier before each cycle start/stop call.

``GMX_DD_ORDER_ZYX``
    build domain decomposition cells in the order
    (z, y, x) rather than the default (x, y, z).

``GMX_DD_USE_SENDRECV2``
    during constraint and vsite communication, use a pair
    of ``MPI_Sendrecv`` calls instead of two simultaneous non-blocking calls
    (default 0, meaning off). Might be faster on some MPI implementations.

``GMX_DLB_BASED_ON_FLOPS``
    do domain-decomposition dynamic load balancing based on flop count rather than
    measured time elapsed (default 0, meaning off).
    This makes the load balancing reproducible, which can be useful for debugging purposes.
    A value of 1 uses the flops; a value > 1 adds (value - 1)*5% of noise to the flops to increase the imbalance and the scaling.

``GMX_DLB_MAX_BOX_SCALING``
    maximum percentage box scaling permitted per domain-decomposition
    load-balancing step (default 10).

``GMX_DD_RECORD_LOAD``
    record DD load statistics for reporting at end of the run (default 1, meaning on).

``GMX_DETAILED_PERF_STATS``
    when set, print slightly more detailed performance information
    to the :ref:`log` file. The resulting output is the way the performance summary was reported in versions
    4.5.x and thus may be useful for anyone using scripts to parse :ref:`log` files or standard output.

``GMX_DISABLE_SIMD_KERNELS``
    disables architecture-specific SIMD-optimized (SSE2, SSE4.1, AVX, etc.)
    non-bonded kernels, thus forcing the use of plain C kernels.

``GMX_DISABLE_GPU_TIMING``
    timing of asynchronously executed GPU operations can have a
    non-negligible overhead with short step times. Disabling timing can improve performance in these cases.

``GMX_DISABLE_GPU_DETECTION``
    when set, disables GPU detection even if :ref:`gmx mdrun` was compiled
    with GPU support.

``GMX_GPU_APPLICATION_CLOCKS``
    setting this variable to a value of "0", "ON", or "DISABLE" (case insensitive)
    allows disabling the CUDA GPU application clock support.

``GMX_DISRE_ENSEMBLE_SIZE``
    the number of systems for distance restraint ensemble
    averaging. Takes an integer value.

``GMX_EMULATE_GPU``
    emulate GPU runs by using algorithmically equivalent CPU reference code instead of
    GPU-accelerated functions. As the CPU code is slow, it is intended to be used only for debugging purposes.

``GMX_ENX_NO_FATAL``
    disable exiting upon encountering a corrupted frame in an :ref:`edr`
    file, allowing the use of all frames up until the corruption.

``GMX_FORCE_UPDATE``
    update forces when invoking ``mdrun -rerun``.

``GMX_GPU_ID``
    set in the same way as ``mdrun -gpu_id``, ``GMX_GPU_ID``
    allows the user to specify different GPU IDs for different ranks, which can be useful for selecting different
    devices on different compute nodes in a cluster. Cannot be used in conjunction with ``mdrun -gpu_id``.

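
Whether a per-rank value can actually be delivered depends on your MPI
launcher; the two-rank launch below is a sketch using Open MPI's ``-x``
export flag (the flag and the ``gmx_mpi`` binary name are assumptions about
your installation, not part of |Gromacs| itself):

```shell
# Hypothetical: give each of two ranks a different GPU id.
# Open MPI's -x exports an environment variable to the launched ranks;
# other MPI implementations use different mechanisms (e.g. srun --export).
#   mpirun -np 1 -x GMX_GPU_ID=0 gmx_mpi mdrun : \
#          -np 1 -x GMX_GPU_ID=1 gmx_mpi mdrun
# For a single-process run, the plain form is simply:
export GMX_GPU_ID=0
printenv GMX_GPU_ID
```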
``GMX_GPUTASKS``
    set in the same way as ``mdrun -gputasks``, ``GMX_GPUTASKS`` allows the mapping
    of GPU tasks to GPU device IDs to be different on different ranks, if e.g. the MPI
    runtime permits this variable to be different for different ranks. Cannot be used
    in conjunction with ``mdrun -gputasks``. Has all the same requirements as ``mdrun -gputasks``.

``GMX_IGNORE_FSYNC_FAILURE_ENV``
    allow :ref:`gmx mdrun` to continue even if
    a file synchronization (fsync) operation fails.

``GMX_LJCOMB_TOL``
    when set to a floating-point value, overrides the default tolerance of
    1e-5 for force-field floating-point parameters.

``GMX_MAXCONSTRWARN``
    if set to -1, :ref:`gmx mdrun` will
    not exit if it produces too many LINCS warnings.

``GMX_NB_GENERIC``
    use the generic C kernel. Should be set if using
    the group-based cutoff scheme and also sets ``GMX_NO_SOLV_OPT`` to be true,
    thus disabling solvent optimizations as well.

``GMX_NBNXN_MIN_CI``
    neighbor list balancing parameter used when running on GPU. Sets the
    target minimum number of pair-lists in order to improve multi-processor load-balance for better
    performance with small simulation systems. Must be set to a non-negative integer;
    a value of 0 disables list splitting.
    The default value is optimized for supported GPUs (NVIDIA Fermi to Maxwell),
    therefore changing it is not necessary for normal usage, but it can be useful on future architectures.

``GMX_NBLISTCG``
    use neighbor list and kernels based on charge groups.

``GMX_NBNXN_CYCLE``
    when set, print detailed neighbor search cycle counting.

``GMX_NBNXN_EWALD_ANALYTICAL``
    force the use of analytical Ewald non-bonded kernels;
    mutually exclusive with ``GMX_NBNXN_EWALD_TABLE``.

``GMX_NBNXN_EWALD_TABLE``
    force the use of tabulated Ewald non-bonded kernels;
    mutually exclusive with ``GMX_NBNXN_EWALD_ANALYTICAL``.

``GMX_NBNXN_SIMD_2XNN``
    force the use of 2x(N+N) SIMD CPU non-bonded kernels;
    mutually exclusive with ``GMX_NBNXN_SIMD_4XN``.

``GMX_NBNXN_SIMD_4XN``
    force the use of 4xN SIMD CPU non-bonded kernels;
    mutually exclusive with ``GMX_NBNXN_SIMD_2XNN``.

``GMX_NO_ALLVSALL``
    disables optimized all-vs-all kernels.

``GMX_NO_CART_REORDER``
    used in initializing domain decomposition communicators. Rank reordering
    is default, but can be switched off with this environment variable.

``GMX_NO_LJ_COMB_RULE``
    force the use of LJ parameter lookup instead of using combination rules
    in the non-bonded kernels.

``GMX_NO_CUDA_STREAMSYNC``
    the opposite of ``GMX_CUDA_STREAMSYNC``. Disables the use of the
    standard cudaStreamSynchronize-based GPU waiting to improve performance when using CUDA driver API
    earlier than v5.0 with ECC-enabled GPUs.

``GMX_NO_INT``, ``GMX_NO_TERM``, ``GMX_NO_USR1``
    disable signal handlers for SIGINT,
    SIGTERM, and SIGUSR1, respectively.

``GMX_NO_NODECOMM``
    do not use separate inter- and intra-node communicators.

``GMX_NO_NONBONDED``
    skip non-bonded calculations; can be used to estimate the possible
    performance gain from adding a GPU accelerator to the current hardware setup -- assuming that this is
    fast enough to complete the non-bonded calculations while the CPU does bonded force and PME computation.
    Freezing the particles will be required to stop the system blowing up.

``GMX_NO_PULLVIR``
    when set, do not add virial contribution to COM pull forces.

``GMX_NOPREDICT``
    shell positions are not predicted.

``GMX_NO_SOLV_OPT``
    turns off solvent optimizations; automatic if ``GMX_NB_GENERIC``
    is set.

``GMX_NSCELL_NCG``
    the ideal number of charge groups per neighbor searching grid cell is hard-coded
    to a value of 10. Setting this environment variable to any other integer value overrides this hard-coded
    value.

``GMX_PME_NUM_THREADS``
    set the number of OpenMP or PME threads (overrides the number guessed by
    :ref:`gmx mdrun`).

``GMX_PME_P3M``
    use P3M-optimized influence function instead of smooth PME B-spline interpolation.

``GMX_PME_THREAD_DIVISION``
    PME thread division in the format "x y z" for all three dimensions. The
    sum of the threads in each dimension must equal the total number of PME threads (set in
    ``GMX_PME_NUM_THREADS``).

``GMX_PMEONEDD``
    if the number of domain decomposition cells is set to 1 for both x and y,
    decompose PME in one dimension.

``GMX_REQUIRE_SHELL_INIT``
    require that shell positions are initialized.

``GMX_REQUIRE_TABLES``
    require the use of tabulated Coulombic
    and van der Waals interactions.

``GMX_SCSIGMA_MIN``
    the minimum value for soft-core sigma. **Note** that this value is set
    using the :mdp:`sc-sigma` keyword in the :ref:`mdp` file, but this environment variable can be used
    to reproduce pre-4.5 behavior with respect to this parameter.

``GMX_TPIC_MASSES``
    should contain multiple masses used for test particle insertion into a cavity.
    The center of mass of the last atoms is used for insertion into the cavity.

``GMX_USE_GRAPH``
    use graph for bonded interactions.

``GMX_VERLET_BUFFER_RES``
    resolution of buffer size in Verlet cutoff scheme. The default value is
    0.001, but can be overridden with this environment variable.

``HWLOC_XMLFILE``
    Not strictly a |Gromacs| environment variable, but on large machines
    the hwloc detection can take a few seconds if you have lots of MPI processes.
    If you run the hwloc command ``lstopo out.xml`` and set this environment
    variable to point to the location of this file, the hwloc library will use
    the cached information instead, which can be faster.

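
A minimal sketch of that workflow (the ``lstopo`` step is commented out
because it requires the hwloc utilities to be installed; the file name
``out.xml`` is just an example):

```shell
# Dump the hardware topology once, then point the hwloc library at the
# cached XML so later runs skip the (possibly slow) detection step:
#   lstopo out.xml
export HWLOC_XMLFILE="$PWD/out.xml"
printenv HWLOC_XMLFILE
```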
``MPIRUN``
    the ``mpirun`` command used by :ref:`gmx tune_pme`.

``MDRUN``
    the :ref:`gmx mdrun` command used by :ref:`gmx tune_pme`.

``GMX_DISABLE_DYNAMICPRUNING``
    disables dynamic pair-list pruning. Note that :ref:`gmx mdrun` will
    still tune nstlist to the optimal value picked assuming dynamic pruning. Thus
    for good performance the ``-nstlist`` option should be used.

``GMX_NSTLIST_DYNAMICPRUNING``
    overrides the dynamic pair-list pruning interval chosen heuristically
    by mdrun. Values should be between the pruning frequency value
    (1 for CPU and 2 for GPU) and :mdp:`nstlist` ``- 1``.

``GMX_USE_TREEREDUCE``
    use tree reduction for nbnxn force reduction. Potentially faster for a large number of
    OpenMP threads (if memory locality is important).

.. _opencl-management:

OpenCL management
-----------------

Currently, several environment variables exist that help customize some aspects
of the OpenCL_ version of |Gromacs|. They are mostly related to the runtime
compilation of OpenCL kernels, but they are also used in device selection.

``GMX_OCL_NOGENCACHE``
    If set, disable caching for OpenCL kernel builds. Caching is
    normally useful so that future runs can re-use the compiled
    kernels from previous runs. Currently, caching is always
    disabled, until we solve concurrency issues.

``GMX_OCL_GENCACHE``
    Enable OpenCL binary caching. Only intended to be used for
    development and (expert) testing as neither concurrency
    nor cache invalidation is implemented safely!

``GMX_OCL_NOFASTGEN``
    If set, generate and compile all algorithm flavors; otherwise
    only the flavor required for the simulation is generated and
    compiled.

``GMX_OCL_DISABLE_FASTMATH``
    Prevents the use of the ``-cl-fast-relaxed-math`` compiler option.

``GMX_OCL_DUMP_LOG``
    If defined, the OpenCL build log is always written to the
    mdrun log file. Otherwise, the build log is written to the
    log file only when an error occurs.

``GMX_OCL_VERBOSE``
    If defined, it enables verbose mode for OpenCL kernel build.
    Currently available only for NVIDIA GPUs. See ``GMX_OCL_DUMP_LOG``
    for details about how to obtain the OpenCL build log.

``GMX_OCL_DUMP_INTERM_FILES``
    If defined, intermediate language code corresponding to the
    OpenCL build process is saved to file. Caching has to be
    turned off in order for this option to take effect (see
    ``GMX_OCL_NOGENCACHE``).

    - NVIDIA GPUs: PTX code is saved in the current directory
      with the name ``device_name.ptx``
    - AMD GPUs: ``.IL/.ISA`` files will be created for each OpenCL
      kernel built. For details about where these files are
      created, check the AMD documentation for the ``-save-temps`` compiler
      option.

``GMX_OCL_DEBUG``
    Use in conjunction with ``GMX_OCL_FORCE_CPU`` or with an AMD device.
    Adds the debug flag (``-g``) to the compiler options.

``GMX_OCL_NOOPT``
    Disable optimisations. Adds the option ``cl-opt-disable`` to the
    compiler options.

``GMX_OCL_FORCE_CPU``
    Force the selection of a CPU device instead of a GPU. This
    exists only for debugging purposes. Do not expect |Gromacs| to
    function properly with this option on; it exists solely to make it
    simple to step into a kernel and see what is happening.

``GMX_OCL_DISABLE_I_PREFETCH``
    Disables i-atom data (type or LJ parameter) prefetch, allowing
    testing on platforms where this behavior is the default.

``GMX_OCL_ENABLE_I_PREFETCH``
    Enables i-atom data (type or LJ parameter) prefetch, allowing
    testing on platforms where this behavior is not default.

``GMX_OCL_NB_ANA_EWALD``
    Forces the use of analytical Ewald kernels. Equivalent of
    CUDA environment variable ``GMX_CUDA_NB_ANA_EWALD``.

``GMX_OCL_NB_TAB_EWALD``
    Forces the use of tabulated Ewald kernels. Equivalent
    of CUDA environment variable ``GMX_CUDA_NB_TAB_EWALD``.

``GMX_OCL_NB_EWALD_TWINCUT``
    Forces the use of twin-range cutoff kernels. Equivalent of
    CUDA environment variable ``GMX_CUDA_NB_EWALD_TWINCUT``.

``GMX_OCL_FILE_PATH``
    Use this parameter to force |Gromacs| to load the OpenCL
    kernels from a custom location. Use it only if you want to
    override the |Gromacs| default behavior, or if you want to test
    your own kernels.

``GMX_OCL_DISABLE_COMPATIBILITY_CHECK``
    Disables the hardware compatibility check. Useful for developers,
    as it allows testing the OpenCL kernels on unsupported platforms
    (like Intel iGPUs) without source code modification.

Analysis and Core Functions
---------------------------

``GMX_QM_ACCURACY``
    accuracy in the Gaussian L510 (MC-SCF) component program.

``GMX_QM_ORCA_BASENAME``
    prefix of :ref:`tpr` files, used in Orca calculations
    for input and output file names.

``GMX_QM_CPMCSCF``
    when set to a nonzero value, Gaussian QM calculations will
    iteratively solve the CP-MCSCF equations.

``GMX_QM_MODIFIED_LINKS_DIR``
    location of modified links in Gaussian.

``DSSP``
    used by :ref:`gmx do_dssp` to point to the ``dssp``
    executable (not just its path).

``GAUSS_DIR``
    directory where Gaussian is installed.

``GAUSS_EXE``
    name of the Gaussian executable.

``GMX_DIPOLE_SPACING``
    spacing used by :ref:`gmx dipoles`.

``GMX_MAXRESRENUM``
    sets the maximum number of residues to be renumbered by
    :ref:`gmx grompp`. A value of -1 indicates all residues should be renumbered.

``GMX_FFRTP_TER_RENAME``
    Some force fields (like AMBER) use specific names for N- and C-terminal
    residues (NXXX and CXXX) as :ref:`rtp` entries that are normally renamed. Setting
    this environment variable disables this renaming.

``GMX_PATH_GZIP``
    ``gunzip`` executable, used by :ref:`gmx wham`.

``GMX_FONT``
    name of X11 font used by :ref:`gmx view`.

``GMXTIMEUNIT``
    the time unit used in output files, can be
    anything in fs, ps, ns, us, ms, s, m or h.
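
For instance, to report times in nanoseconds instead of the default
picoseconds (a minimal sketch; the setting only takes effect for tools
launched from this shell):

```shell
# Report times in nanoseconds in output files:
export GMXTIMEUNIT=ns
printenv GMXTIMEUNIT
```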
``GMX_QM_GAUSSIAN_MEMORY``
    memory used for Gaussian QM calculation.

``MULTIPROT``
    name of the ``multiprot`` executable, used by the
    contributed program ``do_multiprot``.

``NCPUS``
    number of CPUs to be used for Gaussian QM calculation.

``GMX_ORCA_PATH``
    directory where Orca is installed.

``GMX_QM_SA_STEP``
    simulated annealing step size for Gaussian QM calculation.

``GMX_QM_GROUND_STATE``
    defines state for Gaussian surface hopping calculation.

``GMX_TOTAL``
    name of the ``total`` executable used by the contributed
    ``do_shift`` program.

``GMX_ENER_VERBOSE``
    make :ref:`gmx energy` and :ref:`gmx eneconv`
    loud and noisy.

``VMD_PLUGIN_PATH``
    where to find VMD plug-ins. Needed to be
    able to read file formats recognized only by a VMD plug-in.

``VMDDIR``
    base path of VMD installation.

``GMX_USE_XMGR``
    sets viewer to ``xmgr`` (deprecated) instead of ``xmgrace``.