Getting good performance from mdrun
===================================
-The GROMACS build system and the :ref:`gmx mdrun` tool has a lot of built-in
+The |Gromacs| build system and the :ref:`gmx mdrun` tool has a lot of built-in
and configurable intelligence to detect your hardware and make pretty
effective use of that hardware. For a lot of casual and serious use of
:ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
spreading computation over multiple threads, such as OpenMP,
pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
hardware can map more than one software thread to a core; on
- Intel x86 processors this is called "hyper-threading."
- Normally, :ref:`gmx mdrun` will not benefit from such mapping.
-
- affinity
- On some kinds of hardware, software threads can migrate
- between cores to help automatically balance
- workload. Normally, the performance of :ref:`gmx mdrun` will degrade
- dramatically if this is permitted, so :ref:`gmx mdrun` will by default
- set the affinity of its threads to their cores, unless the
- user or software environment has already done so. Setting
- thread affinity is sometimes called "pinning" threads to
- cores.
+ Intel x86 processors this is called "hyper-threading", while
+ the more general concept is often called SMT for
+ "simultaneous multi-threading". IBM Power8 can for instance use
+ up to 8 hardware threads per core.
+ This feature can usually be enabled or disabled either in
+ the hardware BIOS or through a setting in the Linux operating
+ system. |Gromacs| can typically make use of this, for a moderate
+ free performance boost. In most cases it will be
+ enabled by default, e.g. on new x86 processors, but in some cases
+ the system administrators might have disabled it. If that is the
+ case, ask if they can re-enable it for you. If you are not sure
+ whether it is enabled, check the output of the CPU information in
+ the log file and compare with CPU specifications you find online.
+
+ thread affinity (pinning)
+ By default, most operating systems allow software threads to migrate
+ between cores (or hardware threads) to help automatically balance
+ workload. However, the performance of :ref:`gmx mdrun` can deteriorate
+ if this is permitted and will degrade dramatically especially when
+ relying on multi-threading within a rank. To avoid this,
+ :ref:`gmx mdrun` will by default
+ set the affinity of its threads to individual cores/hardware threads,
+ unless the user or software environment has already done so,
+ or unless the entire node is not used for the run (i.e. there is
+ potential for node sharing).
+ Setting thread affinity is sometimes called thread "pinning".
MPI
The dominant multi-node parallelization-scheme, which provides
MPI to achieve hybrid MPI/OpenMP parallelism.
CUDA
- A programming-language extension developed by Nvidia
- for use in writing code for their GPUs.
+ A proprietary parallel computing framework and API developed by NVIDIA
+ that allows targeting their accelerator hardware.
+ |Gromacs| uses CUDA for GPU acceleration support with NVIDIA hardware.
+
+ OpenCL
+ An open standard-based parallel computing framework that consists
+ of a C99-based compiler and a programming API for targeting heterogeneous
+ and accelerator hardware. |Gromacs| uses OpenCL for GPU acceleration
+ on AMD devices (both GPUs and APUs); NVIDIA hardware is also supported.
SIMD
Modern CPU cores have instructions that can execute many
floating-point operations in a single cycle.
-GROMACS background information
-------------------------------
+|Gromacs| background information
+--------------------------------
The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
when choosing how to make good use of the hardware. For details,
see the Reference Manual. The most important of these are
Domain Decomposition
The domain decomposition (DD) algorithm decomposes the
(short-ranged) component of the non-bonded interactions into
- domains that share spatial locality, which permits efficient
- code to be written. Each domain handles all of the
+ domains that share spatial locality, which permits the use of
+ efficient algorithms. Each domain handles all of the
particle-particle (PP) interactions for its members, and is
- mapped to a single rank. Within a PP rank, OpenMP threads can
- share the workload, or the work can be off-loaded to a
+ mapped to a single MPI rank. Within a PP rank, OpenMP threads
+ can share the workload, and some work can be off-loaded to a
GPU. The PP rank also handles any bonded interactions for the
members of its domain. A GPU may perform work for more than
one PP rank, but it is normally most efficient to use a single
are efficient to use within a single :term:`node`. The default configuration
using a suitable compiler will deploy a multi-level hybrid parallelism
that uses CUDA, OpenMP and the threading platform native to the
-hardware. For programming convenience, in GROMACS, those native
+hardware. For programming convenience, in |Gromacs|, those native
threads are used to implement on a single node the same MPI scheme as
would be used between nodes, but much more efficient; this is called
thread-MPI. From a user's perspective, real MPI and thread-MPI look
-almost the same, and GROMACS refers to MPI ranks to mean either kind,
+almost the same, and |Gromacs| refers to MPI ranks to mean either kind,
except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
a single node, but runs more slowly than the thread-MPI version.
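
The relationship between ranks and threads on a node can be illustrated with a
trivial sketch (not GROMACS code; it only restates the rule that the totals
multiply):

```python
def total_threads(ntmpi, ntomp):
    """Hardware threads used on one node: thread-MPI ranks times
    OpenMP threads per rank (e.g. "-ntmpi 4 -ntomp 2" uses 8)."""
    return ntmpi * ntomp

print(total_threads(4, 2))
```
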
log file, stdout and stderr are used to print diagnostics that
inform the user about the choices made and possible consequences.
-A number of command-line parameters are available to vary the default
+A number of command-line parameters are available to modify the default
behavior.
``-nt``
The total number of threads to use. The default, 0, will start as
many threads as available cores. Whether the threads are
- thread-MPI ranks, or OpenMP threads within such ranks depends on
+ thread-MPI ranks, and/or OpenMP threads within such ranks depends on
other settings.
``-npme``
The total number of ranks to dedicate to the long-ranged
component of PME, if used. The default, -1, will dedicate ranks
only if the total number of threads is at least 12, and will use
- around one-third of the ranks for the long-ranged component.
+ around a quarter of the ranks for the long-ranged component.
``-ntomp_pme``
When using PME with separate PME ranks,
are no separate PME ranks.
``-nb``
- Can be set to "auto", "cpu", "gpu", "cpu_gpu."
+ Used to set where to execute the non-bonded interactions.
+ Can be set to "auto", "cpu", "gpu", "gpu_cpu."
Defaults to "auto," which uses a compatible GPU if available.
Setting "cpu" requires that no GPU is used. Setting "gpu" requires
that a compatible GPU be available and will be used. Setting
- "cpu_gpu" permits the CPU to execute a GPU-like code path, which
- will run slowly on the CPU and should only be used for debugging.
+ "gpu_cpu" lets the GPU compute the local and the CPU the non-local
+ non-bonded interactions. This is only faster under a narrow range of
+ conditions.
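
As a rough illustration of the default behavior described for ``-npme``, the
heuristic can be sketched as follows. This is a simplification; the actual
GROMACS estimator is considerably more elaborate and also accounts for the
relative PME cost:

```python
def guess_npme(total_ranks):
    """Toy sketch of "-npme -1": dedicate separate PME ranks only from
    12 ranks upward, about a quarter of the total."""
    if total_ranks < 12:
        return 0
    return total_ranks // 4
```
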
Examples for mdrun on one node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
- gmx mpirun_mpi -np 2
+ mpirun -np 2 gmx_mpi mdrun
When using an :ref:`gmx mdrun` compiled with external MPI,
this will start two ranks and as many OpenMP threads
Running mdrun on more than one node
-----------------------------------
-This requires configuring GROMACS to build with an external MPI
-library. By default, this mdrun executable will be named
+This requires configuring |Gromacs| to build with an external MPI
+library. By default, this mdrun executable is run with
:ref:`mdrun_mpi`. All of the considerations for running single-node
mdrun still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the
cases.
``-tunepme``
- Defaults to "on." If "on," will optimize various aspects of the
- PME and DD algorithms, shifting load between ranks and/or GPUs to
- maximize throughput
+ Defaults to "on." If "on," a Verlet-scheme simulation will
+ optimize various aspects of the PME and DD algorithms, shifting
+ load between ranks and/or GPUs to maximize throughput. Some
+ mdrun features are not compatible with this, and these ignore
+ this option.
``-dlb``
Can be set to "auto," "no," or "yes."
the minimum of :mdp:`nstcalcenergy` and :mdp:`nstlist`.
``mdrun -gcom`` sets the number of steps that must elapse between
such communication phases, which can improve performance when
- running on a lot of nodes. Note that this means that _e.g._
+ running on a lot of ranks. Note that this means that e.g.
temperature coupling algorithms will
- effectively remain at constant energy until the next global
- communication phase.
+ effectively remain at constant energy until the next
+ communication phase. :ref:`gmx mdrun` will always honor the
+ setting of ``mdrun -gcom``, by changing :mdp:`nstcalcenergy`,
+ :mdp:`nstenergy`, :mdp:`nstlog`, :mdp:`nsttcouple` and/or
+ :mdp:`nstpcouple` if necessary.
Note that ``-tunepme`` has more effect when there is more than one
:term:`node`, because the cost of communication for the PP and PME
::
- mpirun -np 16 gmx mdrun_mpi
+ mpirun -np 16 gmx_mpi mdrun
Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
the hardware by the MPI library, e.g. as specified
::
- mpirun -np 16 gmx mdrun_mpi -npme 5
+ mpirun -np 16 gmx_mpi mdrun -npme 5
Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
require that 5 of them are dedicated to the PME
::
- mpirun -np 11 gmx mdrun_mpi -ntomp 2 -npme 6 -ntomp_pme 1
+ mpirun -np 11 gmx_mpi mdrun -ntomp 2 -npme 6 -ntomp_pme 1
Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
require that six of them are dedicated to the PME
mpirun -np 8 gmx_mpi mdrun -ntomp 3 -gpu_id 0000
-Starts :ref:`mdrun_mpi` on a machine with two nodes, using
+Using the same or similar hardware as above,
+starts :ref:`mdrun_mpi` on a machine with two nodes, using
eight total ranks, each rank with three OpenMP threads,
and all four ranks on a node sharing GPU with ID 0.
This may or may not be faster than the previous setup
::
- mpirun -np 20 gmx mdrun_mpi -ntomp 4 -gpu_id 0
+ mpirun -np 20 gmx_mpi mdrun -ntomp 4 -gpu_id 0
Starts :ref:`mdrun_mpi` with 20 ranks, and assigns the CPU cores evenly
across ranks each to one OpenMP thread. This setup is likely to be
::
- mpirun -np 20 gmx mdrun_mpi -gpu_id 00
+ mpirun -np 20 gmx_mpi mdrun -gpu_id 00
Starts :ref:`mdrun_mpi` with 20 ranks, and assigns the CPU cores evenly
across ranks each to one OpenMP thread. This setup is likely to be
::
- mpirun -np 20 gmx mdrun_mpi -gpu_id 01
+ mpirun -np 20 gmx_mpi mdrun -gpu_id 01
Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
::
- mpirun -np 40 gmx mdrun_mpi -gpu_id 0011
+ mpirun -np 40 gmx_mpi mdrun -gpu_id 0011
Starts :ref:`mdrun_mpi` with 40 ranks. This setup is likely
to be suitable when there are ten nodes, each with two
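
The ``-gpu_id`` strings in the examples above can be decoded mechanically; this
sketch just restates the mapping rule (one digit per PP rank on a node):

```python
def gpu_ids_per_rank(gpu_id_string):
    """Decode a "-gpu_id" string: the i-th PP rank on a node uses the GPU
    whose id is the i-th digit (e.g. "0011" -> ranks 0 and 1 on GPU 0,
    ranks 2 and 3 on GPU 1)."""
    return [int(c) for c in gpu_id_string]
```
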
Finding out how to run mdrun better
-----------------------------------
-TODO In future patch: red flags in log files, how to interpret wallcycle output
+
+The Wallcycle module is used for runtime performance measurement of :ref:`gmx mdrun`.
+At the end of the log file of each run, the "Real cycle and time accounting" section
+provides a table with runtime statistics for different parts of the :ref:`gmx mdrun`
+code, one part per row.
+The table contains columns indicating the number of ranks and threads that
+executed the respective part of the run, and wall-time and cycle
+count aggregates (across all threads and ranks) averaged over the entire run.
+The last column also shows what percentage of the total runtime each row represents.
+Note that the :ref:`gmx mdrun` timer resetting functionalities (``-resethway`` and
+``-resetstep``) reset the performance counters and are therefore useful for avoiding
+startup overhead and performance instability (e.g. due to load balancing) at the
+beginning of the run.
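
When post-processing many runs it can be handy to pull numbers out of this
table with a small script. The row below is made up and the column layout is an
assumption for illustration only; check it against your own log file:

```python
# Hypothetical accounting row: name, ranks, threads, call count,
# wall time (s), giga-cycles, percentage of total runtime.
row = " Force                 1   16       1001     123.456   2716.5   34.1"

fields = row.split()
name = " ".join(fields[:-6])        # counter name may contain spaces
ranks = int(fields[-6])
threads = int(fields[-5])
calls = int(fields[-4])
seconds = float(fields[-3])
percent = float(fields[-1])
print(name, ranks, threads, percent)
```
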
+
+The performance counters are:
+
+* Particle-particle during Particle mesh Ewald
+* Domain decomposition
+* Domain decomposition communication load
+* Domain decomposition communication bounds
+* Virtual site constraints
+* Send X to Particle mesh Ewald
+* Neighbor search
+* Launch GPU operations
+* Communication of coordinates
+* Born radii
+* Force
+* Waiting + Communication of force
+* Particle mesh Ewald
+* PME redist. X/F
+* PME spread/gather
+* PME 3D-FFT
+* PME 3D-FFT Communication
+* PME solve Lennard-Jones
+* PME solve Elec
+* PME wait for particle-particle
+* Wait + Receive PME force
+* Wait GPU nonlocal
+* Wait GPU local
+* Non-bonded position/force buffer operations
+* Virtual site spread
+* COM pull force
+* Write trajectory
+* Update
+* Constraints
+* Communication of energies
+* Enforced rotation
+* Add rotational forces
+* Position swapping
+* Interactive MD
+
+As performance data is collected for every run, it is essential for assessing
+and tuning the performance of :ref:`gmx mdrun`, and therefore benefits
+both code developers and users of the program.
+The counters are averages of the time/cycles the different parts of the
+simulation take, hence they cannot directly reveal fluctuations during a single
+run (although comparisons across multiple runs are still very useful).
+
+Counters will appear in the MD log file only if the related parts of the code were
+executed during the :ref:`gmx mdrun` run. There is also a special counter called
+"Rest" which indicates the amount of time not accounted for by any of the counters
+above. Therefore, a significant amount of "Rest" time (more than a few percent) is
+often an indication of parallelization inefficiency (e.g. serial code), and it is
+recommended to report it to the developers.
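
The "Rest" share can also be recomputed from the counters, e.g. when
aggregating over many log files; a minimal sketch:

```python
def rest_share(total_time, counter_times):
    """Fraction of runtime not covered by any named counter ("Rest")."""
    return (total_time - sum(counter_times)) / total_time

# e.g. 100 s total, 94 s accounted for -> 6% "Rest", worth a closer look
share = rest_share(100.0, [55.0, 30.0, 9.0])
```
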
+
+An additional set of subcounters can offer more fine-grained inspection of performance. They are:
+
+* Domain decomposition redistribution
+* DD neighbor search grid + sort
+* DD setup communication
+* DD make topology
+* DD make constraints
+* DD topology other
+* Neighbor search grid local
+* NS grid non-local
+* NS search local
+* NS search non-local
+* Bonded force
+* Bonded-FEP force
+* Restraints force
+* Listed buffer operations
+* Nonbonded force
+* Ewald force correction
+* Non-bonded position buffer operations
+* Non-bonded force buffer operations
+
+Subcounters are geared toward developers and have to be enabled during compilation. See
+:doc:`/dev-manual/build-system` for more information.
+
+TODO In future patch:
+- red flags in log files, how to interpret wallcycle output
+- hints to devs how to extend wallcycles
TODO In future patch: import wiki page stuff on performance checklist; maybe here,
maybe elsewhere
Running mdrun with GPUs
-----------------------
+
+NVIDIA GPUs from the professional line (Tesla or Quadro) starting with
+the Kepler generation (compute capability 3.5 and later) support changing the
+processor and memory clock frequency with the help of the applications clocks feature.
+With many workloads, using higher clock rates than the default provides significant
+performance improvements.
+For more information see the `NVIDIA blog article`_ on this topic.
+For |Gromacs| the highest application clock rates are optimal on all hardware
+available to date (up to and including Maxwell, compute capability 5.2).
+
+Application clocks can be set using the NVIDIA system management tool
+``nvidia-smi``. If the system permissions allow, :ref:`gmx mdrun` has
+built-in support to set application clocks if built with NVML support. # TODO add ref to relevant section
+Note that application clocks are a global setting, hence affect the
+performance of all applications that use the respective GPU(s).
+For this reason, :ref:`gmx mdrun` sets application clocks at initialization
+to the values optimal for |Gromacs| and it restores them before exiting
+to the values found at startup, unless it detects that they were altered
+during its runtime.
+
+.. _NVIDIA blog article: https://devblogs.nvidia.com/parallelforall/increase-performance-gpu-boost-k80-autoboost/
+
+Reducing overheads in GPU accelerated runs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In order for CPU cores and GPU(s) to execute concurrently, tasks are
+launched and executed asynchronously on the GPU(s) while the CPU cores
+execute non-offloaded force computation (like long-range PME electrostatics).
+Asynchronous task launches are handled by the GPU device driver and
+require CPU involvement. Therefore, the work of scheduling
+GPU tasks will incur an overhead that can in some cases significantly
+delay or interfere with the CPU execution.
+
+Delays in CPU execution are caused by the latency of launching GPU tasks,
+an overhead that can become significant as simulation ns/day increases
+(i.e. with shorter wall-time per step).
+The overhead is measured by :ref:`gmx mdrun` and reported in the performance
+summary section of the log file ("Launch GPU ops" row).
+A few percent of runtime spent in this category is normal,
+but in fast-iterating and multi-GPU parallel runs 10% or larger overheads can be observed.
+In general, a user can do little to avoid such overheads, but there
+are a few cases where tweaks can give performance benefits.
+In single-rank runs timing of GPU tasks is by default enabled and,
+while in most cases its impact is small, in fast runs performance can be affected.
+The performance impact will be most significant on NVIDIA GPUs with CUDA,
+less on AMD with OpenCL.
+In these cases, when more than a few percent of "Launch GPU ops" time is observed,
+it is recommended to turn off timing by setting the ``GMX_DISABLE_GPU_TIMING``
+environment variable.
+In parallel runs with many ranks sharing a GPU,
+launch overheads can also be reduced by starting fewer thread-MPI
+or MPI ranks per GPU; e.g. most often one rank per thread or core is not optimal.
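
The rule of thumb above can be written down as a simple check; the 3%
threshold is an illustrative assumption, not a GROMACS constant:

```python
def suggest_disable_gpu_timing(launch_gpu_ops_percent, threshold=3.0):
    """True if the "Launch GPU ops" share of runtime is large enough
    that setting GMX_DISABLE_GPU_TIMING is worth trying."""
    return launch_gpu_ops_percent > threshold
```
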
+
+The second type of overhead, interference of the GPU driver with CPU computation,
+is caused by the scheduling and coordination of GPU tasks.
+A separate GPU driver thread can require CPU resources
+which may clash with the concurrently running non-offloaded tasks,
+potentially degrading the performance of PME or bonded force computation.
+This effect is most pronounced when using AMD GPUs with OpenCL with
+all stable driver releases to date (up to and including fglrx 12.15).
+To minimize the overhead it is recommended to
+leave a CPU hardware thread unused when launching :ref:`gmx mdrun`,
+especially on CPUs with high core count and/or HyperThreading enabled.
+E.g. on a machine with a 4-core CPU and eight threads (via HyperThreading) and an AMD GPU,
+try ``gmx mdrun -ntomp 7 -pin on``.
+This will leave free CPU resources for the GPU task scheduling,
+reducing interference with CPU computation.
+Note that assigning fewer resources to :ref:`gmx mdrun` CPU computation
+involves a tradeoff which may outweigh the benefits of reduced GPU driver overhead,
+in particular without HyperThreading and with few CPU cores.
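
A sketch of picking ``-ntomp`` so that one hardware thread stays free for the
driver, as in the example above (assumes a single-rank run using the whole
node):

```python
import os

# Leave one hardware thread unused for the GPU driver thread.
hw_threads = os.cpu_count() or 1
ntomp = max(1, hw_threads - 1)
print(f"gmx mdrun -ntomp {ntomp} -pin on")
```
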
+
TODO In future patch: any tips not covered above
Running the OpenCL version of mdrun
-----------------------------------
The current version works with GCN-based AMD GPUs, and NVIDIA CUDA
-GPUs. Make sure that you have the latest drivers installed. The
-minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
+GPUs. Make sure that you have the latest drivers installed. For AMD GPUs,
+Mesa version 17.0 or newer with LLVM 4.0 or newer is supported in addition
+to the proprietary driver. For NVIDIA GPUs, using the proprietary driver is
+required as the open source nouveau driver (available in Mesa) does not
+provide the OpenCL support.
+The minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
also the :ref:`known limitations <opencl-known-limitations>`.
+Devices from the AMD GCN architectures (all series) and NVIDIA Fermi
+and later (compute capability 2.0) are known to work, but before
+doing production runs always make sure that the |Gromacs| tests
+pass successfully on the hardware.
+
+The OpenCL GPU kernels are compiled at run time. Hence,
+building the OpenCL program can take a few seconds, introducing a slight
+delay in the :ref:`gmx mdrun` startup. This is not normally a
+problem for long production MD, but you might prefer to do some kinds
+of work, e.g. runs with very few steps, on just the CPU (see ``-nb`` above).
+
The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
used to select CUDA devices, or to define a mapping of GPUs to PP
ranks, is used for OpenCL devices.
-The following devices are known to work correctly:
- - AMD: FirePro W5100, HD 7950, FirePro W9100, Radeon R7 240,
- Radeon R7 M260, Radeon R9 290
- - NVIDIA: GeForce GTX 660M, GeForce GTX 660Ti, GeForce GTX 750Ti,
- GeForce GTX 780, GTX Titan
-
-Building the OpenCL program can take a few seconds when :ref:`gmx
-mdrun` starts up, because the kernels that run on the
-GPU can only be compiled at run time. This is not normally a
-problem for long production MD, but you might prefer to do some kinds
-of work on just the CPU (e.g. see ``-nb`` above).
-
Some other :ref:`OpenCL management <opencl-management>` environment
variables may be of interest to developers.
Limitations in the current OpenCL support of interest to |Gromacs| users:
-- Using more than one GPU on a node is supported only with thread MPI
-- Sharing a GPU between multiple PP ranks is not supported
- No Intel devices (CPUs, GPUs or Xeon Phi) are supported
- Due to blocking behavior of some asynchronous task enqueuing functions
in the NVIDIA OpenCL runtime, with the affected driver versions there is
almost no performance gain when using NVIDIA GPUs.
The issue affects NVIDIA driver versions up to the 349 series, but it
is known to be fixed in 352 and later driver releases.
+- On NVIDIA GPUs the OpenCL kernels achieve much lower performance
+ than the equivalent CUDA kernels due to limitations of the NVIDIA OpenCL
+ compiler.
+- The AMD APPSDK version 3.0 ships with OpenCL compiler/runtime components,
+ libamdocl12cl64.so and libamdocl64.so (only in earlier releases),
+ that conflict with newer fglrx GPU drivers which provide the same libraries.
+ This conflict manifests in kernel launch failures as, due to the library path
+ setup, the OpenCL runtime loads the APPSDK version of the aforementioned
+ libraries instead of the ones provided by the driver installer.
+ The recommended workaround is to remove or rename the APPSDK versions of the
+ offending libraries.
Limitations of interest to |Gromacs| developers:
multiple of 32
- Some Ewald tabulated kernels are known to produce incorrect results, so
(correct) analytical kernels are used instead.
+
+Performance checklist
+---------------------
+
+There are many different aspects that affect the performance of simulations in
+|Gromacs|. Most simulations require a lot of computational resources, therefore
+it can be worthwhile to optimize the use of those resources. Several issues
+mentioned in the list below could lead to a performance difference of a factor
+of 2, so it can be useful to go through the checklist.
+
+|Gromacs| configuration
+^^^^^^^^^^^^^^^^^^^^^^^
+
+* Don't use double precision unless you're absolutely sure you need it.
+* Compile the FFTW library (yourself) with the correct flags on x86 (in most
+ cases, the correct flags are automatically configured).
+* On x86, use gcc or icc as the compiler (not pgi or the Cray compiler).
+* On POWER, use gcc instead of IBM's xlc.
+* Use a new compiler version, especially for gcc (e.g. from version 5 to 6
+  the performance of the compiled code improved a lot).
+* MPI library: OpenMPI usually has good performance and causes little trouble.
+* Make sure your compiler supports OpenMP (some versions of Clang don't).
+* If you have GPUs that support either CUDA or OpenCL, use them.
+
+ * Configure with ``-DGMX_GPU=ON`` (add ``-DGMX_USE_OPENCL=ON`` for OpenCL).
+  * For CUDA, use the newest CUDA available for your GPU to take advantage of the
+ latest performance enhancements.
+ * Use a recent GPU driver.
+  * If compiling on a cluster head node, make sure that ``GMX_SIMD``
+    is appropriate for the compute nodes.
+
+Run setup
+^^^^^^^^^
+
+* For an approximately spherical solute, use a rhombic dodecahedron unit cell.
+* When using a time-step of 2 fs, use :mdp:`constraints` = ``h-bonds``
+  (and not ``all-bonds``), since this is faster, especially with GPUs,
+  and most force fields have been parametrized with only bonds involving
+  hydrogens constrained.
+* You can increase the time-step to 4 or 5 fs when using virtual interaction
+ sites (``gmx pdb2gmx -vsite h``).
+* For massively parallel runs with PME, you might need to try different numbers
+ of PME ranks (``gmx mdrun -npme ???``) to achieve best performance;
+ ``gmx tune_pme`` can help automate this search.
+* For massively parallel runs (also ``gmx mdrun -multidir``), or with a slow
+ network, global communication can become a bottleneck and you can reduce it
+ with ``gmx mdrun -gcom`` (note that this does affect the frequency of
+ temperature and pressure coupling).
+
+Checking and improving performance
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* Look at the end of the ``md.log`` file to see the performance and the cycle
+ counters and wall-clock time for different parts of the MD calculation. The
+ PP/PME load ratio is also printed, with a warning when a lot of performance is
+ lost due to imbalance.
+* Adjust the number of PME ranks and/or the cut-off and PME grid-spacing when
+ there is a large PP/PME imbalance. Note that even with a small reported
+ imbalance, the automated PME-tuning might have reduced the initial imbalance.
+ You could still gain performance by changing the mdp parameters or increasing
+ the number of PME ranks.
+* If the neighbor searching takes a lot of time, increase :mdp:`nstlist` (with the
+  Verlet cut-off scheme, this automatically adjusts the size of the neighbor
+  list to do more non-bonded computation to keep energy drift constant).
+
+ * If ``Comm. energies`` takes a lot of time (a note will be printed in the log
+ file), increase nstcalcenergy or use ``mdrun -gcom``.
+ * If all communication takes a lot of time, you might be running on too many
+ cores, or you could try running combined MPI/OpenMP parallelization with 2
+ or 4 OpenMP threads per MPI process.
/*
* This file is part of the GROMACS molecular simulation package.
*
- * Copyright (c) 2012,2013,2014,2015,2016, by the GROMACS development team, led by
+ * Copyright (c) 2012,2013,2014,2015,2016,2017, by the GROMACS development team, led by
* Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
* and including many others, as listed in the AUTHORS file in the
* top-level source directory and at http://www.gromacs.org.
#include "gromacs/domdec/domdec.h"
#include "gromacs/domdec/domdec_network.h"
-#include "gromacs/legacyheaders/calcgrid.h"
-#include "gromacs/legacyheaders/force.h"
-#include "gromacs/legacyheaders/md_logging.h"
-#include "gromacs/legacyheaders/network.h"
-#include "gromacs/legacyheaders/sim_util.h"
-#include "gromacs/legacyheaders/types/commrec.h"
+#include "gromacs/domdec/domdec_struct.h"
+#include "gromacs/fft/calcgrid.h"
+#include "gromacs/gmxlib/md_logging.h"
+#include "gromacs/gmxlib/network.h"
+#include "gromacs/math/functions.h"
#include "gromacs/math/vec.h"
+#include "gromacs/mdlib/forcerec.h"
#include "gromacs/mdlib/nbnxn_gpu_data_mgmt.h"
+#include "gromacs/mdlib/sim_util.h"
+#include "gromacs/mdtypes/commrec.h"
+#include "gromacs/mdtypes/inputrec.h"
+#include "gromacs/mdtypes/md_enums.h"
#include "gromacs/pbcutil/pbc.h"
#include "gromacs/timing/wallcycle.h"
#include "gromacs/utility/cstringutil.h"
+#include "gromacs/utility/fatalerror.h"
#include "gromacs/utility/gmxassert.h"
#include "gromacs/utility/smalloc.h"
struct pme_setup_t {
real rcut_coulomb; /**< Coulomb cut-off */
real rlist; /**< pair-list cut-off */
- real rlistlong; /**< LR pair-list cut-off */
- int nstcalclr; /**< frequency of evaluating long-range forces for group scheme */
real spacing; /**< (largest) PME grid spacing */
ivec grid; /**< the PME grid dimensions */
real grid_efficiency; /**< inefficiency factor for non-uniform grids <= 1 */
real cut_spacing; /**< the minimum cutoff / PME grid spacing ratio */
real rcut_vdw; /**< Vdw cutoff (does not change) */
real rcut_coulomb_start; /**< Initial electrostatics cutoff */
- int nstcalclr_start; /**< Initial electrostatics cutoff */
real rbuf_coulomb; /**< the pairlist buffer size */
real rbuf_vdw; /**< the pairlist buffer size */
matrix box_start; /**< the initial simulation box */
gmx_bool bUseGPU,
gmx_bool *bPrinting)
{
+ GMX_RELEASE_ASSERT(ir->cutoff_scheme != ecutsGROUP, "PME tuning is not supported with cutoff-scheme=group (because it contains bugs)");
+
pme_load_balancing_t *pme_lb;
real spm, sp;
int d;
+ // Note that we don't (yet) support PME load balancing with LJ-PME only.
+ GMX_RELEASE_ASSERT(EEL_PME(ir->coulombtype), "pme_loadbal_init called without PME electrostatics");
+ // To avoid complexity, we require a single cut-off with PME for q+LJ.
+ // This is checked by grompp, but it doesn't hurt to check again.
+ GMX_RELEASE_ASSERT(!(EEL_PME(ir->coulombtype) && EVDW_PME(ir->vdwtype) && ir->rcoulomb != ir->rvdw), "With Coulomb and LJ PME, rcoulomb should be equal to rvdw");
+
snew(pme_lb, 1);
pme_lb->bSepPMERanks = !(cr->duty & DUTY_PME);
pme_lb->cutoff_scheme = ir->cutoff_scheme;
- if (pme_lb->cutoff_scheme == ecutsVERLET)
- {
- pme_lb->rbuf_coulomb = ic->rlist - ic->rcoulomb;
- pme_lb->rbuf_vdw = pme_lb->rbuf_coulomb;
- }
- else
- {
- if (ic->rcoulomb > ic->rlist)
- {
- pme_lb->rbuf_coulomb = ic->rlistlong - ic->rcoulomb;
- }
- else
- {
- pme_lb->rbuf_coulomb = ic->rlist - ic->rcoulomb;
- }
- if (ic->rvdw > ic->rlist)
- {
- pme_lb->rbuf_vdw = ic->rlistlong - ic->rvdw;
- }
- else
- {
- pme_lb->rbuf_vdw = ic->rlist - ic->rvdw;
- }
- }
+ pme_lb->rbuf_coulomb = ic->rlist - ic->rcoulomb;
+ pme_lb->rbuf_vdw = ic->rlist - ic->rvdw;
copy_mat(box, pme_lb->box_start);
if (ir->ePBC == epbcXY && ir->nwall == 2)
pme_lb->rcut_vdw = ic->rvdw;
pme_lb->rcut_coulomb_start = ir->rcoulomb;
- pme_lb->nstcalclr_start = ir->nstcalclr;
pme_lb->cur = 0;
pme_lb->setup[0].rcut_coulomb = ic->rcoulomb;
pme_lb->setup[0].rlist = ic->rlist;
- pme_lb->setup[0].rlistlong = ic->rlistlong;
- pme_lb->setup[0].nstcalclr = ir->nstcalclr;
pme_lb->setup[0].grid[XX] = ir->nkx;
pme_lb->setup[0].grid[YY] = ir->nky;
pme_lb->setup[0].grid[ZZ] = ir->nkz;
pme_lb->cycles_n = 0;
pme_lb->cycles_c = 0;
+ if (!wallcycle_have_counter())
+ {
+ md_print_warn(cr, fp_log, "NOTE: Cycle counters unsupported or not enabled in kernel. Cannot use PME-PP balancing.\n");
+ }
+
/* Tune with GPUs and/or separate PME ranks.
* When running only on a CPU without PME ranks, PME tuning will only help
* with small numbers of atoms in the cut-off sphere.
if (pme_lb->cutoff_scheme == ecutsVERLET)
{
- set->rlist = set->rcut_coulomb + pme_lb->rbuf_coulomb;
- /* We dont use LR lists with Verlet, but this avoids if-statements in further checks */
- set->rlistlong = set->rlist;
+ /* Never decrease the Coulomb and VdW list buffers */
+ set->rlist = std::max(set->rcut_coulomb + pme_lb->rbuf_coulomb,
+ pme_lb->rcut_vdw + pme_lb->rbuf_vdw);
}
else
{
+ /* TODO Remove these lines and pme_lb->cutoff_scheme */
tmpr_coulomb = set->rcut_coulomb + pme_lb->rbuf_coulomb;
tmpr_vdw = pme_lb->rcut_vdw + pme_lb->rbuf_vdw;
+ /* Two (known) bugs with cutoff-scheme=group here:
+ * - This modification of rlist results in incorrect DD communication.
+ * - We should set fr->bTwinRange = (fr->rlistlong > fr->rlist).
+ */
set->rlist = std::min(tmpr_coulomb, tmpr_vdw);
- set->rlistlong = std::max(tmpr_coulomb, tmpr_vdw);
-
- /* Set the long-range update frequency */
- if (set->rlist == set->rlistlong)
- {
- /* No long-range interactions if the short-/long-range cutoffs are identical */
- set->nstcalclr = 0;
- }
- else if (pme_lb->nstcalclr_start == 0 || pme_lb->nstcalclr_start == 1)
- {
- /* We were not doing long-range before, but now we are since rlist!=rlistlong */
- set->nstcalclr = 1;
- }
- else
- {
- /* We were already doing long-range interactions from the start */
- if (pme_lb->rcut_vdw > pme_lb->rcut_coulomb_start)
- {
- /* We were originally doing long-range VdW-only interactions.
- * If rvdw is still longer than rcoulomb we keep the original nstcalclr,
- * but if the coulomb cutoff has become longer we should update the long-range
- * part every step.
- */
- set->nstcalclr = (tmpr_vdw > tmpr_coulomb) ? pme_lb->nstcalclr_start : 1;
- }
- else
- {
- /* We were not doing any long-range interaction from the start,
- * since it is not possible to do twin-range coulomb for the PME interaction.
- */
- set->nstcalclr = 1;
- }
- }
}
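The Verlet-scheme branch above can be sketched stand-alone: the pair-list cut-off must cover both interaction cut-offs plus the buffers fixed at setup, so the buffers never shrink while rcut_coulomb is rescaled during tuning. `verletRlist` is an illustrative name for this sketch, not a GROMACS function:

```cpp
#include <algorithm>
#include <cmath>

// Sketch (assumed names, not GROMACS API): pick the pair-list cut-off
// so that neither the Coulomb nor the VdW list buffer ever decreases
// while the Coulomb cut-off is scaled during PME load balancing.
static double verletRlist(double rcutCoulomb, double rbufCoulomb,
                          double rcutVdw,     double rbufVdw)
{
    return std::max(rcutCoulomb + rbufCoulomb, rcutVdw + rbufVdw);
}
```

With a scaled-up Coulomb cut-off the Coulomb term dominates; when tuning shrinks it below the VdW cut-off, the VdW term keeps the list from contracting.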
set->spacing = sp;
if (fp_err != NULL)
{
fprintf(fp_err, "\r%s\n", buf);
+ fflush(fp_err);
}
if (fp_log != NULL)
{
if (fp_err != NULL)
{
fprintf(fp_err, "\r%s\n", buf);
+ fflush(fp_err);
}
if (fp_log != NULL)
{
t_commrec *cr,
FILE *fp_err,
FILE *fp_log,
- t_inputrec *ir,
+ const t_inputrec *ir,
t_state *state,
double cycles,
interaction_const_t *ic,
set = &pme_lb->setup[pme_lb->cur];
set->count++;
- rtab = ir->rlistlong + ir->tabext;
+ rtab = ir->rlist + ir->tabext;
if (set->count % 2 == 1)
{
* better overall performance can be obtained with a slightly
* shorter cut-off and better DD load balancing.
*/
- set_dd_dlb_max_cutoff(cr, pme_lb->setup[pme_lb->fastest].rlistlong);
+ set_dd_dlb_max_cutoff(cr, pme_lb->setup[pme_lb->fastest].rlist);
}
}
cycles_fast = pme_lb->setup[pme_lb->fastest].cycles;
if (OK && ir->ePBC != epbcNONE)
{
- OK = (sqr(pme_lb->setup[pme_lb->cur+1].rlistlong)
+ OK = (gmx::square(pme_lb->setup[pme_lb->cur+1].rlist)
<= max_cutoff2(ir->ePBC, state->box));
if (!OK)
{
if (DOMAINDECOMP(cr))
{
OK = change_dd_cutoff(cr, state, ir,
- pme_lb->setup[pme_lb->cur].rlistlong);
+ pme_lb->setup[pme_lb->cur].rlist);
if (!OK)
{
/* Failed: do not use this setup */
if (DOMAINDECOMP(cr) && pme_lb->stage > 0)
{
- OK = change_dd_cutoff(cr, state, ir, pme_lb->setup[pme_lb->cur].rlistlong);
+ OK = change_dd_cutoff(cr, state, ir, pme_lb->setup[pme_lb->cur].rlist);
if (!OK)
{
/* For some reason the chosen cut-off is incompatible with DD.
ic->rcoulomb = set->rcut_coulomb;
ic->rlist = set->rlist;
- ic->rlistlong = set->rlistlong;
- ir->nstcalclr = set->nstcalclr;
ic->ewaldcoeff_q = set->ewaldcoeff_q;
/* TODO: centralize the code that sets the potentials shifts */
if (ic->coulomb_modifier == eintmodPOTSHIFT)
{
- ic->sh_ewald = std::erfc(ic->ewaldcoeff_q*ic->rcoulomb);
+ GMX_RELEASE_ASSERT(ic->rcoulomb != 0, "Cutoff radius cannot be zero");
+ ic->sh_ewald = std::erfc(ic->ewaldcoeff_q*ic->rcoulomb) / ic->rcoulomb;
}
if (EVDW_PME(ic->vdwtype))
{
{
real crc2;
- ic->dispersion_shift.cpot = -std::pow(static_cast<double>(ic->rvdw), -6.0);
- ic->repulsion_shift.cpot = -std::pow(static_cast<double>(ic->rvdw), -12.0);
+ ic->dispersion_shift.cpot = -1.0/gmx::power6(static_cast<double>(ic->rvdw));
+ ic->repulsion_shift.cpot = -1.0/gmx::power12(static_cast<double>(ic->rvdw));
ic->sh_invrc6 = -ic->dispersion_shift.cpot;
- crc2 = sqr(ic->ewaldcoeff_lj*ic->rvdw);
- ic->sh_lj_ewald = (exp(-crc2)*(1 + crc2 + 0.5*crc2*crc2) - 1)*std::pow(static_cast<double>(ic->rvdw), -6.0);
+ crc2 = gmx::square(ic->ewaldcoeff_lj*ic->rvdw);
+ ic->sh_lj_ewald = (std::exp(-crc2)*(1 + crc2 + 0.5*crc2*crc2) - 1)/gmx::power6(ic->rvdw);
}
}
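The LJ-PME shift formula in the hunk above has two easy-to-check limits: with a zero Ewald coefficient the shift vanishes, and for a large coefficient it approaches the plain cut-off shift -1/rc^6. A stand-alone sketch (illustrative name, not GROMACS API):

```cpp
#include <cmath>

// Sketch of the LJ-PME potential shift used above:
//   crc2 = (beta*rc)^2
//   sh   = (exp(-crc2)*(1 + crc2 + crc2^2/2) - 1) / rc^6
static double shLjEwald(double ewaldcoeffLj, double rvdw)
{
    double crc2 = ewaldcoeffLj*rvdw*ewaldcoeffLj*rvdw;
    return (std::exp(-crc2)*(1 + crc2 + 0.5*crc2*crc2) - 1)/std::pow(rvdw, 6);
}
```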
* texture objects are used), but as this is initialization code, there
* is no point in complicating things.
*/
-#ifdef GMX_THREAD_MPI
+#if GMX_THREAD_MPI
if (PAR(cr) && use_GPU(nbv))
{
gmx_barrier(cr);
t_commrec *cr,
FILE *fp_err,
FILE *fp_log,
- t_inputrec *ir,
+ const t_inputrec *ir,
t_forcerec *fr,
t_state *state,
gmx_wallcycle_t wcycle,
* This also ensures that we won't disable the currently
* optimal setting during a second round of PME balancing.
*/
- set_dd_dlb_max_cutoff(cr, fr->ic->rlistlong);
+ set_dd_dlb_max_cutoff(cr, fr->ic->rlist);
}
}
fr->ewaldcoeff_q = fr->ic->ewaldcoeff_q;
fr->ewaldcoeff_lj = fr->ic->ewaldcoeff_lj;
fr->rlist = fr->ic->rlist;
- fr->rlistlong = fr->ic->rlistlong;
fr->rcoulomb = fr->ic->rcoulomb;
fr->rvdw = fr->ic->rvdw;
return setup->grid[XX]*setup->grid[YY]*setup->grid[ZZ];
}
-/*! \brief Retuern the largest short-range list cut-off radius */
-static real pme_loadbal_rlist(const pme_setup_t *setup)
-{
- /* With the group cut-off scheme we can have twin-range either
- * for Coulomb or for VdW, so we need a check here.
- * With the Verlet cut-off scheme rlist=rlistlong.
- */
- if (setup->rcut_coulomb > setup->rlist)
- {
- return setup->rlistlong;
- }
- else
- {
- return setup->rlist;
- }
-}
-
/*! \brief Print one load-balancing setting */
static void print_pme_loadbal_setting(FILE *fplog,
const char *name,
fprintf(fplog,
" %-7s %6.3f nm %6.3f nm %3d %3d %3d %5.3f nm %5.3f nm\n",
name,
- setup->rcut_coulomb, pme_loadbal_rlist(setup),
+ setup->rcut_coulomb, setup->rlist,
setup->grid[XX], setup->grid[YY], setup->grid[ZZ],
setup->spacing, 1/setup->ewaldcoeff_q);
}
double pp_ratio, grid_ratio;
real pp_ratio_temporary;
- pp_ratio_temporary = pme_loadbal_rlist(&pme_lb->setup[pme_lb->cur])/pme_loadbal_rlist(&pme_lb->setup[0]);
- pp_ratio = std::pow(static_cast<double>(pp_ratio_temporary), 3.0);
+ pp_ratio_temporary = pme_lb->setup[pme_lb->cur].rlist / pme_lb->setup[0].rlist;
+ pp_ratio = gmx::power3(pp_ratio_temporary);
grid_ratio = pme_grid_points(&pme_lb->setup[pme_lb->cur])/
(double)pme_grid_points(&pme_lb->setup[0]);
*
* Copyright (c) 1991-2000, University of Groningen, The Netherlands.
* Copyright (c) 2001-2004, The GROMACS development team.
- * Copyright (c) 2013,2014,2015,2016, by the GROMACS development team, led by
+ * Copyright (c) 2013,2014,2015,2016,2017, by the GROMACS development team, led by
* Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
* and including many others, as listed in the AUTHORS file in the
* top-level source directory and at http://www.gromacs.org.
*/
#include "gmxpre.h"
+#include "forcerec.h"
+
#include "config.h"
#include <assert.h>
-#include <math.h>
#include <stdlib.h>
#include <string.h>
+#include <cmath>
+
#include <algorithm>
+#include "gromacs/commandline/filenm.h"
#include "gromacs/domdec/domdec.h"
+#include "gromacs/domdec/domdec_struct.h"
#include "gromacs/ewald/ewald.h"
-#include "gromacs/fileio/filenm.h"
-#include "gromacs/gmxlib/gpu_utils/gpu_utils.h"
-#include "gromacs/legacyheaders/copyrite.h"
-#include "gromacs/legacyheaders/force.h"
-#include "gromacs/legacyheaders/gmx_detect_hardware.h"
-#include "gromacs/legacyheaders/gmx_omp_nthreads.h"
-#include "gromacs/legacyheaders/inputrec.h"
-#include "gromacs/legacyheaders/macros.h"
-#include "gromacs/legacyheaders/md_logging.h"
-#include "gromacs/legacyheaders/md_support.h"
-#include "gromacs/legacyheaders/names.h"
-#include "gromacs/legacyheaders/network.h"
-#include "gromacs/legacyheaders/nonbonded.h"
-#include "gromacs/legacyheaders/ns.h"
-#include "gromacs/legacyheaders/qmmm.h"
-#include "gromacs/legacyheaders/tables.h"
-#include "gromacs/legacyheaders/txtdump.h"
-#include "gromacs/legacyheaders/typedefs.h"
-#include "gromacs/legacyheaders/types/commrec.h"
+#include "gromacs/fileio/filetypes.h"
+#include "gromacs/gmxlib/md_logging.h"
+#include "gromacs/gmxlib/network.h"
+#include "gromacs/gmxlib/nonbonded/nonbonded.h"
+#include "gromacs/gpu_utils/gpu_utils.h"
+#include "gromacs/hardware/detecthardware.h"
#include "gromacs/listed-forces/manage-threading.h"
+#include "gromacs/listed-forces/pairs.h"
#include "gromacs/math/calculate-ewald-splitting-coefficient.h"
+#include "gromacs/math/functions.h"
#include "gromacs/math/units.h"
#include "gromacs/math/utilities.h"
#include "gromacs/math/vec.h"
+#include "gromacs/mdlib/force.h"
#include "gromacs/mdlib/forcerec-threading.h"
+#include "gromacs/mdlib/gmx_omp_nthreads.h"
+#include "gromacs/mdlib/md_support.h"
#include "gromacs/mdlib/nb_verlet.h"
#include "gromacs/mdlib/nbnxn_atomdata.h"
#include "gromacs/mdlib/nbnxn_gpu_data_mgmt.h"
#include "gromacs/mdlib/nbnxn_search.h"
#include "gromacs/mdlib/nbnxn_simd.h"
+#include "gromacs/mdlib/nbnxn_util.h"
+#include "gromacs/mdlib/ns.h"
+#include "gromacs/mdlib/qmmm.h"
+#include "gromacs/mdlib/sim_util.h"
+#include "gromacs/mdtypes/commrec.h"
+#include "gromacs/mdtypes/fcdata.h"
+#include "gromacs/mdtypes/group.h"
+#include "gromacs/mdtypes/inputrec.h"
+#include "gromacs/mdtypes/md_enums.h"
#include "gromacs/pbcutil/ishift.h"
#include "gromacs/pbcutil/pbc.h"
#include "gromacs/simd/simd.h"
+#include "gromacs/tables/forcetable.h"
#include "gromacs/topology/mtop_util.h"
+#include "gromacs/trajectory/trajectoryframe.h"
+#include "gromacs/utility/cstringutil.h"
#include "gromacs/utility/exceptions.h"
#include "gromacs/utility/fatalerror.h"
#include "gromacs/utility/gmxassert.h"
+#include "gromacs/utility/pleasecite.h"
#include "gromacs/utility/smalloc.h"
#include "gromacs/utility/stringutil.h"
#include "nbnxn_gpu_jit_support.h"
+const char *egrp_nm[egNR+1] = {
+ "Coul-SR", "LJ-SR", "Buck-SR",
+ "Coul-14", "LJ-14", NULL
+};
+
t_forcerec *mk_forcerec(void)
{
t_forcerec *fr;
int i, j, k, atnr;
real c6, c6i, c6j, c12i, c12j, epsi, epsj, sigmai, sigmaj;
real *grid;
- const real oneOverSix = 1.0 / 6.0;
/* For LJ-PME simulations, we correct the energies with the reciprocal space
* inside of the cut-off. To do this the non-bonded kernels need to have
c12i = idef->iparams[i*(atnr+1)].lj.c12;
c6j = idef->iparams[j*(atnr+1)].lj.c6;
c12j = idef->iparams[j*(atnr+1)].lj.c12;
- c6 = sqrt(c6i * c6j);
+ c6 = std::sqrt(c6i * c6j);
if (fr->ljpme_combination_rule == eljpmeLB
&& !gmx_numzero(c6) && !gmx_numzero(c12i) && !gmx_numzero(c12j))
{
- sigmai = pow(c12i / c6i, oneOverSix);
- sigmaj = pow(c12j / c6j, oneOverSix);
+ sigmai = gmx::sixthroot(c12i / c6i);
+ sigmaj = gmx::sixthroot(c12j / c6j);
epsi = c6i * c6i / c12i;
epsj = c6j * c6j / c12j;
- c6 = sqrt(epsi * epsj) * pow(0.5*(sigmai+sigmaj), 6);
+ c6 = std::sqrt(epsi * epsj) * gmx::power6(0.5*(sigmai+sigmaj));
}
/* Store the elements at the same relative positions as C6 in nbfp in order
* to simplify access in the kernels
int i, j, atnr;
real c6i, c6j, c12i, c12j, epsi, epsj, sigmai, sigmaj;
real c6, c12;
- const real oneOverSix = 1.0 / 6.0;
atnr = idef->atnr;
snew(nbfp, 2*atnr*atnr);
c12i = idef->iparams[i*(atnr+1)].lj.c12;
c6j = idef->iparams[j*(atnr+1)].lj.c6;
c12j = idef->iparams[j*(atnr+1)].lj.c12;
- c6 = sqrt(c6i * c6j);
- c12 = sqrt(c12i * c12j);
+ c6 = std::sqrt(c6i * c6j);
+ c12 = std::sqrt(c12i * c12j);
if (comb_rule == eCOMB_ARITHMETIC
&& !gmx_numzero(c6) && !gmx_numzero(c12))
{
- sigmai = pow(c12i / c6i, oneOverSix);
- sigmaj = pow(c12j / c6j, oneOverSix);
+ sigmai = gmx::sixthroot(c12i / c6i);
+ sigmaj = gmx::sixthroot(c12j / c6j);
epsi = c6i * c6i / c12i;
epsj = c6j * c6j / c12j;
- c6 = sqrt(epsi * epsj) * pow(0.5*(sigmai+sigmaj), 6);
- c12 = sqrt(epsi * epsj) * pow(0.5*(sigmai+sigmaj), 12);
+ c6 = std::sqrt(epsi * epsj) * gmx::power6(0.5*(sigmai+sigmaj));
+ c12 = std::sqrt(epsi * epsj) * gmx::power12(0.5*(sigmai+sigmaj));
}
C6(nbfp, atnr, i, j) = c6*6.0;
C12(nbfp, atnr, i, j) = c12*12.0;
}
}
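The Lorentz-Berthelot branch above recovers sigma and epsilon from the diagonal C6/C12 pairs and recombines them; note that c6^2/c12 equals 4*epsilon when C6 = 4*eps*sigma^6, which cancels in the final product. A stand-alone sketch (assumed names, not the kernel code above):

```cpp
#include <cmath>

// Sketch of Lorentz-Berthelot combination from C6/C12 pairs:
// sigma_i = (C12_i/C6_i)^(1/6), "eps_i" = C6_i^2/C12_i (= 4*epsilon_i),
// then sigma is averaged and epsilon combined geometrically.
static void combineLB(double c6i, double c12i, double c6j, double c12j,
                      double *c6, double *c12)
{
    double sigmai = std::pow(c12i/c6i, 1.0/6.0);
    double sigmaj = std::pow(c12j/c6j, 1.0/6.0);
    double epsi   = c6i*c6i/c12i;
    double epsj   = c6j*c6j/c12j;
    double sigma  = 0.5*(sigmai + sigmaj);
    *c6  = std::sqrt(epsi*epsj)*std::pow(sigma, 6);
    *c12 = std::sqrt(epsi*epsj)*std::pow(sigma, 12);
}
```

For identical atom types the combination must reproduce the input C6 and C12, which makes a convenient sanity check.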
-static void make_nbf_tables(FILE *fp, const output_env_t oenv,
+static void make_nbf_tables(FILE *fp,
t_forcerec *fr, real rtab,
- const t_commrec *cr,
const char *tabfn, char *eg1, char *eg2,
t_nblists *nbl)
{
sprintf(buf + strlen(tabfn) - strlen(ftp2ext(efXVG)) - 1, "_%s_%s.%s",
eg1, eg2, ftp2ext(efXVG));
}
- nbl->table_elec_vdw = make_tables(fp, oenv, fr, MASTER(cr), buf, rtab, 0);
+ nbl->table_elec_vdw = make_tables(fp, fr, buf, rtab, 0);
/* Copy the contents of the table to separate coulomb and LJ tables too,
* to improve cache performance.
*/
* the table data to be aligned to 16-byte. The pointers could be freed
* but currently aren't.
*/
- nbl->table_elec.interaction = GMX_TABLE_INTERACTION_ELEC;
- nbl->table_elec.format = nbl->table_elec_vdw.format;
- nbl->table_elec.r = nbl->table_elec_vdw.r;
- nbl->table_elec.n = nbl->table_elec_vdw.n;
- nbl->table_elec.scale = nbl->table_elec_vdw.scale;
- nbl->table_elec.scale_exp = nbl->table_elec_vdw.scale_exp;
- nbl->table_elec.formatsize = nbl->table_elec_vdw.formatsize;
- nbl->table_elec.ninteractions = 1;
- nbl->table_elec.stride = nbl->table_elec.formatsize * nbl->table_elec.ninteractions;
- snew_aligned(nbl->table_elec.data, nbl->table_elec.stride*(nbl->table_elec.n+1), 32);
-
- nbl->table_vdw.interaction = GMX_TABLE_INTERACTION_VDWREP_VDWDISP;
- nbl->table_vdw.format = nbl->table_elec_vdw.format;
- nbl->table_vdw.r = nbl->table_elec_vdw.r;
- nbl->table_vdw.n = nbl->table_elec_vdw.n;
- nbl->table_vdw.scale = nbl->table_elec_vdw.scale;
- nbl->table_vdw.scale_exp = nbl->table_elec_vdw.scale_exp;
- nbl->table_vdw.formatsize = nbl->table_elec_vdw.formatsize;
- nbl->table_vdw.ninteractions = 2;
- nbl->table_vdw.stride = nbl->table_vdw.formatsize * nbl->table_vdw.ninteractions;
- snew_aligned(nbl->table_vdw.data, nbl->table_vdw.stride*(nbl->table_vdw.n+1), 32);
-
- for (i = 0; i <= nbl->table_elec_vdw.n; i++)
+ snew(nbl->table_elec, 1);
+ nbl->table_elec->interaction = GMX_TABLE_INTERACTION_ELEC;
+ nbl->table_elec->format = nbl->table_elec_vdw->format;
+ nbl->table_elec->r = nbl->table_elec_vdw->r;
+ nbl->table_elec->n = nbl->table_elec_vdw->n;
+ nbl->table_elec->scale = nbl->table_elec_vdw->scale;
+ nbl->table_elec->formatsize = nbl->table_elec_vdw->formatsize;
+ nbl->table_elec->ninteractions = 1;
+ nbl->table_elec->stride = nbl->table_elec->formatsize * nbl->table_elec->ninteractions;
+ snew_aligned(nbl->table_elec->data, nbl->table_elec->stride*(nbl->table_elec->n+1), 32);
+
+ snew(nbl->table_vdw, 1);
+ nbl->table_vdw->interaction = GMX_TABLE_INTERACTION_VDWREP_VDWDISP;
+ nbl->table_vdw->format = nbl->table_elec_vdw->format;
+ nbl->table_vdw->r = nbl->table_elec_vdw->r;
+ nbl->table_vdw->n = nbl->table_elec_vdw->n;
+ nbl->table_vdw->scale = nbl->table_elec_vdw->scale;
+ nbl->table_vdw->formatsize = nbl->table_elec_vdw->formatsize;
+ nbl->table_vdw->ninteractions = 2;
+ nbl->table_vdw->stride = nbl->table_vdw->formatsize * nbl->table_vdw->ninteractions;
+ snew_aligned(nbl->table_vdw->data, nbl->table_vdw->stride*(nbl->table_vdw->n+1), 32);
+
+ for (i = 0; i <= nbl->table_elec_vdw->n; i++)
{
for (j = 0; j < 4; j++)
{
- nbl->table_elec.data[4*i+j] = nbl->table_elec_vdw.data[12*i+j];
+ nbl->table_elec->data[4*i+j] = nbl->table_elec_vdw->data[12*i+j];
}
for (j = 0; j < 8; j++)
{
- nbl->table_vdw.data[8*i+j] = nbl->table_elec_vdw.data[12*i+4+j];
+ nbl->table_vdw->data[8*i+j] = nbl->table_elec_vdw->data[12*i+4+j];
}
}
}
if (fr->natoms_force_constr > fr->nalloc_force)
{
fr->nalloc_force = over_alloc_dd(fr->natoms_force_constr);
-
- if (fr->bTwinRange)
- {
- srenew(fr->f_twin, fr->nalloc_force);
- }
}
if (fr->bF_NoVirSum)
return cutoff;
}
-static void make_adress_tf_tables(FILE *fp, const output_env_t oenv,
- t_forcerec *fr, const t_inputrec *ir,
- const char *tabfn, const gmx_mtop_t *mtop,
- matrix box)
-{
- char buf[STRLEN];
- int i, j;
-
- if (tabfn == NULL)
- {
- gmx_fatal(FARGS, "No thermoforce table file given. Use -tabletf to specify a file\n");
- return;
- }
-
- snew(fr->atf_tabs, ir->adress->n_tf_grps);
-
- sprintf(buf, "%s", tabfn);
- for (i = 0; i < ir->adress->n_tf_grps; i++)
- {
- j = ir->adress->tf_table_index[i]; /* get energy group index */
- sprintf(buf + strlen(tabfn) - strlen(ftp2ext(efXVG)) - 1, "tf_%s.%s",
- *(mtop->groups.grpname[mtop->groups.grps[egcENER].nm_ind[j]]), ftp2ext(efXVG));
- if (fp)
- {
- fprintf(fp, "loading tf table for energygrp index %d from %s\n", ir->adress->tf_table_index[i], buf);
- }
- fr->atf_tabs[i] = make_atf_table(fp, oenv, fr, buf, box);
- }
-
-}
-
gmx_bool can_use_allvsall(const t_inputrec *ir, gmx_bool bPrintNote, t_commrec *cr, FILE *fp)
{
gmx_bool bAllvsAll;
*kernel_type = nbnxnk4x4_PlainC;
*ewald_excl = ewaldexclTable;
-#ifdef GMX_NBNXN_SIMD
+#if GMX_SIMD
{
#ifdef GMX_NBNXN_SIMD_4XN
*kernel_type = nbnxnk4xN_SIMD_4xN;
*/
*kernel_type = nbnxnk4xN_SIMD_4xN;
-#ifndef GMX_SIMD_HAVE_FMA
+#if !GMX_SIMD_HAVE_FMA
if (EEL_PME_EWALD(ir->coulombtype) ||
EVDW_PME(ir->vdwtype))
{
* In single precision, this is faster on Bulldozer.
*/
#if GMX_SIMD_REAL_WIDTH >= 8 || \
- (GMX_SIMD_REAL_WIDTH >= 4 && defined GMX_SIMD_HAVE_FMA && !defined GMX_DOUBLE) || \
- defined GMX_SIMD_IBM_QPX
+ (GMX_SIMD_REAL_WIDTH >= 4 && GMX_SIMD_HAVE_FMA && !GMX_DOUBLE) || GMX_SIMD_IBM_QPX
*ewald_excl = ewaldexclAnalytical;
#endif
if (getenv("GMX_NBNXN_EWALD_TABLE") != NULL)
}
}
-#endif /* GMX_NBNXN_SIMD */
+#endif // GMX_SIMD
}
break;
case nbnxnk4xN_SIMD_4xN:
case nbnxnk4xN_SIMD_2xNN:
-#ifdef GMX_NBNXN_SIMD
-#if defined GMX_SIMD_X86_SSE2
- returnvalue = "SSE2";
-#elif defined GMX_SIMD_X86_SSE4_1
- returnvalue = "SSE4.1";
-#elif defined GMX_SIMD_X86_AVX_128_FMA
- returnvalue = "AVX_128_FMA";
-#elif defined GMX_SIMD_X86_AVX_256
- returnvalue = "AVX_256";
-#elif defined GMX_SIMD_X86_AVX2_256
- returnvalue = "AVX2_256";
-#else
+#if GMX_SIMD
returnvalue = "SIMD";
-#endif
-#else /* GMX_NBNXN_SIMD */
+#else // GMX_SIMD
returnvalue = "not available";
-#endif /* GMX_NBNXN_SIMD */
+#endif // GMX_SIMD
break;
case nbnxnk8x8x8_GPU: returnvalue = "GPU"; break;
case nbnxnk8x8x8_PlainC: returnvalue = "plain C"; break;
{
fprintf(fp, "\nUsing %s %dx%d non-bonded kernels\n\n",
lookup_nbnxn_kernel_name(*kernel_type),
- nbnxn_kernel_to_ci_size(*kernel_type),
- nbnxn_kernel_to_cj_size(*kernel_type));
+ nbnxn_kernel_to_cluster_i_size(*kernel_type),
+ nbnxn_kernel_to_cluster_j_size(*kernel_type));
if (nbnxnk4x4_PlainC == *kernel_type ||
nbnxnk8x8x8_PlainC == *kernel_type)
sfree_aligned(ic->tabq_vdw_F);
sfree_aligned(ic->tabq_vdw_V);
- if (ic->eeltype == eelEWALD || EEL_PME(ic->eeltype))
+ if (EEL_PME_EWALD(ic->eeltype))
{
/* Create the original table data in FDV0 */
snew_aligned(ic->tabq_coul_FDV0, ic->tabq_size*4, 32);
interaction_const_t *ic,
real rtab)
{
- if (ic->eeltype == eelEWALD || EEL_PME(ic->eeltype) || EVDW_PME(ic->vdwtype))
+ if (EEL_PME_EWALD(ic->eeltype) || EVDW_PME(ic->vdwtype))
{
init_ewald_f_table(ic, rtab);
* force/p = r^-(p+1) + c2*r^2 + c3*r^3
* potential = r^-p + c2/3*r^3 + c3/4*r^4 + cpot
*/
- sc->c2 = ((p + 1)*rsw - (p + 4)*rc)/(pow(rc, p + 2)*pow(rc - rsw, 2));
- sc->c3 = -((p + 1)*rsw - (p + 3)*rc)/(pow(rc, p + 2)*pow(rc - rsw, 3));
- sc->cpot = -pow(rc, -p) + p*sc->c2/3*pow(rc - rsw, 3) + p*sc->c3/4*pow(rc - rsw, 4);
+ sc->c2 = ((p + 1)*rsw - (p + 4)*rc)/(pow(rc, p + 2)*gmx::square(rc - rsw));
+ sc->c3 = -((p + 1)*rsw - (p + 3)*rc)/(pow(rc, p + 2)*gmx::power3(rc - rsw));
+ sc->cpot = -pow(rc, -p) + p*sc->c2/3*gmx::power3(rc - rsw) + p*sc->c3/4*gmx::power4(rc - rsw);
}
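The force-switch constants above are chosen so that the modified force is exactly zero at the cut-off rc. A stand-alone sketch evaluating force/p = r^-(p+1) + c2*(r-rsw)^2 + c3*(r-rsw)^3 with those constants (illustrative name, not GROMACS API):

```cpp
#include <cmath>

// Sketch of the force-switched force (divided by the power p) from the
// constants in the hunk above; it should vanish exactly at r = rc.
static double switchedForceOverP(double r, double rsw, double rc, double p)
{
    double d  = rc - rsw;
    double c2 = ((p + 1)*rsw - (p + 4)*rc)/(std::pow(rc, p + 2)*d*d);
    double c3 = -((p + 1)*rsw - (p + 3)*rc)/(std::pow(rc, p + 2)*d*d*d);
    double x  = r - rsw;
    return std::pow(r, -(p + 1)) + c2*x*x + c3*x*x*x;
}
```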
static void potential_switch_constants(real rsw, real rc,
* force = force*dsw - potential*sw
* potential *= sw
*/
- sc->c3 = -10*pow(rc - rsw, -3);
- sc->c4 = 15*pow(rc - rsw, -4);
- sc->c5 = -6*pow(rc - rsw, -5);
+ sc->c3 = -10/gmx::power3(rc - rsw);
+ sc->c4 = 15/gmx::power4(rc - rsw);
+ sc->c5 = -6/gmx::power5(rc - rsw);
}
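The potential-switch constants above define the fifth-order polynomial sw(r) = 1 + c3*(r-rsw)^3 + c4*(r-rsw)^4 + c5*(r-rsw)^5, which goes smoothly from 1 at rsw to 0 at rc with vanishing first and second derivatives at rc. A stand-alone sketch (illustrative name, not GROMACS API):

```cpp
#include <cmath>

// Sketch of the switching function built from the constants above:
// c3 = -10/d^3, c4 = 15/d^4, c5 = -6/d^5 with d = rc - rsw,
// so sw(rsw) = 1 and sw(rc) = 1 - 10 + 15 - 6 = 0.
static double switchFunction(double r, double rsw, double rc)
{
    double d  = rc - rsw;
    double c3 = -10/(d*d*d);
    double c4 = 15/(d*d*d*d);
    double c5 = -6/(d*d*d*d*d);
    double x  = r - rsw;
    return 1 + c3*x*x*x + c4*x*x*x*x + c5*x*x*x*x*x;
}
```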
/*! \brief Construct interaction constants
const t_forcerec *fr)
{
interaction_const_t *ic;
- const real minusSix = -6.0;
- const real minusTwelve = -12.0;
snew(ic, 1);
snew_aligned(ic->tabq_coul_V, 16, 32);
ic->rlist = fr->rlist;
- ic->rlistlong = fr->rlistlong;
/* Lennard-Jones */
ic->vdwtype = fr->vdwtype;
{
case eintmodPOTSHIFT:
/* Only shift the potential, don't touch the force */
- ic->dispersion_shift.cpot = -pow(ic->rvdw, minusSix);
- ic->repulsion_shift.cpot = -pow(ic->rvdw, minusTwelve);
+ ic->dispersion_shift.cpot = -1.0/gmx::power6(ic->rvdw);
+ ic->repulsion_shift.cpot = -1.0/gmx::power12(ic->rvdw);
if (EVDW_PME(ic->vdwtype))
{
real crc2;
- crc2 = sqr(ic->ewaldcoeff_lj*ic->rvdw);
- ic->sh_lj_ewald = (exp(-crc2)*(1 + crc2 + 0.5*crc2*crc2) - 1)*pow(ic->rvdw, minusSix);
+ crc2 = gmx::square(ic->ewaldcoeff_lj*ic->rvdw);
+ ic->sh_lj_ewald = (std::exp(-crc2)*(1 + crc2 + 0.5*crc2*crc2) - 1)/gmx::power6(ic->rvdw);
}
break;
case eintmodFORCESWITCH:
ic->epsfac = fr->epsfac;
ic->ewaldcoeff_q = fr->ewaldcoeff_q;
- if (fr->coulomb_modifier == eintmodPOTSHIFT)
+ if (EEL_PME_EWALD(ic->eeltype) && ic->coulomb_modifier == eintmodPOTSHIFT)
{
- ic->sh_ewald = std::erfc(ic->ewaldcoeff_q*ic->rcoulomb);
+ GMX_RELEASE_ASSERT(ic->rcoulomb != 0, "Cutoff radius cannot be zero");
+ ic->sh_ewald = std::erfc(ic->ewaldcoeff_q*ic->rcoulomb) / ic->rcoulomb;
}
else
{
bSimpleList = nbnxn_kernel_pairlist_simple(nbv->grp[i].kernel_type);
- if (bSimpleList && (fr->vdwtype == evdwCUT && (fr->vdw_modifier == eintmodNONE || fr->vdw_modifier == eintmodPOTSHIFT)))
+ if (fr->vdwtype == evdwCUT &&
+ (fr->vdw_modifier == eintmodNONE ||
+ fr->vdw_modifier == eintmodPOTSHIFT) &&
+ getenv("GMX_NO_LJ_COMB_RULE") == NULL)
{
/* Plain LJ cut-off: we can optimize with combination rules */
enbnxninitcombrule = enbnxninitcombruleDETECT;
{
/* init the NxN GPU data; the last argument tells whether we'll have
* both local and non-local NB calculation on GPU */
- nbnxn_gpu_init(fp, &nbv->gpu_nbv,
+ nbnxn_gpu_init(&nbv->gpu_nbv,
&fr->hwinfo->gpu_info,
fr->gpu_opt,
fr->ic,
* texture objects are used), but as this is initialization code, there
* is no point in complicating things.
*/
-#ifdef GMX_THREAD_MPI
+#if GMX_THREAD_MPI
if (PAR(cr))
{
gmx_barrier(cr);
char *end;
nbv->min_ci_balanced = strtol(env, &end, 10);
- if (!end || (*end != 0) || nbv->min_ci_balanced <= 0)
+ if (!end || (*end != 0) || nbv->min_ci_balanced < 0)
{
- gmx_fatal(FARGS, "Invalid value passed in GMX_NB_MIN_CI=%s, positive integer required", env);
+ gmx_fatal(FARGS, "Invalid value passed in GMX_NB_MIN_CI=%s, non-negative integer required", env);
}
if (debug)
}
void init_forcerec(FILE *fp,
- const output_env_t oenv,
t_forcerec *fr,
t_fcdata *fcd,
const t_inputrec *ir,
const t_commrec *cr,
matrix box,
const char *tabfn,
- const char *tabafn,
const char *tabpfn,
const t_filenm *tabbfnm,
const char *nbpu_opt,
double dbl;
const t_block *cgs;
gmx_bool bGenericKernelOnly;
- gmx_bool bMakeTables, bMakeSeparate14Table, bSomeNormalNbListsAreInUse;
+ gmx_bool needGroupSchemeTables, bSomeNormalNbListsAreInUse;
gmx_bool bFEP_NonBonded;
int *nm_ind, egp_flags;
fr->n_tpi = 0;
}
- /* Copy AdResS parameters */
- if (ir->bAdress)
+ if (ir->coulombtype == eelRF_NEC_UNSUPPORTED)
{
- fr->adress_type = ir->adress->type;
- fr->adress_const_wf = ir->adress->const_wf;
- fr->adress_ex_width = ir->adress->ex_width;
- fr->adress_hy_width = ir->adress->hy_width;
- fr->adress_icor = ir->adress->icor;
- fr->adress_site = ir->adress->site;
- fr->adress_ex_forcecap = ir->adress->ex_forcecap;
- fr->adress_do_hybridpairs = ir->adress->do_hybridpairs;
-
-
- snew(fr->adress_group_explicit, ir->adress->n_energy_grps);
- for (i = 0; i < ir->adress->n_energy_grps; i++)
- {
- fr->adress_group_explicit[i] = ir->adress->group_explicit[i];
- }
+ gmx_fatal(FARGS, "%s electrostatics is no longer supported",
+ eel_names[ir->coulombtype]);
+ }
- fr->n_adress_tf_grps = ir->adress->n_tf_grps;
- snew(fr->adress_tf_table_index, fr->n_adress_tf_grps);
- for (i = 0; i < fr->n_adress_tf_grps; i++)
- {
- fr->adress_tf_table_index[i] = ir->adress->tf_table_index[i];
- }
- copy_rvec(ir->adress->refs, fr->adress_refs);
+ if (ir->bAdress)
+ {
+ gmx_fatal(FARGS, "AdResS simulations are no longer supported");
}
- else
+ if (ir->useTwinRange)
{
- fr->adress_type = eAdressOff;
- fr->adress_do_hybridpairs = FALSE;
+ gmx_fatal(FARGS, "Twin-range simulations are no longer supported");
}
-
/* Copy the user determined parameters */
fr->userint1 = ir->userint1;
fr->userint2 = ir->userint2;
if (ir->fepvals->bScCoul)
{
fr->sc_alphacoul = ir->fepvals->sc_alpha;
- fr->sc_sigma6_min = pow(ir->fepvals->sc_sigma_min, 6);
+ fr->sc_sigma6_min = gmx::power6(ir->fepvals->sc_sigma_min);
}
else
{
}
fr->sc_power = ir->fepvals->sc_power;
fr->sc_r_power = ir->fepvals->sc_r_power;
- fr->sc_sigma6_def = pow(ir->fepvals->sc_sigma, 6);
+ fr->sc_sigma6_def = gmx::power6(ir->fepvals->sc_sigma);
env = getenv("GMX_SCSIGMA_MIN");
if (env != NULL)
{
dbl = 0;
sscanf(env, "%20lf", &dbl);
- fr->sc_sigma6_min = pow(dbl, 6);
+ fr->sc_sigma6_min = gmx::power6(dbl);
if (fp)
{
fprintf(fp, "Setting the minimum soft core sigma to %g nm\n", dbl);
copy_rvec(ir->posres_com, fr->posres_com);
copy_rvec(ir->posres_comB, fr->posres_comB);
fr->rlist = cutoff_inf(ir->rlist);
- fr->rlistlong = cutoff_inf(ir->rlistlong);
fr->eeltype = ir->coulombtype;
fr->vdwtype = ir->vdwtype;
fr->ljpme_combination_rule = ir->ljpme_combination_rule;
case eelRF:
case eelGRF:
- case eelRF_NEC:
fr->nbkernel_elec_interaction = GMX_NBKERNEL_ELEC_REACTIONFIELD;
break;
fr->rcoulomb = cutoff_inf(ir->rcoulomb);
fr->rcoulomb_switch = ir->rcoulomb_switch;
- fr->bTwinRange = fr->rlistlong > fr->rlist;
- fr->bEwald = (EEL_PME(fr->eeltype) || fr->eeltype == eelEWALD);
+ fr->bEwald = EEL_PME_EWALD(fr->eeltype);
fr->reppow = mtop->ffparams.reppow;
if (fp)
{
- fprintf(fp, "Table routines are used for coulomb: %s\n", bool_names[fr->bcoultab]);
- fprintf(fp, "Table routines are used for vdw: %s\n", bool_names[fr->bvdwtab ]);
+ fprintf(fp, "Table routines are used for coulomb: %s\n",
+ gmx::boolToString(fr->bcoultab));
+ fprintf(fp, "Table routines are used for vdw: %s\n",
+ gmx::boolToString(fr->bvdwtab));
}
if (fr->bvdwtab == TRUE)
fr->bF_NoVirSum = (EEL_FULL(fr->eeltype) || EVDW_PME(fr->vdwtype) ||
gmx_mtop_ftype_count(mtop, F_POSRES) > 0 ||
gmx_mtop_ftype_count(mtop, F_FBPOSRES) > 0 ||
- IR_ELEC_FIELD(*ir) ||
- (fr->adress_icor != eAdressICOff)
+ inputrecElecField(ir)
);
if (fr->cutoff_scheme == ecutsGROUP &&
}
fr->eDispCorr = ir->eDispCorr;
+ fr->numAtomsForDispersionCorrection = mtop->natoms;
if (ir->eDispCorr != edispcNO)
{
set_avcsixtwelve(fp, fr, mtop);
/* Generate the GB table if needed */
if (fr->bGB)
{
-#ifdef GMX_DOUBLE
+#if GMX_DOUBLE
fr->gbtabscale = 2000;
#else
fr->gbtabscale = 500;
#endif
fr->gbtabr = 100;
- fr->gbtab = make_gb_table(oenv, fr);
+ fr->gbtab = make_gb_table(fr);
init_gb(&fr->born, fr, ir, mtop, ir->gb_algorithm);
/*This now calculates sum for q and c6*/
set_chargesum(fp, fr, mtop);
- /* if we are using LR electrostatics, and they are tabulated,
- * the tables will contain modified coulomb interactions.
- * Since we want to use the non-shifted ones for 1-4
- * coulombic interactions, we must have an extra set of tables.
- */
-
- /* Construct tables.
- * A little unnecessary to make both vdw and coul tables sometimes,
- * but what the heck... */
-
- bMakeTables = fr->bcoultab || fr->bvdwtab || fr->bEwald ||
- (ir->eDispCorr != edispcNO && ir_vdw_switched(ir));
-
- bMakeSeparate14Table = ((!bMakeTables || fr->eeltype != eelCUT || fr->vdwtype != evdwCUT ||
- fr->coulomb_modifier != eintmodNONE ||
- fr->vdw_modifier != eintmodNONE ||
- fr->bBHAM || fr->bEwald) &&
- (gmx_mtop_ftype_count(mtop, F_LJ14) > 0 ||
- gmx_mtop_ftype_count(mtop, F_LJC14_Q) > 0 ||
- gmx_mtop_ftype_count(mtop, F_LJC_PAIRS_NB) > 0));
+ /* Construct tables for the group scheme. A little unnecessary to
+ * make both vdw and coul tables sometimes, but what the
+ * heck. Note that both cutoff schemes construct Ewald tables in
+ * init_interaction_const_tables. */
+ needGroupSchemeTables = (ir->cutoff_scheme == ecutsGROUP &&
+ (fr->bcoultab || fr->bvdwtab));
negp_pp = ir->opts.ngener - ir->nwall;
negptable = 0;
- if (!bMakeTables)
+ if (!needGroupSchemeTables)
{
bSomeNormalNbListsAreInUse = TRUE;
fr->nnblists = 1;
}
else
{
- bSomeNormalNbListsAreInUse = (ir->eDispCorr != edispcNO);
+ bSomeNormalNbListsAreInUse = FALSE;
for (egi = 0; egi < negp_pp; egi++)
{
for (egj = egi; egj < negp_pp; egj++)
}
}
- if (ir->adress)
- {
- fr->nnblists *= 2;
- }
-
snew(fr->nblists, fr->nnblists);
/* This code automatically gives table length tabext without cut-off's,
* in that case grompp should already have checked that we do not need
* normal tables and we only generate tables for 1-4 interactions.
*/
- rtab = ir->rlistlong + ir->tabext;
+ rtab = ir->rlist + ir->tabext;
- if (bMakeTables)
+ if (needGroupSchemeTables)
{
/* make tables for ordinary interactions */
if (bSomeNormalNbListsAreInUse)
{
- make_nbf_tables(fp, oenv, fr, rtab, cr, tabfn, NULL, NULL, &fr->nblists[0]);
- if (ir->adress)
- {
- make_nbf_tables(fp, oenv, fr, rtab, cr, tabfn, NULL, NULL, &fr->nblists[fr->nnblists/2]);
- }
- if (!bMakeSeparate14Table)
- {
- fr->tab14 = fr->nblists[0].table_elec_vdw;
- }
+ make_nbf_tables(fp, fr, rtab, tabfn, NULL, NULL, &fr->nblists[0]);
m = 1;
}
else
fr->gid2nblists[GID(egi, egj, ir->opts.ngener)] = m;
}
/* Read the table file with the two energy groups names appended */
- make_nbf_tables(fp, oenv, fr, rtab, cr, tabfn,
+ make_nbf_tables(fp, fr, rtab, tabfn,
*mtop->groups.grpname[nm_ind[egi]],
*mtop->groups.grpname[nm_ind[egj]],
&fr->nblists[m]);
- if (ir->adress)
- {
- make_nbf_tables(fp, oenv, fr, rtab, cr, tabfn,
- *mtop->groups.grpname[nm_ind[egi]],
- *mtop->groups.grpname[nm_ind[egj]],
- &fr->nblists[fr->nnblists/2+m]);
- }
m++;
}
else if (fr->nnblists > 1)
}
}
}
- else if ((fr->eDispCorr != edispcNO) &&
- ((fr->vdw_modifier == eintmodPOTSWITCH) ||
- (fr->vdw_modifier == eintmodFORCESWITCH) ||
- (fr->vdw_modifier == eintmodPOTSHIFT)))
- {
- /* Tables might not be used for the potential modifier interactions per se, but
- * we still need them to evaluate switch/shift dispersion corrections in this case.
- */
- make_nbf_tables(fp, oenv, fr, rtab, cr, tabfn, NULL, NULL, &fr->nblists[0]);
- }
- if (bMakeSeparate14Table)
+ /* Tables might not be used for the potential modifier
+ * interactions per se, but we still need them to evaluate
+ * switch/shift dispersion corrections in this case. */
+ if (fr->eDispCorr != edispcNO)
{
- /* generate extra tables with plain Coulomb for 1-4 interactions only */
- fr->tab14 = make_tables(fp, oenv, fr, MASTER(cr), tabpfn, rtab,
- GMX_MAKETABLES_14ONLY);
+ fr->dispersionCorrectionTable = makeDispersionCorrectionTable(fp, fr, rtab, tabfn);
}
- /* Read AdResS Thermo Force table if needed */
- if (fr->adress_icor == eAdressICThermoForce)
+ /* We want to use unmodified tables for 1-4 coulombic
+ * interactions, so we must in general have an extra set of
+ * tables. */
+ if (gmx_mtop_ftype_count(mtop, F_LJ14) > 0 ||
+ gmx_mtop_ftype_count(mtop, F_LJC14_Q) > 0 ||
+ gmx_mtop_ftype_count(mtop, F_LJC_PAIRS_NB) > 0)
{
- /* old todo replace */
-
- if (ir->adress->n_tf_grps > 0)
- {
- make_adress_tf_tables(fp, oenv, fr, ir, tabfn, mtop, box);
-
- }
- else
- {
- /* load the default table */
- snew(fr->atf_tabs, 1);
- fr->atf_tabs[DEFAULT_TF_TABLE] = make_atf_table(fp, oenv, fr, tabafn, box);
- }
+ fr->pairsTable = make_tables(fp, fr, tabpfn, rtab,
+ GMX_MAKETABLES_14ONLY);
}
/* Wall stuff */
fr->nwall = ir->nwall;
if (ir->nwall && ir->wall_type == ewtTABLE)
{
- make_wall_tables(fp, oenv, ir, tabfn, &mtop->groups, fr);
+ make_wall_tables(fp, ir, tabfn, &mtop->groups, fr);
}
if (fcd && tabbfnm)
fr->timesteps = 0;
/* Initialize neighbor search */
- init_ns(fp, cr, &fr->ns, fr, mtop);
+ snew(fr->ns, 1);
+ init_ns(fp, cr, fr->ns, fr, mtop);
if (cr->duty & DUTY_PP)
{
gmx_nonbonded_setup(fr, bGenericKernelOnly);
- /*
- if (ir->bAdress)
- {
- gmx_setup_adress_kernels(fp,bGenericKernelOnly);
- }
- */
}
/* Initialize the thread working data for bonded interactions */
- init_bonded_threading(fp, fr, mtop->groups.grps[egcENER].nr);
+ init_bonded_threading(fp, mtop->groups.grps[egcENER].nr,
+ &fr->bonded_threading);
+
+ fr->nthread_ewc = gmx_omp_nthreads_get(emntBonded);
+ snew(fr->ewc_t, fr->nthread_ewc);
/* fr->ic is used both by verlet and group kernels (to some extent) now */
init_interaction_const(fp, &fr->ic, fr);
if (fr->cutoff_scheme == ecutsVERLET)
{
- if (ir->rcoulomb != ir->rvdw)
+ // We checked the cut-offs in grompp, but double-check here.
+ // We have PME+LJcutoff kernels for rcoulomb>rvdw.
+ if (EEL_PME_EWALD(ir->coulombtype) && ir->vdwtype == evdwCUT)
{
- gmx_fatal(FARGS, "With Verlet lists rcoulomb and rvdw should be identical");
+ GMX_RELEASE_ASSERT(ir->rcoulomb >= ir->rvdw, "With Verlet lists and PME we should have rcoulomb>=rvdw");
+ }
+ else
+ {
+ GMX_RELEASE_ASSERT(ir->rcoulomb == ir->rvdw, "With Verlet lists and no PME rcoulomb and rvdw should be identical");
}
init_nb_verlet(fp, &fr->nbv, bFEP_NonBonded, ir, fr, cr, nbpu_opt);
#define pr_real(fp, r) fprintf(fp, "%s: %e\n",#r, r)
#define pr_int(fp, i) fprintf((fp), "%s: %d\n",#i, i)
-#define pr_bool(fp, b) fprintf((fp), "%s: %s\n",#b, bool_names[b])
+#define pr_bool(fp, b) fprintf((fp), "%s: %s\n",#b, gmx::boolToString(b))
void pr_forcerec(FILE *fp, t_forcerec *fr)
{
pr_real(fp, fr->rcoulomb);
pr_real(fp, fr->fudgeQQ);
pr_bool(fp, fr->bGrid);
- pr_bool(fp, fr->bTwinRange);
/*pr_int(fp,fr->cg0);
pr_int(fp,fr->hcg);*/
for (i = 0; i < fr->nnblists; i++)
{
- pr_int(fp, fr->nblists[i].table_elec_vdw.n);
+ pr_int(fp, fr->nblists[i].table_elec_vdw->n);
}
pr_real(fp, fr->rcoulomb_switch);
pr_real(fp, fr->rcoulomb);
{
/* free nbnxn data in GPU memory */
nbnxn_gpu_free(fr->nbv->gpu_nbv);
+ /* stop the GPU profiler (only CUDA) */
+ stopGpuProfiler();
/* With tMPI we need to wait for all ranks to finish deallocation before
- * destroying the context in free_gpu() as some ranks may be sharing
+ * destroying the CUDA context in free_gpu() as some tMPI ranks may be sharing
* GPU and context.
+ *
+ * This is not a concern in OpenCL where we use one context per rank which
+ * is freed in nbnxn_gpu_free().
+ *
* Note: as only PP ranks need to free GPU resources, it is safe
* not to call the barrier on PME ranks.
*/
-#ifdef GMX_THREAD_MPI
+#if GMX_THREAD_MPI
if (PAR(cr))
{
gmx_barrier(cr);
*/
#include "gmxpre.h"
+#include "md.h"
+
#include "config.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
+#include <algorithm>
+
#include "thread_mpi/threads.h"
+#include "gromacs/commandline/filenm.h"
#include "gromacs/domdec/domdec.h"
#include "gromacs/domdec/domdec_network.h"
-#include "gromacs/ewald/pme-load-balancing.h"
+#include "gromacs/domdec/domdec_struct.h"
#include "gromacs/ewald/pme.h"
-#include "gromacs/fileio/filenm.h"
-#include "gromacs/fileio/mdoutf.h"
-#include "gromacs/fileio/trajectory_writing.h"
-#include "gromacs/fileio/trx.h"
+#include "gromacs/ewald/pme-load-balancing.h"
#include "gromacs/fileio/trxio.h"
+#include "gromacs/gmxlib/md_logging.h"
+#include "gromacs/gmxlib/network.h"
+#include "gromacs/gmxlib/nrnb.h"
+#include "gromacs/gpu_utils/gpu_utils.h"
#include "gromacs/imd/imd.h"
-#include "gromacs/legacyheaders/constr.h"
-#include "gromacs/legacyheaders/ebin.h"
-#include "gromacs/legacyheaders/force.h"
-#include "gromacs/legacyheaders/md_logging.h"
-#include "gromacs/legacyheaders/md_support.h"
-#include "gromacs/legacyheaders/mdatoms.h"
-#include "gromacs/legacyheaders/mdebin.h"
-#include "gromacs/legacyheaders/mdrun.h"
-#include "gromacs/legacyheaders/network.h"
-#include "gromacs/legacyheaders/nrnb.h"
-#include "gromacs/legacyheaders/ns.h"
-#include "gromacs/legacyheaders/shellfc.h"
-#include "gromacs/legacyheaders/sighandler.h"
-#include "gromacs/legacyheaders/sim_util.h"
-#include "gromacs/legacyheaders/tgroup.h"
-#include "gromacs/legacyheaders/typedefs.h"
-#include "gromacs/legacyheaders/update.h"
-#include "gromacs/legacyheaders/vcm.h"
-#include "gromacs/legacyheaders/vsite.h"
-#include "gromacs/legacyheaders/types/commrec.h"
-#include "gromacs/legacyheaders/types/constr.h"
-#include "gromacs/legacyheaders/types/enums.h"
-#include "gromacs/legacyheaders/types/fcdata.h"
-#include "gromacs/legacyheaders/types/force_flags.h"
-#include "gromacs/legacyheaders/types/forcerec.h"
-#include "gromacs/legacyheaders/types/group.h"
-#include "gromacs/legacyheaders/types/inputrec.h"
-#include "gromacs/legacyheaders/types/interaction_const.h"
-#include "gromacs/legacyheaders/types/mdatom.h"
-#include "gromacs/legacyheaders/types/membedt.h"
-#include "gromacs/legacyheaders/types/nrnb.h"
-#include "gromacs/legacyheaders/types/oenv.h"
-#include "gromacs/legacyheaders/types/shellfc.h"
-#include "gromacs/legacyheaders/types/state.h"
#include "gromacs/listed-forces/manage-threading.h"
+#include "gromacs/math/functions.h"
#include "gromacs/math/utilities.h"
#include "gromacs/math/vec.h"
#include "gromacs/math/vectypes.h"
#include "gromacs/mdlib/compute_io.h"
-#include "gromacs/mdlib/mdrun_signalling.h"
+#include "gromacs/mdlib/constr.h"
+#include "gromacs/mdlib/ebin.h"
+#include "gromacs/mdlib/force.h"
+#include "gromacs/mdlib/force_flags.h"
+#include "gromacs/mdlib/forcerec.h"
+#include "gromacs/mdlib/md_support.h"
+#include "gromacs/mdlib/mdatoms.h"
+#include "gromacs/mdlib/mdebin.h"
+#include "gromacs/mdlib/mdoutf.h"
+#include "gromacs/mdlib/mdrun.h"
#include "gromacs/mdlib/nb_verlet.h"
#include "gromacs/mdlib/nbnxn_gpu_data_mgmt.h"
+#include "gromacs/mdlib/ns.h"
+#include "gromacs/mdlib/shellfc.h"
+#include "gromacs/mdlib/sighandler.h"
+#include "gromacs/mdlib/sim_util.h"
+#include "gromacs/mdlib/simulationsignal.h"
+#include "gromacs/mdlib/tgroup.h"
+#include "gromacs/mdlib/trajectory_writing.h"
+#include "gromacs/mdlib/update.h"
+#include "gromacs/mdlib/vcm.h"
+#include "gromacs/mdlib/vsite.h"
+#include "gromacs/mdtypes/commrec.h"
+#include "gromacs/mdtypes/df_history.h"
+#include "gromacs/mdtypes/energyhistory.h"
+#include "gromacs/mdtypes/fcdata.h"
+#include "gromacs/mdtypes/forcerec.h"
+#include "gromacs/mdtypes/group.h"
+#include "gromacs/mdtypes/inputrec.h"
+#include "gromacs/mdtypes/interaction_const.h"
+#include "gromacs/mdtypes/md_enums.h"
+#include "gromacs/mdtypes/mdatom.h"
+#include "gromacs/mdtypes/state.h"
#include "gromacs/pbcutil/mshift.h"
#include "gromacs/pbcutil/pbc.h"
#include "gromacs/pulling/pull.h"
#include "gromacs/topology/idef.h"
#include "gromacs/topology/mtop_util.h"
#include "gromacs/topology/topology.h"
+#include "gromacs/trajectory/trajectoryframe.h"
#include "gromacs/utility/basedefinitions.h"
#include "gromacs/utility/cstringutil.h"
#include "gromacs/utility/fatalerror.h"
#include "corewrap.h"
#endif
+using gmx::SimulationSignaller;
+
+/*! \brief Check whether bonded interactions are missing, if appropriate
+ *
+ * \param[in] fplog Log file pointer
+ * \param[in] cr Communication object
+ * \param[in] totalNumberOfBondedInteractions Result of the global reduction over the number of bonds treated in each domain
+ * \param[in] top_global Global topology for the error message
+ * \param[in] top_local Local topology for the error message
+ * \param[in] state Global state for the error message
+ * \param[in,out] shouldCheckNumberOfBondedInteractions Whether we should do the check.
+ *
+ * \return Nothing, except that shouldCheckNumberOfBondedInteractions
+ * is always set to false after exit.
+ */
+static void checkNumberOfBondedInteractions(FILE *fplog, t_commrec *cr, int totalNumberOfBondedInteractions,
+ gmx_mtop_t *top_global, gmx_localtop_t *top_local, t_state *state,
+ bool *shouldCheckNumberOfBondedInteractions)
+{
+ if (*shouldCheckNumberOfBondedInteractions)
+ {
+ if (totalNumberOfBondedInteractions != cr->dd->nbonded_global)
+ {
+ dd_print_missing_interactions(fplog, cr, totalNumberOfBondedInteractions, top_global, top_local, state); // Does not return
+ }
+ *shouldCheckNumberOfBondedInteractions = false;
+ }
+}
+
static void reset_all_counters(FILE *fplog, t_commrec *cr,
gmx_int64_t step,
gmx_int64_t *step_rel, t_inputrec *ir,
if (use_GPU(nbv))
{
nbnxn_gpu_reset_timings(nbv);
+ resetGpuProfiler();
}
wallcycle_stop(wcycle, ewcRUN);
print_date_and_time(fplog, cr->nodeid, "Restarted time", gmx_gettime());
}
-double do_md(FILE *fplog, t_commrec *cr, int nfile, const t_filenm fnm[],
- const output_env_t oenv, gmx_bool bVerbose, gmx_bool bCompact,
- int nstglobalcomm,
- gmx_vsite_t *vsite, gmx_constr_t constr,
- int stepout, t_inputrec *ir,
- gmx_mtop_t *top_global,
- t_fcdata *fcd,
- t_state *state_global,
- t_mdatoms *mdatoms,
- t_nrnb *nrnb, gmx_wallcycle_t wcycle,
- gmx_edsam_t ed, t_forcerec *fr,
- int repl_ex_nst, int repl_ex_nex, int repl_ex_seed, gmx_membed_t membed,
- real cpt_period, real max_hours,
- int imdport,
- unsigned long Flags,
- gmx_walltime_accounting_t walltime_accounting)
+/*! \libinternal
+ \copydoc integrator_t (FILE *fplog, t_commrec *cr,
+ int nfile, const t_filenm fnm[],
+ const gmx_output_env_t *oenv, gmx_bool bVerbose,
+ int nstglobalcomm,
+ gmx_vsite_t *vsite, gmx_constr_t constr,
+ int stepout,
+ t_inputrec *inputrec,
+ gmx_mtop_t *top_global, t_fcdata *fcd,
+ t_state *state_global,
+ t_mdatoms *mdatoms,
+ t_nrnb *nrnb, gmx_wallcycle_t wcycle,
+ gmx_edsam_t ed,
+ t_forcerec *fr,
+ int repl_ex_nst, int repl_ex_nex, int repl_ex_seed,
+ real cpt_period, real max_hours,
+ int imdport,
+ unsigned long Flags,
+ gmx_walltime_accounting_t walltime_accounting)
+ */
+double gmx::do_md(FILE *fplog, t_commrec *cr, int nfile, const t_filenm fnm[],
+ const gmx_output_env_t *oenv, gmx_bool bVerbose,
+ int nstglobalcomm,
+ gmx_vsite_t *vsite, gmx_constr_t constr,
+ int stepout, t_inputrec *ir,
+ gmx_mtop_t *top_global,
+ t_fcdata *fcd,
+ t_state *state_global,
+ t_mdatoms *mdatoms,
+ t_nrnb *nrnb, gmx_wallcycle_t wcycle,
+ gmx_edsam_t ed, t_forcerec *fr,
+ int repl_ex_nst, int repl_ex_nex, int repl_ex_seed,
+ gmx_membed_t *membed,
+ real cpt_period, real max_hours,
+ int imdport,
+ unsigned long Flags,
+ gmx_walltime_accounting_t walltime_accounting)
{
gmx_mdoutf_t outf = NULL;
gmx_int64_t step, step_rel;
double elapsed_time;
double t, t0, lam0[efptNR];
gmx_bool bGStatEveryStep, bGStat, bCalcVir, bCalcEnerStep, bCalcEner;
- gmx_bool bNS, bNStList, bSimAnn, bStopCM, bRerunMD, bNotLastFrame = FALSE,
- bFirstStep, bStateFromCP, bStateFromTPX, bInitStep, bLastStep,
- bBornRadii, bStartingFromCpt;
+ gmx_bool bNS, bNStList, bSimAnn, bStopCM, bRerunMD,
+ bFirstStep, startingFromCheckpoint, bInitStep, bLastStep = FALSE,
+ bBornRadii, bUsingEnsembleRestraints;
gmx_bool bDoDHDL = FALSE, bDoFEP = FALSE, bDoExpanded = FALSE;
gmx_bool do_ene, do_log, do_verbose, bRerunWarnNoV = TRUE,
bForceUpdate = FALSE, bCPT;
gmx_localtop_t *top;
t_mdebin *mdebin = NULL;
t_state *state = NULL;
- rvec *f_global = NULL;
gmx_enerdata_t *enerd;
rvec *f = NULL;
gmx_global_stat_t gstat;
- gmx_update_t upd = NULL;
+ gmx_update_t *upd = NULL;
t_graph *graph = NULL;
- gmx_signalling_t gs;
gmx_groups_t *groups;
gmx_ekindata_t *ekind;
- gmx_shellfc_t shellfc;
- int count, nconverged = 0;
- double tcount = 0;
- gmx_bool bConverged = TRUE, bSumEkinhOld, bDoReplEx, bExchanged, bNeedRepartition;
+ gmx_shellfc_t *shellfc;
+ gmx_bool bSumEkinhOld, bDoReplEx, bExchanged, bNeedRepartition;
gmx_bool bResetCountersHalfMaxH = FALSE;
- gmx_bool bVV, bTemp, bPres, bTrotter;
- gmx_bool bUpdateDoLR;
+ gmx_bool bTemp, bPres, bTrotter;
real dvdl_constr;
rvec *cbuf = NULL;
int cbuf_nalloc = 0;
int **trotter_seq;
char sbuf[STEPSTRSIZE], sbuf2[STEPSTRSIZE];
int handled_stop_condition = gmx_stop_cond_none; /* compare to get_stop_condition*/
- gmx_int64_t multisim_nsteps = -1; /* number of steps to do before first multisim
- simulation stops. If equal to zero, don't
- communicate any more between multisims.*/
+
+
/* PME load balancing data for GPU kernels */
pme_load_balancing_t *pme_loadbal = NULL;
gmx_bool bPMETune = FALSE;
/* Temporary addition for FAHCORE checkpointing */
int chkpt_ret;
#endif
+ /* Domain decomposition could incorrectly miss a bonded
+ interaction, but checking for that requires a global
+ communication stage, which does not otherwise happen in DD
+ code. So we do that alongside the first global energy reduction
+ after a new DD is made. These variables handle whether the
+ check happens, and the result it returns. */
+ bool shouldCheckNumberOfBondedInteractions = false;
+ int totalNumberOfBondedInteractions = -1;
+
+ SimulationSignals signals;
+ // Most global communication stages don't propagate mdrun
+ // signals, and will use this object to achieve that.
+ SimulationSignaller nullSignaller(nullptr, nullptr, false, false);
/* Check for special mdrun options */
bRerunMD = (Flags & MD_RERUN);
/* md-vv uses averaged full step velocities for T-control
md-vv-avek uses averaged half step velocities for T-control (but full step ekin for P control)
md uses averaged half step kinetic energies to determine temperature unless defined otherwise by GMX_EKIN_AVE_VEL; */
- bVV = EI_VV(ir->eI);
- bTrotter = (bVV && (IR_NPT_TROTTER(ir) || IR_NPH_TROTTER(ir) || IR_NVT_TROTTER(ir)));
+ bTrotter = (EI_VV(ir->eI) && (inputrecNptTrotter(ir) || inputrecNphTrotter(ir) || inputrecNvtTrotter(ir)));
if (bRerunMD)
{
nstglobalcomm = 1;
}
- check_ir_old_tpx_versions(cr, fplog, ir, top_global);
-
nstglobalcomm = check_nstglobalcomm(fplog, cr, nstglobalcomm, ir);
bGStatEveryStep = (nstglobalcomm == 1);
}
groups = &top_global->groups;
+ if (ir->eSwapCoords != eswapNO)
+ {
+ /* Initialize ion swapping code */
+ init_swapcoords(fplog, bVerbose, ir, opt2fn_master("-swap", nfile, fnm, cr),
+ top_global, state_global->x, state_global->box, &state_global->swapstate, cr, oenv, Flags);
+ }
+
/* Initial values */
init_md(fplog, cr, ir, oenv, &t, &t0, state_global->lambda,
&(state_global->fep_state), lam0,
ekind->cosacc.cos_accel = ir->cos_accel;
gstat = global_stat_init(ir);
- debug_gmx();
/* Check for polarizable models and flexible constraints */
shellfc = init_shell_flexcon(fplog,
top_global, n_flexible_constraints(constr),
- (ir->bContinuation ||
- (DOMAINDECOMP(cr) && !MASTER(cr))) ?
- NULL : state_global->x);
+ ir->nstcalcenergy, DOMAINDECOMP(cr));
+
if (shellfc && ir->nstcalcenergy != 1)
{
gmx_fatal(FARGS, "You have nstcalcenergy set to a value (%d) that is different from 1.\nThis is not supported in combinations with shell particles.\nPlease make a new tpr file.", ir->nstcalcenergy);
{
gmx_fatal(FARGS, "Shell particles are not implemented with domain decomposition, use a single rank");
}
- if (shellfc && ir->eI == eiNM)
- {
- /* Currently shells don't work with Normal Modes */
- gmx_fatal(FARGS, "Normal Mode analysis is not supported with shells.\nIf you'd like to help with adding support, we have an open discussion at http://redmine.gromacs.org/issues/879\n");
- }
-
- if (vsite && ir->eI == eiNM)
- {
- /* Currently virtual sites don't work with Normal Modes */
- gmx_fatal(FARGS, "Normal Mode analysis is not supported with virtual sites.\nIf you'd like to help with adding support, we have an open discussion at http://redmine.gromacs.org/issues/879\n");
- }
- if (DEFORM(*ir))
+ if (inputrecDeform(ir))
{
tMPI_Thread_mutex_lock(&deform_init_box_mutex);
set_deform_reference_box(upd,
snew(state, 1);
dd_init_local_state(cr->dd, state_global, state);
-
- if (DDMASTER(cr->dd) && ir->nstfout)
- {
- snew(f_global, state_global->natoms);
- }
}
else
{
- top = gmx_mtop_generate_local_top(top_global, ir);
+ top = gmx_mtop_generate_local_top(top_global, ir->efep != efepNO);
state = serial_init_local_state(state_global);
- f_global = f;
atoms2md(top_global, ir, 0, NULL, top_global->natoms, mdatoms);
}
setup_bonded_threading(fr, &top->idef);
+
+ update_realloc(upd, state->nalloc);
}
/* Set up interactive MD (IMD) */
dd_partition_system(fplog, ir->init_step, cr, TRUE, 1,
state_global, top_global, ir,
state, &f, mdatoms, top, fr,
- vsite, shellfc, constr,
+ vsite, constr,
nrnb, NULL, FALSE);
-
+ shouldCheckNumberOfBondedInteractions = true;
+ update_realloc(upd, state->nalloc);
}
update_mdatoms(mdatoms, state->lambda[efptMASS]);
- if (opt2bSet("-cpi", nfile, fnm))
- {
- bStateFromCP = gmx_fexist_master(opt2fn_master("-cpi", nfile, fnm, cr), cr);
- }
- else
- {
- bStateFromCP = FALSE;
- }
+ startingFromCheckpoint = Flags & MD_STARTFROMCPT;
if (ir->bExpanded)
{
- init_expanded_ensemble(bStateFromCP, ir, &state->dfhist);
+ init_expanded_ensemble(startingFromCheckpoint, ir, &state->dfhist);
}
if (MASTER(cr))
{
- if (bStateFromCP)
+ if (startingFromCheckpoint)
{
/* Update mdebin with energy history if appending to output files */
if (Flags & MD_APPENDFILES)
{
- restore_energyhistory_from_state(mdebin, &state_global->enerhist);
+ restore_energyhistory_from_state(mdebin, state_global->enerhist);
}
else
{
/* We might have read an energy history from checkpoint,
* free the allocated memory and reset the counts.
*/
- done_energyhistory(&state_global->enerhist);
- init_energyhistory(&state_global->enerhist);
+ done_energyhistory(state_global->enerhist);
+ init_energyhistory(state_global->enerhist);
}
}
/* Set the initial energy history in state by updating once */
- update_energyhistory(&state_global->enerhist, mdebin);
+ update_energyhistory(state_global->enerhist, mdebin);
}
/* Initialize constraints */
repl_ex_nst, repl_ex_nex, repl_ex_seed);
}
- /* PME tuning is only supported with PME for Coulomb. Is is not supported
- * with only LJ PME, or for reruns.
- */
+ /* PME tuning is only supported in the Verlet scheme, with PME for
+ * Coulomb. It is not supported with only LJ PME, or for
+ * reruns. */
bPMETune = ((Flags & MD_TUNEPME) && EEL_PME(fr->eeltype) && !bRerunMD &&
- !(Flags & MD_REPRODUCIBLE));
+ !(Flags & MD_REPRODUCIBLE) && ir->cutoff_scheme != ecutsGROUP);
if (bPMETune)
{
pme_loadbal_init(&pme_loadbal, cr, fplog, ir, state->box,
}
}
- debug_gmx();
-
- if (IR_TWINRANGE(*ir) && repl_ex_nst % ir->nstcalclr != 0)
- {
- /* We should exchange at nstcalclr steps to get correct integration */
- gmx_fatal(FARGS, "The replica exchange period (%d) is not divisible by nstcalclr (%d)", repl_ex_nst, ir->nstcalclr);
- }
-
if (ir->efep != efepNO)
{
/* Set free energy calculation frequency as the greatest common
- * denominator of nstdhdl and repl_ex_nst.
- * Check for nstcalclr with twin-range, since we need the long-range
- * contribution to the free-energy at the correct (nstcalclr) steps.
- */
+ * denominator of nstdhdl and repl_ex_nst. */
nstfep = ir->fepvals->nstdhdl;
if (ir->bExpanded)
{
- if (IR_TWINRANGE(*ir) &&
- ir->expandedvals->nstexpanded % ir->nstcalclr != 0)
- {
- gmx_fatal(FARGS, "nstexpanded should be divisible by nstcalclr");
- }
nstfep = gmx_greatest_common_divisor(ir->expandedvals->nstexpanded, nstfep);
}
if (repl_ex_nst > 0)
{
nstfep = gmx_greatest_common_divisor(repl_ex_nst, nstfep);
}
- /* We checked divisibility of repl_ex_nst and nstcalclr above */
- if (IR_TWINRANGE(*ir) && nstfep % ir->nstcalclr != 0)
- {
- gmx_incons("nstfep not divisible by nstcalclr");
- }
}
/* Be REALLY careful about what flags you set here. You CANNOT assume
*/
bStopCM = (ir->comm_mode != ecmNO && !ir->bContinuation);
+ if (Flags & MD_READ_EKIN)
+ {
+ restore_ekinstate_from_state(cr, ekind, &state_global->ekinstate);
+ }
+
cglo_flags = (CGLO_TEMPERATURE | CGLO_GSTAT
| (bStopCM ? CGLO_STOPCM : 0)
- | (bVV ? CGLO_PRESSURE : 0)
- | (bVV ? CGLO_CONSTRAINT : 0)
- | (bRerunMD ? CGLO_RERUNMD : 0)
+ | (EI_VV(ir->eI) ? CGLO_PRESSURE : 0)
+ | (EI_VV(ir->eI) ? CGLO_CONSTRAINT : 0)
| ((Flags & MD_READ_EKIN) ? CGLO_READEKIN : 0));
bSumEkinhOld = FALSE;
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
NULL, enerd, force_vir, shake_vir, total_vir, pres, mu_tot,
- constr, NULL, FALSE, state->box,
- top_global, &bSumEkinhOld, cglo_flags);
+ constr, &nullSignaller, state->box,
+ &totalNumberOfBondedInteractions, &bSumEkinhOld, cglo_flags
+ | (shouldCheckNumberOfBondedInteractions ? CGLO_CHECK_NUMBER_OF_BONDED_INTERACTIONS : 0));
+ checkNumberOfBondedInteractions(fplog, cr, totalNumberOfBondedInteractions,
+ top_global, top, state,
+ &shouldCheckNumberOfBondedInteractions);
if (ir->eI == eiVVAK)
{
/* a second call to get the half step temperature initialized as well */
kinetic energy calculation. This minimizes excess variables, but
perhaps loses some logic?*/
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
NULL, enerd, force_vir, shake_vir, total_vir, pres, mu_tot,
- constr, NULL, FALSE, state->box,
- top_global, &bSumEkinhOld,
+ constr, &nullSignaller, state->box,
+ NULL, &bSumEkinhOld,
cglo_flags &~(CGLO_STOPCM | CGLO_PRESSURE));
}
copy_mat(ekind->tcstat[i].ekinh, ekind->tcstat[i].ekinh_old);
}
}
- if (ir->eI != eiVV)
- {
- enerd->term[F_TEMP] *= 2; /* result of averages being done over previous and current step,
- and there is no previous step */
- }
/* need to make an initiation call to get the Trotter variables set, as well as other constants for non-trotter
temperature control */
if (MASTER(cr))
{
- if (constr && !ir->bContinuation && ir->eConstrAlg == econtLINCS)
- {
- fprintf(fplog,
- "RMS relative constraint deviation after constraining: %.2e\n",
- constr_rmsd(constr, FALSE));
- }
- if (EI_STATE_VELOCITY(ir->eI))
+ if (!ir->bContinuation)
{
- fprintf(fplog, "Initial temperature: %g K\n", enerd->term[F_TEMP]);
+ if (constr && ir->eConstrAlg == econtLINCS)
+ {
+ fprintf(fplog,
+ "RMS relative constraint deviation after constraining: %.2e\n",
+ constr_rmsd(constr));
+ }
+ if (EI_STATE_VELOCITY(ir->eI))
+ {
+ real temp = enerd->term[F_TEMP];
+ if (ir->eI != eiVV)
+ {
+ /* Result of Ekin averaged over velocities of -half
+ * and +half step, while we only have -half step here.
+ */
+ temp *= 2;
+ }
+ fprintf(fplog, "Initial temperature: %g K\n", temp);
+ }
}
+
if (bRerunMD)
{
fprintf(stderr, "starting md rerun '%s', reading coordinates from"
}
#endif
- debug_gmx();
/***********************************************************
*
* Loop over MD steps
rerun_fr.natoms = 0;
if (MASTER(cr))
{
- bNotLastFrame = read_first_frame(oenv, &status,
- opt2fn("-rerun", nfile, fnm),
- &rerun_fr, TRX_NEED_X | TRX_READ_V);
+ bLastStep = !read_first_frame(oenv, &status,
+ opt2fn("-rerun", nfile, fnm),
+ &rerun_fr, TRX_NEED_X | TRX_READ_V);
if (rerun_fr.natoms != top_global->natoms)
{
gmx_fatal(FARGS,
{
gmx_fatal(FARGS, "Rerun trajectory frame step %d time %f does not contain a box, while pbc is used", rerun_fr.step, rerun_fr.time);
}
- if (max_cutoff2(ir->ePBC, rerun_fr.box) < sqr(fr->rlistlong))
+ if (max_cutoff2(ir->ePBC, rerun_fr.box) < gmx::square(fr->rlist))
{
gmx_fatal(FARGS, "Rerun trajectory frame step %d time %f has too small box dimensions", rerun_fr.step, rerun_fr.time);
}
if (PAR(cr))
{
- rerun_parallel_comm(cr, &rerun_fr, &bNotLastFrame);
+ rerun_parallel_comm(cr, &rerun_fr, &bLastStep);
}
if (ir->ePBC != epbcNONE)
/* loop over MD steps or if rerunMD to end of input trajectory */
bFirstStep = TRUE;
/* Skip the first Nose-Hoover integration when we get the state from tpx */
- bStateFromTPX = !bStateFromCP;
- bInitStep = bFirstStep && (bStateFromTPX || bVV);
- bStartingFromCpt = (Flags & MD_STARTFROMCPT) && bInitStep;
+ bInitStep = !startingFromCheckpoint || EI_VV(ir->eI);
bSumEkinhOld = FALSE;
bExchanged = FALSE;
bNeedRepartition = FALSE;
+ // TODO This implementation of ensemble orientation restraints is nasty because
+ // a user can't just do multi-sim with single-sim orientation restraints.
+ bUsingEnsembleRestraints = (fcd->disres.nsystems > 1) || (cr->ms && fcd->orires.nr);
- init_global_signals(&gs, cr, ir, repl_ex_nst);
+ {
+ // Replica exchange and ensemble restraints need all
+ // simulations to remain synchronized, so they need
+ // checkpoints and stop conditions to act on the same step, so
+ // the propagation of such signals must take place between
+ // simulations, not just within simulations.
+ bool checkpointIsLocal = (repl_ex_nst <= 0) && !bUsingEnsembleRestraints;
+ bool stopConditionIsLocal = (repl_ex_nst <= 0) && !bUsingEnsembleRestraints;
+ bool resetCountersIsLocal = true;
+ signals[eglsCHKPT] = SimulationSignal(checkpointIsLocal);
+ signals[eglsSTOPCOND] = SimulationSignal(stopConditionIsLocal);
+ signals[eglsRESETCOUNTERS] = SimulationSignal(resetCountersIsLocal);
+ }
step = ir->init_step;
step_rel = 0;
- if (MULTISIM(cr) && (repl_ex_nst <= 0 ))
- {
- /* check how many steps are left in other sims */
- multisim_nsteps = get_multisim_nsteps(cr, ir->nsteps);
- }
- if (MULTISIM(cr) && max_hours > 0)
+ // TODO extract this to new multi-simulation module
+ if (MASTER(cr) && MULTISIM(cr) && (repl_ex_nst <= 0 ))
{
- gmx_fatal(FARGS, "The combination of mdrun -maxh and mdrun -multi is not supported. Please use the nsteps .mdp field.");
+ if (!multisim_int_all_are_equal(cr->ms, ir->nsteps))
+ {
+ md_print_info(cr, fplog,
+ "Note: The number of steps is not consistent across multi simulations,\n"
+ "but we are proceeding anyway!\n");
+ }
+ if (!multisim_int_all_are_equal(cr->ms, ir->init_step))
+ {
+ md_print_info(cr, fplog,
+ "Note: The initial step is not consistent across multi simulations,\n"
+ "but we are proceeding anyway!\n");
+ }
}
/* and stop now if we should */
- bLastStep = (bRerunMD || (ir->nsteps >= 0 && step_rel > ir->nsteps) ||
- ((multisim_nsteps >= 0) && (step_rel >= multisim_nsteps )));
- while (!bLastStep || (bRerunMD && bNotLastFrame))
+ bLastStep = (bLastStep || (ir->nsteps >= 0 && step_rel > ir->nsteps));
+ while (!bLastStep)
{
/* Determine if this is a neighbor search step */
pme_loadbal_do(pme_loadbal, cr,
(bVerbose && MASTER(cr)) ? stderr : NULL,
fplog,
- ir, fr, state, wcycle,
+ ir, fr, state,
+ wcycle,
step, step_rel,
&bPMETunePrinting);
}
t = t0 + step*ir->delta_t;
}
+ // TODO Refactor this, so that nstfep does not need a default value of zero
if (ir->efep != efepNO || ir->bSimTemp)
{
/* find and set the current lambdas. If rerunning, we either read in a state, or a lambda value,
bDoDHDL = do_per_step(step, ir->fepvals->nstdhdl);
bDoFEP = ((ir->efep != efepNO) && do_per_step(step, nstfep));
bDoExpanded = (do_per_step(step, ir->expandedvals->nstexpanded)
- && (ir->bExpanded) && (step > 0) && (!bStartingFromCpt));
+ && (ir->bExpanded) && (step > 0) && (!startingFromCheckpoint));
}
bDoReplEx = ((repl_ex_nst > 0) && (step > 0) && !bLastStep &&
if (bSimAnn)
{
- update_annealing_target_temp(&(ir->opts), t);
+ update_annealing_target_temp(ir, t, upd);
}
if (bRerunMD)
{
/* for rerun MD always do Neighbour Searching */
bNS = (bFirstStep || ir->nstlist != 0);
- bNStList = bNS;
}
else
{
- /* Determine whether or not to do Neighbour Searching and LR */
+ /* Determine whether or not to do Neighbour Searching */
bNS = (bFirstStep || bNStList || bExchanged || bNeedRepartition);
}
- /* check whether we should stop because another simulation has
- stopped. */
- if (MULTISIM(cr))
- {
- if ( (multisim_nsteps >= 0) && (step_rel >= multisim_nsteps) &&
- (multisim_nsteps != ir->nsteps) )
- {
- if (bNS)
- {
- if (MASTER(cr))
- {
- fprintf(stderr,
- "Stopping simulation %d because another one has finished\n",
- cr->ms->sim);
- }
- bLastStep = TRUE;
- gs.sig[eglsCHKPT] = 1;
- }
- }
- }
-
/* < 0 means stop at next step, > 0 means stop at next NS step */
- if ( (gs.set[eglsSTOPCOND] < 0) ||
- ( (gs.set[eglsSTOPCOND] > 0) && (bNStList || ir->nstlist == 0) ) )
+ if ( (signals[eglsSTOPCOND].set < 0) ||
+ ( (signals[eglsSTOPCOND].set > 0) && (bNS || ir->nstlist == 0)))
{
bLastStep = TRUE;
}
* Note that the || bLastStep can result in non-exact continuation
* beyond the last step. But we don't consider that to be an issue.
*/
- do_log = do_per_step(step, ir->nstlog) || (bFirstStep && !bStateFromCP) || bLastStep;
+ do_log = do_per_step(step, ir->nstlog) || (bFirstStep && !startingFromCheckpoint) || bLastStep || bRerunMD;
do_verbose = bVerbose &&
- (step % stepout == 0 || bFirstStep || bLastStep);
+ (step % stepout == 0 || bFirstStep || bLastStep || bRerunMD);
if (bNS && !(bFirstStep && ir->bContinuation && !bRerunMD))
{
{
bMasterState = FALSE;
/* Correct the new box if it is too skewed */
- if (DYNAMIC_BOX(*ir))
+ if (inputrecDynamicBox(ir))
{
if (correct_box(fplog, step, state->box, graph))
{
bMasterState, nstglobalcomm,
state_global, top_global, ir,
state, &f, mdatoms, top, fr,
- vsite, shellfc, constr,
+ vsite, constr,
nrnb, wcycle,
do_verbose && !bPMETunePrinting);
+ shouldCheckNumberOfBondedInteractions = true;
+ update_realloc(upd, state->nalloc);
}
}
if (MASTER(cr) && do_log)
{
- print_ebin_header(fplog, step, t, state->lambda[efptFEP]); /* can we improve the information printed here? */
+ print_ebin_header(fplog, step, t); /* can we improve the information printed here? */
}
if (ir->efep != efepNO)
/* We need the kinetic energy at minus the half step for determining
* the full step kinetic energy and possibly for T-coupling.*/
/* This may not be quite working correctly yet . . . . */
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
wcycle, enerd, NULL, NULL, NULL, NULL, mu_tot,
- constr, NULL, FALSE, state->box,
- top_global, &bSumEkinhOld,
- CGLO_RERUNMD | CGLO_GSTAT | CGLO_TEMPERATURE);
+ constr, &nullSignaller, state->box,
+ &totalNumberOfBondedInteractions, &bSumEkinhOld,
+ CGLO_GSTAT | CGLO_TEMPERATURE | CGLO_CHECK_NUMBER_OF_BONDED_INTERACTIONS);
+ checkNumberOfBondedInteractions(fplog, cr, totalNumberOfBondedInteractions,
+ top_global, top, state,
+ &shouldCheckNumberOfBondedInteractions);
}
clear_mat(force_vir);
* or at the last step (but not when we do not want confout),
* but never at the first step or with rerun.
*/
- bCPT = (((gs.set[eglsCHKPT] && (bNS || ir->nstlist == 0)) ||
+ bCPT = (((signals[eglsCHKPT].set && (bNS || ir->nstlist == 0)) ||
(bLastStep && (Flags & MD_CONFOUT))) &&
step > ir->init_step && !bRerunMD);
if (bCPT)
{
- gs.set[eglsCHKPT] = 0;
+ signals[eglsCHKPT].set = 0;
}
/* Determine the energy and pressure:
}
bCalcEner = bCalcEnerStep;
- do_ene = (do_per_step(step, ir->nstenergy) || bLastStep);
+ do_ene = (do_per_step(step, ir->nstenergy) || bLastStep || bRerunMD);
if (do_ene || do_log || bDoReplEx)
{
/* Do we need global communication ? */
bGStat = (bCalcVir || bCalcEner || bStopCM ||
do_per_step(step, nstglobalcomm) ||
- (bVV && IR_NVT_TROTTER(ir) && do_per_step(step-1, nstglobalcomm)));
-
- /* these CGLO_ options remain the same throughout the iteration */
- cglo_flags = ((bRerunMD ? CGLO_RERUNMD : 0) |
- (bGStat ? CGLO_GSTAT : 0)
- );
+ (EI_VV(ir->eI) && inputrecNvtTrotter(ir) && do_per_step(step-1, nstglobalcomm)));
force_flags = (GMX_FORCE_STATECHANGED |
- ((DYNAMIC_BOX(*ir) || bRerunMD) ? GMX_FORCE_DYNAMICBOX : 0) |
+ ((inputrecDynamicBox(ir) || bRerunMD) ? GMX_FORCE_DYNAMICBOX : 0) |
GMX_FORCE_ALLFORCES |
- GMX_FORCE_SEPLRF |
(bCalcVir ? GMX_FORCE_VIRIAL : 0) |
(bCalcEner ? GMX_FORCE_ENERGY : 0) |
(bDoFEP ? GMX_FORCE_DHDL : 0)
);
- if (fr->bTwinRange)
- {
- if (do_per_step(step, ir->nstcalclr))
- {
- force_flags |= GMX_FORCE_DO_LR;
- }
- }
-
if (shellfc)
{
/* Now is the time to relax the shells */
- count = relax_shell_flexcon(fplog, cr, bVerbose, step,
- ir, bNS, force_flags,
- top,
- constr, enerd, fcd,
- state, f, force_vir, mdatoms,
- nrnb, wcycle, graph, groups,
- shellfc, fr, bBornRadii, t, mu_tot,
- &bConverged, vsite,
- mdoutf_get_fp_field(outf));
- tcount += count;
-
- if (bConverged)
- {
- nconverged++;
- }
+ relax_shell_flexcon(fplog, cr, bVerbose, step,
+ ir, bNS, force_flags, top,
+ constr, enerd, fcd,
+ state, f, force_vir, mdatoms,
+ nrnb, wcycle, graph, groups,
+ shellfc, fr, bBornRadii, t, mu_tot,
+ vsite, mdoutf_get_fp_field(outf));
}
else
{
(bNS ? GMX_FORCE_NS : 0) | force_flags);
}
- if (bVV && !bStartingFromCpt && !bRerunMD)
+ if (EI_VV(ir->eI) && !startingFromCheckpoint && !bRerunMD)
/* ############### START FIRST UPDATE HALF-STEP FOR VV METHODS############### */
{
rvec *vbuf = NULL;
trotter_update(ir, step, ekind, enerd, state, total_vir, mdatoms, &MassQ, trotter_seq, ettTSEQ1);
}
- /* If we are using twin-range interactions where the long-range component
- * is only evaluated every nstcalclr>1 steps, we should do a special update
- * step to combine the long-range forces on these steps.
- * For nstcalclr=1 this is not done, since the forces would have been added
- * directly to the short-range forces already.
- *
- * TODO Remove various aspects of VV+twin-range in master
- * branch, because VV integrators did not ever support
- * twin-range multiple time stepping with constraints.
- */
- bUpdateDoLR = (fr->bTwinRange && do_per_step(step, ir->nstcalclr));
-
- update_coords(fplog, step, ir, mdatoms, state, fr->bMolPBC,
- f, bUpdateDoLR, fr->f_twin, bCalcVir ? &fr->vir_twin_constr : NULL, fcd,
- ekind, M, upd, bInitStep, etrtVELOCITY1,
- cr, nrnb, constr, &top->idef);
+ update_coords(fplog, step, ir, mdatoms, state, f, fcd,
+ ekind, M, upd, etrtVELOCITY1,
+ cr, constr);
if (!bRerunMD || rerun_fr.bV || bForceUpdate) /* Why is rerun_fr.bV here? Unclear. */
{
cr, nrnb, wcycle, upd, constr,
TRUE, bCalcVir);
wallcycle_start(wcycle, ewcUPDATE);
- if (bCalcVir && bUpdateDoLR && ir->nstcalclr > 1)
- {
- /* Correct the virial for multiple time stepping */
- m_sub(shake_vir, fr->vir_twin_constr, shake_vir);
- }
}
else if (graph)
{
* Think about ways around this in the future?
* For now, keep this choice in comments.
*/
- /*bPres = (ir->eI==eiVV || IR_NPT_TROTTER(ir)); */
- /*bTemp = ((ir->eI==eiVV &&(!bInitStep)) || (ir->eI==eiVVAK && IR_NPT_TROTTER(ir)));*/
+ /*bPres = (ir->eI==eiVV || inputrecNptTrotter(ir)); */
+ /*bTemp = ((ir->eI==eiVV &&(!bInitStep)) || (ir->eI==eiVVAK && inputrecNptTrotter(ir)));*/
bPres = TRUE;
bTemp = ((ir->eI == eiVV && (!bInitStep)) || (ir->eI == eiVVAK));
if (bCalcEner && ir->eI == eiVVAK)
if (bGStat || do_per_step(step-1, nstglobalcomm))
{
wallcycle_stop(wcycle, ewcUPDATE);
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
wcycle, enerd, force_vir, shake_vir, total_vir, pres, mu_tot,
- constr, NULL, FALSE, state->box,
- top_global, &bSumEkinhOld,
- cglo_flags
+ constr, &nullSignaller, state->box,
+ &totalNumberOfBondedInteractions, &bSumEkinhOld,
+ (bGStat ? CGLO_GSTAT : 0)
| CGLO_ENERGY
| (bTemp ? CGLO_TEMPERATURE : 0)
| (bPres ? CGLO_PRESSURE : 0)
| (bPres ? CGLO_CONSTRAINT : 0)
| (bStopCM ? CGLO_STOPCM : 0)
+ | (shouldCheckNumberOfBondedInteractions ? CGLO_CHECK_NUMBER_OF_BONDED_INTERACTIONS : 0)
| CGLO_SCALEEKIN
);
/* explanation of above:
time step kinetic energy for the pressure (always true now, since we want accurate statistics).
b) If we are using EkinAveEkin for the kinetic energy for the temperature control, we still feed in
EkinAveVel because it's needed for the pressure */
+ checkNumberOfBondedInteractions(fplog, cr, totalNumberOfBondedInteractions,
+ top_global, top, state,
+ &shouldCheckNumberOfBondedInteractions);
wallcycle_start(wcycle, ewcUPDATE);
}
/* temperature scaling and pressure scaling to produce the extended variables at t+dt */
{
m_add(force_vir, shake_vir, total_vir); /* we need the un-dispersion corrected total vir here */
trotter_update(ir, step, ekind, enerd, state, total_vir, mdatoms, &MassQ, trotter_seq, ettTSEQ2);
- }
- else
- {
- if (bExchanged)
+
+ /* TODO This is only needed when we're about to write
+ * a checkpoint, because we use it after the restart
+ * (in a kludge?). But what should we be doing if
+ * startingFromCheckpoint or bInitStep are true? */
+ if (inputrecNptTrotter(ir) || inputrecNphTrotter(ir))
{
- wallcycle_stop(wcycle, ewcUPDATE);
- /* We need the kinetic energy at minus the half step for determining
- * the full step kinetic energy and possibly for T-coupling.*/
- /* This may not be quite working correctly yet . . . . */
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
- wcycle, enerd, NULL, NULL, NULL, NULL, mu_tot,
- constr, NULL, FALSE, state->box,
- top_global, &bSumEkinhOld,
- CGLO_RERUNMD | CGLO_GSTAT | CGLO_TEMPERATURE);
- wallcycle_start(wcycle, ewcUPDATE);
+ copy_mat(shake_vir, state->svir_prev);
+ copy_mat(force_vir, state->fvir_prev);
+ }
+ if (inputrecNvtTrotter(ir) && ir->eI == eiVV)
+ {
+ /* update temperature and kinetic energy now that step is over - this is the v(t+dt) point */
+ enerd->term[F_TEMP] = sum_ekin(&(ir->opts), ekind, NULL, (ir->eI == eiVV), FALSE);
+ enerd->term[F_EKIN] = trace(ekind->ekin);
}
}
- }
- if (bTrotter && !bInitStep)
- {
- /* TODO This is only needed when we're about to write
- * a checkpoint, because we use it after the restart
- * (in a kludge?). But what should we be doing if
- * startingFromCheckpoint or bInitStep are true? */
- if (IR_NPT_TROTTER(ir) || IR_NPH_TROTTER(ir))
- {
- copy_mat(shake_vir, state->svir_prev);
- copy_mat(force_vir, state->fvir_prev);
- }
- if (IR_NVT_TROTTER(ir) && ir->eI == eiVV)
+ else if (bExchanged)
{
- /* update temperature and kinetic energy now that step is over - this is the v(t+dt) point */
- enerd->term[F_TEMP] = sum_ekin(&(ir->opts), ekind, NULL, (ir->eI == eiVV), FALSE);
- enerd->term[F_EKIN] = trace(ekind->ekin);
+ wallcycle_stop(wcycle, ewcUPDATE);
+                    /* We need the kinetic energy at the previous half step for
+                     * determining the full-step kinetic energy and possibly for
+                     * T-coupling. This may not be working quite correctly yet... */
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
+ wcycle, enerd, NULL, NULL, NULL, NULL, mu_tot,
+ constr, &nullSignaller, state->box,
+ NULL, &bSumEkinhOld,
+ CGLO_GSTAT | CGLO_TEMPERATURE);
+ wallcycle_start(wcycle, ewcUPDATE);
}
}
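For background on the v(t+dt) update above, where `sum_ekin` fills `enerd->term[F_TEMP]` and `trace(ekind->ekin)` gives `enerd->term[F_EKIN]`: the instantaneous temperature follows from equipartition, T = 2 Ekin / (ndf kB). A minimal sketch in GROMACS units; the helper name and the fixed number of degrees of freedom are illustrative, not part of the patch:

```cpp
#include <cassert>
#include <cmath>

// Illustrative only -- not GROMACS code. Equipartition relates the
// kinetic energy accumulated by sum_ekin() to the instantaneous
// temperature: Ekin = (ndf/2) * kB * T, hence T = 2*Ekin/(ndf*kB).
double instantaneousTemperature(double ekin, double ndf)
{
    const double kB = 0.0083144621; // Boltzmann constant in kJ/(mol K)
    return 2.0 * ekin / (ndf * kB);
}
```

Round-tripping a known temperature through the kinetic energy recovers it exactly, which is a quick sanity check on the units.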
/* if it's the initial step, we performed this first step just to get the constraint virial */
}
/* compute the conserved quantity */
- if (bVV)
+ if (EI_VV(ir->eI))
{
saved_conserved_quantity = compute_conserved_from_auxiliary(ir, state, &MassQ);
if (ir->eI == eiVV)
*/
do_md_trajectory_writing(fplog, cr, nfile, fnm, step, step_rel, t,
ir, state, state_global, top_global, fr,
- outf, mdebin, ekind, f, f_global,
+ outf, mdebin, ekind, f,
&nchkpt,
bCPT, bRerunMD, bLastStep, (Flags & MD_CONFOUT),
bSumEkinhOld);
bIMDstep = do_IMD(ir->bIMD, step, cr, bNS, state->box, state->x, ir, t, wcycle);
/* kludge -- virial is lost with restart for MTTK NPT control. Must reload (saved earlier). */
- if (bStartingFromCpt && (IR_NPT_TROTTER(ir) || IR_NPH_TROTTER(ir)))
+ if (startingFromCheckpoint && (inputrecNptTrotter(ir) || inputrecNphTrotter(ir)))
{
copy_mat(state->svir_prev, shake_vir);
copy_mat(state->fvir_prev, force_vir);
/* Check whether everything is still allright */
if (((int)gmx_get_stop_condition() > handled_stop_condition)
-#ifdef GMX_THREAD_MPI
+#if GMX_THREAD_MPI
&& MASTER(cr)
#endif
)
{
- /* this is just make gs.sig compatible with the hack
- of sending signals around by MPI_Reduce with together with
+ int nsteps_stop = -1;
+
+ /* this just makes signals[].sig compatible with the hack
+ of sending signals around by MPI_Reduce together with
other floats */
if (gmx_get_stop_condition() == gmx_stop_cond_next_ns)
{
- gs.sig[eglsSTOPCOND] = 1;
+ signals[eglsSTOPCOND].sig = 1;
+ nsteps_stop = std::max(ir->nstlist, 2*nstglobalcomm);
}
if (gmx_get_stop_condition() == gmx_stop_cond_next)
{
- gs.sig[eglsSTOPCOND] = -1;
+ signals[eglsSTOPCOND].sig = -1;
+ nsteps_stop = nstglobalcomm + 1;
}
- /* < 0 means stop at next step, > 0 means stop at next NS step */
if (fplog)
{
fprintf(fplog,
- "\n\nReceived the %s signal, stopping at the next %sstep\n\n",
- gmx_get_signal_name(),
- gs.sig[eglsSTOPCOND] == 1 ? "NS " : "");
+ "\n\nReceived the %s signal, stopping within %d steps\n\n",
+ gmx_get_signal_name(), nsteps_stop);
fflush(fplog);
}
fprintf(stderr,
- "\n\nReceived the %s signal, stopping at the next %sstep\n\n",
- gmx_get_signal_name(),
- gs.sig[eglsSTOPCOND] == 1 ? "NS " : "");
+ "\n\nReceived the %s signal, stopping within %d steps\n\n",
+ gmx_get_signal_name(), nsteps_stop);
fflush(stderr);
handled_stop_condition = (int)gmx_get_stop_condition();
}
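The comment in this hunk about "sending signals around by MPI_Reduce together with other floats" refers to encoding small integer signals as reals so they can ride along in the reduction already used for energy sums. A hedged sketch of that idea, with the MPI reduce replaced by a plain sum over per-rank values; the function name is hypothetical:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Sketch only -- in a real run the sum below is performed by an
// MPI reduction across all ranks. Each rank contributes -1, 0, or +1
// as a float; the sign of the reduced sum recovers the signal.
int decodeReducedSignal(const std::vector<float> &perRankSignals)
{
    float sum = std::accumulate(perRankSignals.begin(), perRankSignals.end(), 0.0f);
    return (sum > 0.5f) ? 1 : ((sum < -0.5f) ? -1 : 0);
}
```

This is why `signals[].sig` must be representable exactly as a float: the reduction machinery does not distinguish signal slots from energy terms.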
else if (MASTER(cr) && (bNS || ir->nstlist <= 0) &&
(max_hours > 0 && elapsed_time > max_hours*60.0*60.0*0.99) &&
- gs.sig[eglsSTOPCOND] == 0 && gs.set[eglsSTOPCOND] == 0)
+ signals[eglsSTOPCOND].sig == 0 && signals[eglsSTOPCOND].set == 0)
{
/* Signal to terminate the run */
- gs.sig[eglsSTOPCOND] = 1;
+ signals[eglsSTOPCOND].sig = 1;
if (fplog)
{
fprintf(fplog, "\nStep %s: Run time exceeded %.3f hours, will terminate the run\n", gmx_step_str(step, sbuf), max_hours*0.99);
elapsed_time > max_hours*60.0*60.0*0.495)
{
/* Set flag that will communicate the signal to all ranks in the simulation */
- gs.sig[eglsRESETCOUNTERS] = 1;
+ signals[eglsRESETCOUNTERS].sig = 1;
}
/* In parallel we only have to check for checkpointing in steps
cpt_period >= 0 &&
(cpt_period == 0 ||
elapsed_time >= nchkpt*cpt_period*60.0)) &&
- gs.set[eglsCHKPT] == 0)
+ signals[eglsCHKPT].set == 0)
{
- gs.sig[eglsCHKPT] = 1;
+ signals[eglsCHKPT].sig = 1;
}
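The checkpoint condition above (where `cpt_period` is in minutes, `elapsed_time` in seconds, and `nchkpt` counts checkpoints written so far) can be sketched as a standalone predicate; the helper name is illustrative, not part of the patch:

```cpp
#include <cassert>

// Sketch of the checkpoint-period test in the patch: a negative
// period disables time-based checkpointing, a zero period checkpoints
// at every opportunity, and otherwise we wait until nchkpt full
// periods of wall time have elapsed.
bool timeForCheckpoint(double elapsedSeconds, double cptPeriodMinutes, int nchkpt)
{
    return cptPeriodMinutes >= 0 &&
           (cptPeriodMinutes == 0 || elapsedSeconds >= nchkpt * cptPeriodMinutes * 60.0);
}
```
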
+ /* ######### START SECOND UPDATE STEP ################# */
+
    /* At the start of the step, randomize or scale the velocities (if vv);
       restrictions on Andersen coupling are checked in preprocessing. */
TRUE, bCalcVir);
}
}
- /* ######### START SECOND UPDATE STEP ################# */
/* Box is changed in update() when we do pressure coupling,
* but we should still use the old box for energy corrections and when
* writing it to the energy file, so it matches the trajectory files for
update_pcouple(fplog, step, ir, state, pcoupl_mu, M, bInitStep);
}
- if (bVV)
+ if (EI_VV(ir->eI))
{
- bUpdateDoLR = (fr->bTwinRange && do_per_step(step, ir->nstcalclr));
-
/* velocity half-step update */
- update_coords(fplog, step, ir, mdatoms, state, fr->bMolPBC, f,
- bUpdateDoLR, fr->f_twin, bCalcVir ? &fr->vir_twin_constr : NULL, fcd,
- ekind, M, upd, FALSE, etrtVELOCITY2,
- cr, nrnb, constr, &top->idef);
+ update_coords(fplog, step, ir, mdatoms, state, f, fcd,
+ ekind, M, upd, etrtVELOCITY2,
+ cr, constr);
}
/* Above, initialize just copies ekinh into ekin,
}
copy_rvecn(state->x, cbuf, 0, state->natoms);
}
- bUpdateDoLR = (fr->bTwinRange && do_per_step(step, ir->nstcalclr));
- update_coords(fplog, step, ir, mdatoms, state, fr->bMolPBC, f,
- bUpdateDoLR, fr->f_twin, bCalcVir ? &fr->vir_twin_constr : NULL, fcd,
- ekind, M, upd, bInitStep, etrtPOSITION, cr, nrnb, constr, &top->idef);
+ update_coords(fplog, step, ir, mdatoms, state, f, fcd,
+ ekind, M, upd, etrtPOSITION, cr, constr);
wallcycle_stop(wcycle, ewcUPDATE);
update_constraints(fplog, step, &dvdl_constr, ir, mdatoms, state,
cr, nrnb, wcycle, upd, constr,
FALSE, bCalcVir);
- if (bCalcVir && bUpdateDoLR && ir->nstcalclr > 1)
- {
- /* Correct the virial for multiple time stepping */
- m_sub(shake_vir, fr->vir_twin_constr, shake_vir);
- }
-
if (ir->eI == eiVVAK)
{
/* erase F_EKIN and F_TEMP here? */
/* just compute the kinetic energy at the half step to perform a trotter step */
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
wcycle, enerd, force_vir, shake_vir, total_vir, pres, mu_tot,
- constr, NULL, FALSE, lastbox,
- top_global, &bSumEkinhOld,
- cglo_flags | CGLO_TEMPERATURE
+ constr, &nullSignaller, lastbox,
+ NULL, &bSumEkinhOld,
+ (bGStat ? CGLO_GSTAT : 0) | CGLO_TEMPERATURE
);
wallcycle_start(wcycle, ewcUPDATE);
trotter_update(ir, step, ekind, enerd, state, total_vir, mdatoms, &MassQ, trotter_seq, ettTSEQ4);
            /* now we know the scaling, we can compute the positions again */
copy_rvecn(cbuf, state->x, 0, state->natoms);
- bUpdateDoLR = (fr->bTwinRange && do_per_step(step, ir->nstcalclr));
-
- update_coords(fplog, step, ir, mdatoms, state, fr->bMolPBC, f,
- bUpdateDoLR, fr->f_twin, bCalcVir ? &fr->vir_twin_constr : NULL, fcd,
- ekind, M, upd, bInitStep, etrtPOSITION, cr, nrnb, constr, &top->idef);
+ update_coords(fplog, step, ir, mdatoms, state, f, fcd,
+ ekind, M, upd, etrtPOSITION, cr, constr);
wallcycle_stop(wcycle, ewcUPDATE);
/* do we need an extra constraint here? just need to copy out of state->v to upd->xp? */
cr, nrnb, wcycle, upd, NULL,
FALSE, bCalcVir);
}
- if (bVV)
+ if (EI_VV(ir->eI))
{
        /* this factor of 2 correction is necessary
because half of the constraint force is removed
* non-communication steps, but we need to calculate
* the kinetic energy one step before communication.
*/
- if (bGStat || (!EI_VV(ir->eI) && do_per_step(step+1, nstglobalcomm)))
- {
- compute_globals(fplog, gstat, cr, ir, fr, ekind, state, state_global, mdatoms, nrnb, vcm,
- wcycle, enerd, force_vir, shake_vir, total_vir, pres, mu_tot,
- constr, &gs,
- (step_rel % gs.nstms == 0) &&
- (multisim_nsteps < 0 || (step_rel < multisim_nsteps)),
- lastbox,
- top_global, &bSumEkinhOld,
- cglo_flags
- | (!EI_VV(ir->eI) || bRerunMD ? CGLO_ENERGY : 0)
- | (!EI_VV(ir->eI) && bStopCM ? CGLO_STOPCM : 0)
- | (!EI_VV(ir->eI) ? CGLO_TEMPERATURE : 0)
- | (!EI_VV(ir->eI) || bRerunMD ? CGLO_PRESSURE : 0)
- | CGLO_CONSTRAINT
- );
+ {
+ // Organize to do inter-simulation signalling on steps if
+ // and when algorithms require it.
+ bool doInterSimSignal = (!bFirstStep && bDoReplEx) || bUsingEnsembleRestraints;
+
+ if (bGStat || (!EI_VV(ir->eI) && do_per_step(step+1, nstglobalcomm)) || doInterSimSignal)
+ {
+                // Since we're already communicating at this step, we
+                // can propagate intra-simulation signals. Note that
+                // check_nstglobalcomm is responsible for choosing a
+                // value of nstglobalcomm through which bGStat
+                // periodically becomes true, so we can't get into a
+                // situation where e.g. checkpointing can't be
+                // signalled.
+ bool doIntraSimSignal = true;
+ SimulationSignaller signaller(&signals, cr, doInterSimSignal, doIntraSimSignal);
+
+ compute_globals(fplog, gstat, cr, ir, fr, ekind, state, mdatoms, nrnb, vcm,
+ wcycle, enerd, force_vir, shake_vir, total_vir, pres, mu_tot,
+ constr, &signaller,
+ lastbox,
+ &totalNumberOfBondedInteractions, &bSumEkinhOld,
+ (bGStat ? CGLO_GSTAT : 0)
+ | (!EI_VV(ir->eI) || bRerunMD ? CGLO_ENERGY : 0)
+ | (!EI_VV(ir->eI) && bStopCM ? CGLO_STOPCM : 0)
+ | (!EI_VV(ir->eI) ? CGLO_TEMPERATURE : 0)
+ | (!EI_VV(ir->eI) || bRerunMD ? CGLO_PRESSURE : 0)
+ | CGLO_CONSTRAINT
+ | (shouldCheckNumberOfBondedInteractions ? CGLO_CHECK_NUMBER_OF_BONDED_INTERACTIONS : 0)
+ );
+ checkNumberOfBondedInteractions(fplog, cr, totalNumberOfBondedInteractions,
+ top_global, top, state,
+ &shouldCheckNumberOfBondedInteractions);
+ }
}
/* ############# END CALC EKIN AND PRESSURE ################# */
but what we actually need entering the new cycle is the new shake_vir value. Ideally, we could
generate the new shake_vir, but test the veta value for convergence. This will take some thought. */
- if (ir->efep != efepNO && (!bVV || bRerunMD))
+ if (ir->efep != efepNO && (!EI_VV(ir->eI) || bRerunMD))
{
/* Sum up the foreign energy and dhdl terms for md and sd.
Currently done every step so that dhdl is correct in the .edr */
}
enerd->term[F_ETOT] = enerd->term[F_EPOT] + enerd->term[F_EKIN];
- if (bVV)
+ if (EI_VV(ir->eI))
{
enerd->term[F_ECONSERVED] = enerd->term[F_ETOT] + saved_conserved_quantity;
}
print_ebin(mdoutf_get_fp_ene(outf), do_ene, do_dr, do_or, do_log ? fplog : NULL,
step, t,
- eprNORMAL, bCompact, mdebin, fcd, groups, &(ir->opts));
+ eprNORMAL, mdebin, fcd, groups, &(ir->opts));
if (ir->bPull)
{
dd_partition_system(fplog, step, cr, TRUE, 1,
state_global, top_global, ir,
state, &f, mdatoms, top, fr,
- vsite, shellfc, constr,
+ vsite, constr,
nrnb, wcycle, FALSE);
+ shouldCheckNumberOfBondedInteractions = true;
+ update_realloc(upd, state->nalloc);
}
- bFirstStep = FALSE;
- bInitStep = FALSE;
- bStartingFromCpt = FALSE;
+ bFirstStep = FALSE;
+ bInitStep = FALSE;
+ startingFromCheckpoint = FALSE;
/* ####### SET VARIABLES FOR NEXT ITERATION IF THEY STILL NEED IT ###### */
/* With all integrators, except VV, we need to retain the pressure
if (MASTER(cr))
{
/* read next frame from input trajectory */
- bNotLastFrame = read_next_frame(oenv, status, &rerun_fr);
+ bLastStep = !read_next_frame(oenv, status, &rerun_fr);
}
if (PAR(cr))
{
- rerun_parallel_comm(cr, &rerun_fr, &bNotLastFrame);
+ rerun_parallel_comm(cr, &rerun_fr, &bLastStep);
}
}
/* If it is time to reset counters, set a flag that remains
true until counters actually get reset */
if (step_rel == wcycle_get_reset_counters(wcycle) ||
- gs.set[eglsRESETCOUNTERS] != 0)
+ signals[eglsRESETCOUNTERS].set != 0)
{
if (pme_loadbal_is_active(pme_loadbal))
{
/* Correct max_hours for the elapsed time */
max_hours -= elapsed_time/(60.0*60.0);
/* If mdrun -maxh -resethway was active, it can only trigger once */
- bResetCountersHalfMaxH = FALSE; /* TODO move this to where gs.sig[eglsRESETCOUNTERS] is set */
+ bResetCountersHalfMaxH = FALSE; /* TODO move this to where signals[eglsRESETCOUNTERS].sig is set */
/* Reset can only happen once, so clear the triggering flag. */
- gs.set[eglsRESETCOUNTERS] = 0;
+ signals[eglsRESETCOUNTERS].set = 0;
}
/* If bIMD is TRUE, the master updates the IMD energy record and sends positions to VMD client */
}
/* End of main MD loop */
- debug_gmx();
/* Closing TNG files can include compressing data. Therefore it is good to do that
* before stopping the time measurements. */
if (ir->nstcalcenergy > 0 && !bRerunMD)
{
print_ebin(mdoutf_get_fp_ene(outf), FALSE, FALSE, FALSE, fplog, step, t,
- eprAVER, FALSE, mdebin, fcd, groups, &(ir->opts));
+ eprAVER, mdebin, fcd, groups, &(ir->opts));
}
}
done_mdoutf(outf);
- debug_gmx();
if (bPMETune)
{
pme_loadbal_done(pme_loadbal, cr, fplog, use_GPU(fr->nbv));
}
- if (shellfc && fplog)
- {
- fprintf(fplog, "Fraction of iterations that converged: %.2f %%\n",
- (nconverged*100.0)/step_rel);
- fprintf(fplog, "Average number of force evaluations per MD step: %.2f\n\n",
- tcount/step_rel);
- }
+ done_shellfc(fplog, shellfc, step_rel);
if (repl_ex_nst > 0 && MASTER(cr))
{
print_replica_exchange_statistics(fplog, repl_ex);
}
+ // Clean up swapcoords
+ if (ir->eSwapCoords != eswapNO)
+ {
+ finish_swapcoords(ir->swap);
+ }
+
/* IMD cleanup, if bIMD is TRUE. */
IMD_finalize(ir->bIMD, ir->imd);
walltime_accounting_set_nsteps_done(walltime_accounting, step_rel);
+ if (step_rel >= wcycle_get_reset_counters(wcycle) &&
+ signals[eglsRESETCOUNTERS].set == 0 &&
+ !bResetCountersHalfMaxH)
+ {
+ walltime_accounting_set_valid_finish(walltime_accounting);
+ }
return 0;
}
*
* Copyright (c) 1991-2000, University of Groningen, The Netherlands.
* Copyright (c) 2001-2004, The GROMACS development team.
- * Copyright (c) 2011,2012,2013,2014,2015,2016, by the GROMACS development team, led by
+ * Copyright (c) 2011,2012,2013,2014,2015,2016,2017, by the GROMACS development team, led by
* Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
* and including many others, as listed in the AUTHORS file in the
* top-level source directory and at http://www.gromacs.org.
#include <stdio.h>
#include <string.h>
+#include "gromacs/commandline/filenm.h"
#include "gromacs/commandline/pargs.h"
-#include "gromacs/fileio/filenm.h"
-#include "gromacs/legacyheaders/macros.h"
-#include "gromacs/legacyheaders/main.h"
-#include "gromacs/legacyheaders/mdrun.h"
-#include "gromacs/legacyheaders/network.h"
-#include "gromacs/legacyheaders/readinp.h"
-#include "gromacs/legacyheaders/typedefs.h"
-#include "gromacs/legacyheaders/types/commrec.h"
+#include "gromacs/fileio/readinp.h"
+#include "gromacs/gmxlib/network.h"
+#include "gromacs/mdlib/main.h"
+#include "gromacs/mdlib/mdrun.h"
#include "gromacs/mdrunutility/handlerestart.h"
+#include "gromacs/mdtypes/commrec.h"
+#include "gromacs/utility/arraysize.h"
#include "gromacs/utility/fatalerror.h"
#include "mdrun_main.h"
+#include "runner.h"
/*! \brief Return whether either of the command-line parameters that
* will trigger a multi-simulation is set */
"terminated only when the time limit set by [TT]-maxh[tt] is reached (if any)"
"or upon receiving a signal."
"[PAR]",
- "When [TT]mdrun[tt] receives a TERM signal, it will set nsteps to the current",
- "step plus one. When [TT]mdrun[tt] receives an INT signal (e.g. when ctrl+C is",
- "pressed), it will stop after the next neighbor search step ",
- "(with nstlist=0 at the next step).",
+ "When [TT]mdrun[tt] receives a TERM signal, it will stop as soon as",
    "a checkpoint file can be written, i.e. after the next global communication step.",
+ "When [TT]mdrun[tt] receives an INT signal (e.g. when ctrl+C is",
+ "pressed), it will stop at the next neighbor search step or at the",
+ "second global communication step, whichever happens later.",
"In both cases all the usual output will be written to file.",
"When running with MPI, a signal to one of the [TT]mdrun[tt] ranks",
"is sufficient, this signal should not be sent to mpirun or",
"the [TT]mdrun[tt] process that is the parent of the others.",
"[PAR]",
"Interactive molecular dynamics (IMD) can be activated by using at least one",
- "of the three IMD switches: The [TT]-imdterm[tt] switch allows to terminate",
+ "of the three IMD switches: The [TT]-imdterm[tt] switch allows one to terminate",
"the simulation from the molecular viewer (e.g. VMD). With [TT]-imdwait[tt],",
"[TT]mdrun[tt] pauses whenever no IMD client is connected. Pulling from the",
"IMD remote can be turned on by [TT]-imdpull[tt].",
{ efXVG, "-dhdl", "dhdl", ffOPTWR },
{ efXVG, "-field", "field", ffOPTWR },
{ efXVG, "-table", "table", ffOPTRD },
- { efXVG, "-tabletf", "tabletf", ffOPTRD },
{ efXVG, "-tablep", "tablep", ffOPTRD },
{ efXVG, "-tableb", "table", ffOPTRDMULT },
{ efTRX, "-rerun", "rerun", ffOPTRD },
{ efLOG, "-rs", "rotslabs", ffOPTWR },
{ efLOG, "-rt", "rottorque", ffOPTWR },
{ efMTX, "-mtx", "nm", ffOPTWR },
- { efNDX, "-dn", "dipole", ffOPTWR },
{ efRND, "-multidir", NULL, ffOPTRDMULT},
{ efDAT, "-membed", "membed", ffOPTRD },
{ efTOP, "-mp", "membed", ffOPTRD },
const int NFILE = asize(fnm);
/* Command line options ! */
- gmx_bool bDDBondCheck = TRUE;
- gmx_bool bDDBondComm = TRUE;
- gmx_bool bTunePME = TRUE;
- gmx_bool bVerbose = FALSE;
- gmx_bool bCompact = TRUE;
- gmx_bool bRerunVSite = FALSE;
- gmx_bool bConfout = TRUE;
- gmx_bool bReproducible = FALSE;
- gmx_bool bIMDwait = FALSE;
- gmx_bool bIMDterm = FALSE;
- gmx_bool bIMDpull = FALSE;
-
- int npme = -1;
- int nstlist = 0;
- int nmultisim = 0;
- int nstglobalcomm = -1;
- int repl_ex_nst = 0;
- int repl_ex_seed = -1;
- int repl_ex_nex = 0;
- int nstepout = 100;
- int resetstep = -1;
- gmx_int64_t nsteps = -2; /* the value -2 means that the mdp option will be used */
- int imdport = 8888; /* can be almost anything, 8888 is easy to remember */
-
- rvec realddxyz = {0, 0, 0};
- const char *ddno_opt[ddnoNR+1] =
+ gmx_bool bDDBondCheck = TRUE;
+ gmx_bool bDDBondComm = TRUE;
+ gmx_bool bTunePME = TRUE;
+ gmx_bool bVerbose = FALSE;
+ gmx_bool bRerunVSite = FALSE;
+ gmx_bool bConfout = TRUE;
+ gmx_bool bReproducible = FALSE;
+ gmx_bool bIMDwait = FALSE;
+ gmx_bool bIMDterm = FALSE;
+ gmx_bool bIMDpull = FALSE;
+
+ int npme = -1;
+ int nstlist = 0;
+ int nmultisim = 0;
+ int nstglobalcomm = -1;
+ int repl_ex_nst = 0;
+ int repl_ex_seed = -1;
+ int repl_ex_nex = 0;
+ int nstepout = 100;
+ int resetstep = -1;
+ gmx_int64_t nsteps = -2; /* the value -2 means that the mdp option will be used */
+ int imdport = 8888; /* can be almost anything, 8888 is easy to remember */
+
+ rvec realddxyz = {0, 0, 0};
+ const char *ddrank_opt[ddrankorderNR+1] =
{ NULL, "interleave", "pp_pme", "cartesian", NULL };
- const char *dddlb_opt[] =
+ const char *dddlb_opt[] =
{ NULL, "auto", "no", "yes", NULL };
- const char *thread_aff_opt[threadaffNR+1] =
+ const char *thread_aff_opt[threadaffNR+1] =
{ NULL, "auto", "on", "off", NULL };
- const char *nbpu_opt[] =
+ const char *nbpu_opt[] =
{ NULL, "auto", "cpu", "gpu", "gpu_cpu", NULL };
- real rdd = 0.0, rconstr = 0.0, dlb_scale = 0.8, pforce = -1;
- char *ddcsx = NULL, *ddcsy = NULL, *ddcsz = NULL;
- real cpt_period = 15.0, max_hours = -1;
- gmx_bool bTryToAppendFiles = TRUE;
- gmx_bool bKeepAndNumCPT = FALSE;
- gmx_bool bResetCountersHalfWay = FALSE;
- output_env_t oenv = NULL;
+ real rdd = 0.0, rconstr = 0.0, dlb_scale = 0.8, pforce = -1;
+ char *ddcsx = NULL, *ddcsy = NULL, *ddcsz = NULL;
+ real cpt_period = 15.0, max_hours = -1;
+ gmx_bool bTryToAppendFiles = TRUE;
+ gmx_bool bKeepAndNumCPT = FALSE;
+ gmx_bool bResetCountersHalfWay = FALSE;
+ gmx_output_env_t *oenv = NULL;
/* Non transparent initialization of a complex gmx_hw_opt_t struct.
* But unfortunately we are not allowed to call a function here,
{ "-dd", FALSE, etRVEC, {&realddxyz},
"Domain decomposition grid, 0 is optimize" },
- { "-ddorder", FALSE, etENUM, {ddno_opt},
+ { "-ddorder", FALSE, etENUM, {ddrank_opt},
"DD rank order" },
{ "-npme", FALSE, etINT, {&npme},
"Number of separate ranks to be used for PME, -1 is guess" },
{ "-nstlist", FALSE, etINT, {&nstlist},
"Set nstlist when using a Verlet buffer tolerance (0 is guess)" },
{ "-tunepme", FALSE, etBOOL, {&bTunePME},
- "Optimize PME load between PP/PME ranks or GPU/CPU" },
+ "Optimize PME load between PP/PME ranks or GPU/CPU (only with the Verlet cut-off scheme)" },
{ "-v", FALSE, etBOOL, {&bVerbose},
"Be loud and noisy" },
- { "-compact", FALSE, etBOOL, {&bCompact},
- "Write a compact log file" },
{ "-pforce", FALSE, etREAL, {&pforce},
"Print all forces larger than this (kJ/mol nm)" },
{ "-reprod", FALSE, etBOOL, {&bReproducible},
};
unsigned long Flags;
ivec ddxyz;
- int dd_node_order;
+ int dd_rank_order;
gmx_bool bDoAppendFiles, bStartFromCpt;
FILE *fplog;
int rc;
}
- /* we set these early because they might be used in init_multisystem()
- Note that there is the potential for npme>nnodes until the number of
- threads is set later on, if there's thread parallelization. That shouldn't
- lead to problems. */
- dd_node_order = nenum(ddno_opt);
- cr->npmenodes = npme;
+ dd_rank_order = nenum(ddrank_opt);
hw_opt.thread_affinity = nenum(thread_aff_opt);
if (nmultisim >= 1)
{
-#ifndef GMX_THREAD_MPI
- gmx_bool bParFn = (multidir == NULL);
- init_multisystem(cr, nmultisim, multidir, NFILE, fnm, bParFn);
+#if !GMX_THREAD_MPI
+ init_multisystem(cr, nmultisim, multidir, NFILE, fnm);
#else
gmx_fatal(FARGS, "mdrun -multi or -multidir are not supported with the thread-MPI library. "
"Please compile GROMACS with a proper external MPI library.");
#endif
}
- handleRestart(cr, bTryToAppendFiles, NFILE, fnm,
- &bDoAppendFiles, &bStartFromCpt);
+ if (!opt2bSet("-cpi", NFILE, fnm))
+ {
+ // If we are not starting from a checkpoint we never allow files to be appended
+ // to, since that has caused a ton of strange behaviour and bugs in the past.
+ if (opt2parg_bSet("-append", asize(pa), pa))
+ {
+ // If the user explicitly used the -append option, explain that it is not possible.
+ gmx_fatal(FARGS, "GROMACS can only append to files when restarting from a checkpoint.");
+ }
+ else
+ {
+ // If the user did not say anything explicit, just disable appending.
+ bTryToAppendFiles = FALSE;
+ }
+ }
+
+ handleRestart(cr, bTryToAppendFiles, NFILE, fnm, &bDoAppendFiles, &bStartFromCpt);
Flags = opt2bSet("-rerun", NFILE, fnm) ? MD_RERUN : 0;
Flags = Flags | (bDDBondCheck ? MD_DDBONDCHECK : 0);
Flags = Flags | (bConfout ? MD_CONFOUT : 0);
Flags = Flags | (bRerunVSite ? MD_RERUN_VSITE : 0);
Flags = Flags | (bReproducible ? MD_REPRODUCIBLE : 0);
- Flags = Flags | (bDoAppendFiles ? MD_APPENDFILES : 0);
+ Flags = Flags | (bDoAppendFiles ? MD_APPENDFILES : 0);
Flags = Flags | (opt2parg_bSet("-append", asize(pa), pa) ? MD_APPENDFILESSET : 0);
Flags = Flags | (bKeepAndNumCPT ? MD_KEEPANDNUMCPT : 0);
Flags = Flags | (bStartFromCpt ? MD_STARTFROMCPT : 0);
ddxyz[YY] = (int)(realddxyz[YY] + 0.5);
ddxyz[ZZ] = (int)(realddxyz[ZZ] + 0.5);
- rc = mdrunner(&hw_opt, fplog, cr, NFILE, fnm, oenv, bVerbose, bCompact,
- nstglobalcomm, ddxyz, dd_node_order, rdd, rconstr,
- dddlb_opt[0], dlb_scale, ddcsx, ddcsy, ddcsz,
- nbpu_opt[0], nstlist,
- nsteps, nstepout, resetstep,
- nmultisim, repl_ex_nst, repl_ex_nex, repl_ex_seed,
- pforce, cpt_period, max_hours, imdport, Flags);
+ rc = gmx::mdrunner(&hw_opt, fplog, cr, NFILE, fnm, oenv, bVerbose,
+ nstglobalcomm, ddxyz, dd_rank_order, npme, rdd, rconstr,
+ dddlb_opt[0], dlb_scale, ddcsx, ddcsy, ddcsz,
+ nbpu_opt[0], nstlist,
+ nsteps, nstepout, resetstep,
+ nmultisim, repl_ex_nst, repl_ex_nex, repl_ex_seed,
+ pforce, cpt_period, max_hours, imdport, Flags);
/* Log file has to be closed in mdrunner if we are appending to it
(fplog not set here) */