# Comment line(s) preceding each configuration document the main
# intent behind that configuration, so that we can correctly judge
# whether to preserve that during maintenance decisions.
+#
+# Other configurations might coincidentally test the same features
+# (e.g. because they are the current default), but it is appropriate
+# to intend to test each feature (or a feature combination) exactly
+# once, and for the intent to be reflected precisely in the
+# configuration syntax, so that the configurations are stable even
+# if the defaults change in future.
# Test the mdrun-only build
# TODO In combination with gmx from another build, arrange to run regressiontests
clang-3.7 double mpi no-openmp fftpack mdrun-only
# Test MPMD PME with thread-MPI
-# TODO Add double to this configuration if/when Carsten stablizes essentialdynamics tests
+# TODO Add double to this configuration if/when we stabilize the essentialdynamics tests
gcc-5 npme=1 nranks=2 no-openmp fftpack simd=avx_128_fma release
# Test non-default GMX_PREFER_STATIC_LIBS behavior
# Test older gcc
# Test oldest supported CUDA
gcc-4.6 gpu cuda-5.0 mpi npme=1 nranks=2 openmp x11 cmake-2.8.8
# Test newest gcc supported by newest CUDA shortly after the release
-# Test thread-MPI with CUDA
-# Test SIMD (AVX2_256) GPU code-path
+# Test SIMD implementation of pair search for GPU code-path
gcc-5 gpu cuda-8.0 openmp simd=avx2_256
# Test newest gcc supported by newest CUDA at time of release
# Test thread-MPI with CUDA
-gcc-4.8 gpu cuda-7.5 openmp release
+gcc-4.8 gpu thread-mpi cuda-7.5 openmp release
# Test with ThreadSanitizer
# Test AVX2_256 SIMD
gcc-4.9 tsan fftpack simd=avx2_256
# Test newest gcc at time of release
-# Test on MacOS
+# Test on MacOS (because gcc-6.1 is only available there)
gcc-6.1 double
# Test older clang
# Test double precision
# Test with AddressSanitizer
# Test without OpenMP
-clang-3.4 double no-openmp fftpack asan
+# Test thread-MPI
+clang-3.4 double thread-mpi no-openmp fftpack asan
# Test oldest supported MSVC on Windows
# Test newest supported MSVC on Windows
# Test newest cmake at time of release
# Test MKL
# Test without any MPI
-# Test on CentOS
+# Test on CentOS (because icc-16.0 is only available there)
icc-16.0 no-thread-mpi openmp mkl cmake-3.3.2 simd=avx_256
# Test AVX_128_FMA SIMD