cmake_minimum_required(VERSION 2.8.8)
+# When we require cmake >= 2.8.12, it will provide
+# CMAKE_MINIMUM_REQUIRED_VERSION automatically, but in the meantime we
+# need to set a variable, and it must have a different name.
+set(GMX_CMAKE_MINIMUM_REQUIRED_VERSION "2.8.8")
# CMake modules/macros are in a subdirectory to keep this file cleaner
# This needs to be set before project() in order to pick up toolchain files
mark_as_advanced(GMX_COOL_QUOTES)
gmx_add_cache_dependency(GMX_COOL_QUOTES BOOL "NOT GMX_FAHCORE" OFF)
-# decide on GPU settings based on user-settings and GPU/CUDA detection
+# Decide on GPU settings based on user-settings and GPU/CUDA detection.
+# We support CUDA >= v4.0 on *nix, but < v4.1 does not work with MSVC
+if(MSVC)
+ set(REQUIRED_CUDA_VERSION 4.1)
+else()
+ set(REQUIRED_CUDA_VERSION 4.0)
+endif()
+set(REQUIRED_CUDA_COMPUTE_CAPABILITY 2.0)
include(gmxManageGPU)
# Detect the architecture the compiler is targeting
if (NOT GMX_BUILD_MDRUN_ONLY)
add_subdirectory(doxygen)
+ add_subdirectory(install-guide)
add_subdirectory(share)
add_subdirectory(scripts)
endif()
#
# This file is part of the GROMACS molecular simulation package.
#
-# Copyright (c) 2012,2013, by the GROMACS development team, led by
+# Copyright (c) 2012,2013,2014, by the GROMACS development team, led by
# Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
# and including many others, as listed in the AUTHORS file in the
# top-level source directory and at http://www.gromacs.org.
if(CPACK_SOURCE_PACKAGE_FILE_NAME) #building source package
get_filename_component(CMAKE_BINARY_DIR ${CPACK_OUTPUT_CONFIG_FILE} PATH)
if (NOT EXISTS "${CMAKE_BINARY_DIR}/share/man/man1/gmx-view.1" OR
+ NOT EXISTS "${CMAKE_BINARY_DIR}/INSTALL" OR
NOT EXISTS "${CMAKE_BINARY_DIR}/share/html/final/online.html")
message(FATAL_ERROR
"To create a complete source package all man and HTML pages need "
- "to be generated. "
- "You need to run 'make man html' or set GMX_BUILD_HELP=ON to get "
- "them automatically built together with the binaries.")
+ "to be generated, and the INSTALL file generated. "
+ "Run 'make man html' to build the man and HTML parts. You can also set "
+ "GMX_BUILD_HELP=ON to have them built automatically with the binaries; "
+ "in that case, you must also run 'make install-guide'.")
endif()
endif()
+++ /dev/null
-%
-% This file is part of the GROMACS molecular simulation package.
-%
-% Copyright (c) 2013,2014, by the GROMACS development team, led by
-% Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
-% and including many others, as listed in the AUTHORS file in the
-% top-level source directory and at http://www.gromacs.org.
-%
-% GROMACS is free software; you can redistribute it and/or
-% modify it under the terms of the GNU Lesser General Public License
-% as published by the Free Software Foundation; either version 2.1
-% of the License, or (at your option) any later version.
-%
-% GROMACS is distributed in the hope that it will be useful,
-% but WITHOUT ANY WARRANTY; without even the implied warranty of
-% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-% Lesser General Public License for more details.
-%
-% You should have received a copy of the GNU Lesser General Public
-% License along with GROMACS; if not, see
-% http://www.gnu.org/licenses, or write to the Free Software Foundation,
-% Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-%
-% If you want to redistribute modifications to GROMACS, please
-% consider that scientific software is very special. Version
-% control is crucial - bugs must be traceable. We will be happy to
-% consider code for inclusion in the official distribution, but
-% derived work must not be called official GROMACS. Details are found
-% in the README & COPYING files - if they are missing, get the
-% official version at http://www.gromacs.org.
-%
-% To help us fund GROMACS development, we humbly ask that you cite
-% the research papers on the package. Check out http://www.gromacs.org.
-
-% Process from LaTeX via XML to XHTML with
-% latexml --destination installguide.xml --xml installguide.tex
-% latexmlpost --destination installguide.xhtml --format=xhtml installguide.xml
-%
-% Crude hack to remove ugly symbols:
-% sed -e 's/[§]//g' -i installguide.xhtml
-%
-% Strip off header for pasting into the website at
-% http://www.gromacs.org/Documentation/Installation_Instructions:
-%
-% grep -A 99999 "class=\"main\"" installguide.xhtml > installguide_web.xhtml
-
-\documentclass[12pt,a4paper,twoside]{article}
-\usepackage{hyperref}
-% haven't made these work with LaTeXML yet...
-%\usepackage[strings]{underscore}
-%\usepackage[english]{babel}
-
-\title{GROMACS installation guide}
-
-% macros to keep style uniform
-\newcommand{\gromacs}{GROMACS}
-\newcommand{\nvidia}{NVIDIA}
-\newcommand{\cuda}{CUDA}
-\newcommand{\fftw}{FFTW}
-\newcommand{\mkl}{MKL}
-\newcommand{\mpi}{MPI}
-\newcommand{\threadmpi}{ThreadMPI}
-\newcommand{\openmpi}{OpenMPI}
-\newcommand{\openmp}{OpenMP}
-\newcommand{\lammpi}{LAM/MPI}
-\newcommand{\mpich}{MPICH}
-\newcommand{\cmake}{CMake}
-\newcommand{\sse}{SSE}
-\newcommand{\ssetwo}{SSE2}
-\newcommand{\avx}{AVX}
-\newcommand{\fft}{FFT}
-\newcommand{\blas}{BLAS}
-\newcommand{\lapack}{LAPACK}
-\newcommand{\vmd}{VMD}
-\newcommand{\pymol}{PyMOL}
-\newcommand{\grace}{Grace}
-\newcommand{\libxmltwo}{LibXML2}
-%\newcommand{\}{}
-
-% later, make CMake keep this version current for us
-\newcommand{\fftwversion}{3.3.2}
-\newcommand{\cmakeversion}{2.8.8}
-\newcommand{\cudaversion}{3.2}
-\newcommand{\gromacsversion}{5.0}
-
-\begin{document}
-\section{Building GROMACS}
-
-These instructions pertain to building \gromacs{} \gromacsversion{}
-and newer releases. For installation instructions for older \gromacs{}
-versions, see the documentation at
-\url{http://www.gromacs.org/Documentation/Installation_Instructions_4.5}.
-
-\section{Quick and dirty installation}
-
-\begin{enumerate}
-\item Get the latest version of your compiler.
-\item Check you have \cmake{} version \cmakeversion{} or later.
-\item Unpack the \gromacs{} tarball.
-\item Make a separate build directory and change to it.
-\item Run \cmake{} with the path to the source as an argument.
-\item Run \verb+make+ and \verb+make install+.
-\end{enumerate}
-Or, as a sequence of commands to execute:
-\begin{verbatim}
-tar xfz gromacs-5.0-beta1.tar.gz
-cd gromacs-5.0-beta1
-mkdir build
-cd build
-cmake .. -DGMX_BUILD_OWN_FFTW=ON
-make
-sudo make install
-\end{verbatim}
-This will first download and build the prerequisite FFT library, and
-then \gromacs{}. If you already have \fftw{} installed, you can omit
-that argument to \cmake{}. Overall, this build
-of \gromacs{} will be correct and reasonably fast on the
-machine upon which \cmake{} ran. If you want to get the maximum value
-for your hardware with \gromacs{}, you'll have to read further.
-Sadly, the interactions of hardware, libraries, and compilers
-are only going to continue to get more complex.
-
-\section{Prerequisites}
-\subsection{Platform}
-\gromacs{} can be compiled for any distribution of Linux, Mac OS X,
-Windows, BlueGene, Cray and many other
-architectures. Technically, it can be compiled on any platform with
-a C99 compiler, an ISO C++98 compiler, and supporting libraries,
-such as the GNU C library. However, \gromacs{} also comes with many
-hardware-specific extensions to provide very high performance on those
-platforms, and to enable these we have slightly more specific
-requirements since old compilers do not support new features, or they
-can be buggy. Not all of the C99 standard is required and some C89
-compilers (including Microsoft Visual C) will also be able to compile
-\gromacs{}.
-
-\subsection{Compiler}
-
-\gromacs{} requires an ANSI C compiler that complies with the C89
-standard, and an ISO C++98 compiler. For best performance, the
-\gromacs{} team strongly recommends you get the most recent version of
-your preferred compiler for your platform (e.g. GCC 4.8 or Intel 14.0
-or newer on x86 hardware). There is a large amount of \gromacs{} code
-introduced in version 4.6 that depends on effective compiler
-optimization to get high performance - the old raw assembly-language
-kernel routines are all gone. Unfortunately this makes \gromacs{} performance
-more sensitive to the compiler used, and the binary will only work on
-the hardware for which it is compiled.
-
-\begin{itemize}
-\item On Intel-based x86 hardware, we recommend using
-the GNU compilers version 4.7 or later, or Intel compilers version 12 or later,
-for best performance. The Intel compiler has historically been better at
-instruction scheduling, but recent gcc versions have proved to be as fast or
-sometimes faster than Intel.
-\item On AMD-based x86 hardware up through the ``K10'' microarchitecture
-(``Family 10h'', e.g. Thuban/Magny-Cours Opteron 6100-series
-processors), it is worth using the Intel compiler for better
-performance, but gcc version 4.7 and later are also reasonable.
-\item On the AMD Bulldozer architecture (Opteron 6200), AMD introduced fused multiply-add
-instructions and an ``FMA4'' instruction format not available on Intel x86 processors. Thus,
-on the most recent AMD processors you want to use gcc version 4.7 or later for better performance!
-The Intel compiler will only generate code for the subset also supported by Intel processors, and that
-is significantly slower right now.
-\item If you are running on Mac OS X, the best option is the Intel compiler.
-Both clang and gcc will work, but they produce lower performance and each have some
-shortcomings. Current Clang does not support OpenMP, and the current gcc ports do not
-support \avx{} instructions.
-\item For all non-x86 platforms, your best option is typically to use the vendor's
-default or recommended compiler, and check for specialized information below.
-\end{itemize}
-
-\subsubsection{Running in parallel}
-
-\gromacs{} can run in parallel on multiple cores of a single
-workstation using its built-in \threadmpi. No user action is required
-in order to enable this.
-
-If you wish to use the excellent native GPU support in \gromacs{},
-\nvidia{}'s \cuda{}
-\url{http://www.nvidia.com/object/cuda_home_new.html} software
-development kit version \cudaversion{} or later is required, and the
-latest version and driver supported by your hardware are strongly
-encouraged. \nvidia{} GPUs with at least \nvidia{} compute capability
-2.0 are required, e.g. Fermi or Kepler cards.
-
-If you wish to run in parallel on multiple machines across a network,
-you will need to have
-\begin{itemize}
-\item an \mpi{} library installed that supports the \mpi{} 1.3
- standard, and
-\item wrapper compilers that will compile code using that library.
-\end{itemize}
-The \gromacs{} team recommends \openmpi{}
-\url{http://www.open-mpi.org/} version 1.4.1 (or higher), \mpich{}
-\url{http://www.mpich.org/} version 1.4.1 (or higher), or your
-hardware vendor's \mpi{} installation. The most recent version of
-either of these is likely to be the best. More specialized networks
-might depend on accelerations only available in the vendor's library.
- \lammpi{}
-\url{http://www.lam-mpi.org/} might work, but since it has been
-deprecated for years, it is not supported.
-
-Often \openmp{} parallelism is an advantage for \gromacs{},
-but support for this is generally built into your compiler and detected
-automatically. The one common exception is Mac OS X, where the default
-clang compiler currently does not fully support OpenMP. You can install
-gcc version 4.7 instead, but the currently available binary distribution of gcc
-uses an old system assembler that does not support \avx{} acceleration
-instructions. There are some examples on the Internet where people have
-hacked this to work, but presently the only straightforward way to get
-both OpenMP and \avx{} support on Mac OS X is to get the Intel compiler.
-This may change when clang 3.4 becomes available.
-
-In summary, for maximum performance you will need to
-examine how you will use \gromacs{}, what hardware you plan to run
-on, and whether you can afford a non-free compiler for slightly better
-performance. The only way to find out is unfortunately to test different
-options and parallelization schemes for the actual simulations you
-want to run. You will still get {\em good}\, performance with the default
-build and runtime options, but if you truly want to push your hardware
-to the performance limit, the days of just blindly starting programs
-with '\verb+mdrun+' are gone.
-
-\subsection{CMake}
-
-\gromacs{} \gromacsversion{} uses the \cmake{} build system, and
-requires version \cmakeversion{} or higher. Lower
-versions will not work. You can check whether \cmake{} is installed,
-and what version it is, with \verb+cmake --version+. If you need to
-install \cmake{}, then first check whether your platform's package
-management system provides a suitable version, or visit
-\url{http://www.cmake.org/cmake/help/install.html} for pre-compiled
-binaries, source code and installation instructions. The \gromacs{}
-team recommends you install the most recent version of \cmake{} you
-can.
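Note that version strings cannot be compared as plain text (2.8.12 sorts before 2.8.8 lexically). A minimal sketch of such a check, assuming GNU sort's `-V` version-aware ordering; the detected value here is an example, not read from a real cmake install:

```shell
# Compare a detected CMake version against the required minimum using
# version-aware sorting. 'detected' is an illustrative value.
required=2.8.8
detected=2.8.12
lowest=$(printf '%s\n%s\n' "$required" "$detected" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
    echo "CMake is new enough"
else
    echo "CMake $detected is too old; need >= $required"
fi
```

A plain string comparison would reject 2.8.12 here, which is why the version-aware sort is needed.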
-
-\subsection{Fast Fourier Transform library}
-
-Many simulations in \gromacs{} make extensive use of fast Fourier transforms,
-and a software library to perform these is always required. We
-recommend \fftw{} \url{http://www.fftw.org/} (version 3 or higher
-only) or Intel's \mkl{} \url{http://software.intel.com/en-us/intel-mkl}.
-
-\subsubsection{\fftw{}}
-
-\fftw{} is likely to be available for your platform via its package
-management system, but there can be compatibility and significant
-performance issues associated with these packages. In particular,
-\gromacs{} simulations are normally run in single floating-point
-precision whereas the default \fftw{} package is normally in double
-precision, and good compiler options to use for \fftw{} when linked to
-\gromacs{} may not have been used. Accordingly, the \gromacs{} team
-recommends either
-\begin{itemize}
-\item that you permit the \gromacs{} installation to download and
- build \fftw{} \fftwversion{} from source automatically for you (use
- \verb+cmake -DGMX_BUILD_OWN_FFTW=ON+), or
-\item that you build \fftw{} from the source code.
-Note that the GROMACS-managed download of the FFTW tarball has a
-slight chance of posing a security risk. If you use this option, you
-will see a warning that advises how you can eliminate this risk
-(before the opportunity has arisen).
-\end{itemize}
-
-If you build \fftw{} from source yourself, get the most recent version
-and follow its installation guide available from \url{http://www.fftw.org}.
-Choose the precision (i.e. single or float vs.\ double) to match what you will
-later require for \gromacs{}. There is no need to compile with
-threading or \mpi{} support, but it does no harm. On x86 hardware,
-compile \emph{only} with \verb+--enable-sse2+ (regardless of
-precision) even if your processors can take advantage of \avx{}
-extensions. Since \gromacs{} uses fairly short transform lengths we
-do not benefit from the \fftw{} \avx{} acceleration, and because of
-memory system performance limitations, it can even degrade \gromacs{}
-performance by around 20\%. There is no way for \gromacs{} to
-limit the use to \ssetwo{} acceleration at run time if \avx{}
-support has been compiled into \fftw{}, so you need to set this at compile time.
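As a concrete sketch of such a build (the tarball name, prefix, and job count are illustrative; the commands are echoed through a `run` helper rather than executed, so swap the `echo` for real execution when building):

```shell
# Dry-run sketch of a single-precision FFTW build suitable for GROMACS.
run() { echo "+ $*"; }
run tar xfz fftw-3.3.2.tar.gz
run cd fftw-3.3.2
# --enable-float selects single precision; compile with SSE2 only, no AVX.
run ./configure --prefix='$HOME/fftw-3.3.2-float' --enable-float --enable-sse2
run make -j 4 install
```

Afterwards, the chosen prefix can be made visible to the GROMACS configuration via CMAKE_PREFIX_PATH.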
-
-\subsubsection{\mkl{}}
-
-Using \mkl{} with the Intel Compilers version 11 or higher is very simple. Set up your
-compiler environment correctly, perhaps with a command like
-\verb+source /path/to/compilervars.sh intel64+ (or consult your local
-documentation). Then set \verb+-DGMX_FFT_LIBRARY=mkl+ when you run
-\cmake{}. In this case, \gromacs{} will also use \mkl{} for \blas{}
-and \lapack{} (see \hyperlink{linear-algebra}{here}).
-
-Otherwise, you can get your hands dirty and configure \mkl{} by setting
-\begin{verbatim}
--DGMX_FFT_LIBRARY=mkl
--DMKL_LIBRARIES="/full/path/to/libone.so;/full/path/to/libtwo.so"
--DMKL_INCLUDE_DIR="/full/path/to/mkl/include"
-\end{verbatim}
-where the full list (and order!) of libraries you require is found in
-Intel's \mkl{} documentation for your system.
-
-\subsection{Optional build components}
-
-\begin{itemize}
-\item Compiling to run on \nvidia{} GPUs requires \cuda{}.
-\item Hardware-optimized \blas{} and \lapack{} libraries are useful
-  for a few of the \gromacs{} utilities focused on normal modes and
-  matrix manipulation, but they do not provide any benefits for normal
-  simulations. Configuring these is discussed
-  \hyperlink{linear-algebra}{here}.
-\item The built-in \gromacs{} trajectory viewer \verb+gmx view+ requires
- X11 and Motif/Lesstif libraries and header files. You may prefer
- to use third-party software for visualization, such as \vmd{}
- \url{http://www.ks.uiuc.edu/Research/vmd/} or \pymol{}
- \url{http://www.pymol.org/}.
-\item Running the \gromacs{} test suite requires \libxmltwo{}.
-\item Building the \gromacs{} manual requires ImageMagick, pdflatex,
-  and bibtex.
-\item The \gromacs{} utility programs often write data files in
- formats suitable for the \grace{} plotting tool, but it is
- straightforward to use these files in other plotting programs, too.
-\end{itemize}
-
-\section{Doing a build of \gromacs{}}
-
-This section will cover a general build of \gromacs{} with \cmake{},
-but it is not an exhaustive discussion of how to use \cmake{}. There
-are many resources available on the web, which we suggest you search
-for when you encounter problems not covered here. The material below
-applies specifically to builds on Unix-like systems, including Linux,
-and Mac OS X. For other platforms, see the specialist
-instructions below.
-
-\subsection{Configuring with \cmake{}}
-
-\cmake{} will run many tests on your system and do its best to work
-out how to build \gromacs{} for you. If you are building \gromacs{} on
-hardware that is identical to that where you will run \verb+mdrun+,
-then you can be sure that the defaults will be pretty good. The build
-configuration will for instance attempt to detect the specific hardware
-instructions available in your processor. However, if
-you want to control aspects of the build, there are plenty of things you
-can set manually.
-
-The best way to use \cmake{} to configure \gromacs{} is to do an
-``out-of-source'' build, by making another directory from which you
-will run \cmake{}. This can be a subdirectory or not; it doesn't
-matter. It also means you can never corrupt your source code by trying
-to build it! So, the only required argument on the \cmake{} command
-line is the name of the directory containing the
-\verb+CMakeLists.txt+ file of the code you want to build. For
-example, download the source tarball and use
-% TODO: keep up to date with new releases!
-\begin{verbatim}
-$ tar xfz gromacs-5.0-beta1.tar.gz
-$ cd gromacs-5.0-beta1
-$ mkdir build-cmake
-$ cd build-cmake
-$ cmake ..
-\end{verbatim}
-
-You will see \verb+cmake+ report the results of a large number of
-tests on your system made by \cmake{} and by \gromacs{}. These are
-written to the \cmake{} cache, kept in \verb+CMakeCache.txt+. You
-can edit this file by hand, but this is not recommended because it is
-easy to reach an inconsistent state. You should not attempt to move or
-copy this file to do another build, because file paths are hard-coded
-within it. If you mess things up, just delete this file and start
-again with '\verb+cmake+'.
-
-If there's a serious problem detected at this stage, then you will see
-a fatal error and some suggestions for how to overcome it. If you're
-not sure how to deal with that, please start by searching on the web
-(most computer problems already have known solutions!) and then
-consult the gmx-users mailing list. There are also informational
-warnings that you might like to take on board or not. Piping the
-output of \verb+cmake+ through \verb+less+ or \verb+tee+ can be
-useful, too.
-
-\cmake{} works in an iterative fashion, re-running each time a setting
-is changed to try to make sure other things are consistent. Once
-things seem consistent, the iterations stop. Once \verb+cmake+
-returns, you can see all the settings that were chosen and information
-about them by using e.g. the curses interface
-\begin{verbatim}
-$ ccmake ..
-\end{verbatim}
-You can actually use \verb+ccmake+ (available on most Unix platforms,
-if the curses library is supported) directly in the first step, but then
-most of the status messages will merely blink in the lower part
-of the terminal rather than be written to standard out. Most platforms
-including Linux, Windows, and Mac OS X even have native graphical user interfaces for
-\cmake{}, and it can create project files for almost any build environment
-you want (including Visual Studio or Xcode).
-Check out \url{http://www.cmake.org/cmake/help/runningcmake.html} for
-general advice on what you are seeing and how to navigate and change
-things. The settings you might normally want to change are already
-presented. If you make any changes, then \verb+ccmake+ will notice
-that and require that you re-configure (using '\verb+c+'), so that it
-gets a chance to make changes that depend on yours and perform more
-checking. This might require several configuration stages when you are
-using \verb+ccmake+ - when you are using \verb+cmake+ the
-iteration is done behind the scenes.
-
-A key thing to consider here is the setting of
-\verb+CMAKE_INSTALL_PREFIX+. You will need to be able to write to this
-directory in order to install \gromacs{} later, and if you change your
-mind later, changing it in the cache triggers a full re-build,
-unfortunately. So if you do not have super-user privileges on your
-machine, then you will need to choose a sensible location within your
-home directory for your \gromacs{} installation. Even if you do have
-super-user privileges, you should use them only for the installation
-phase, and never for configuring, building, or running \gromacs{}!
-
-When \verb+cmake+ or \verb+ccmake+ have completed iterating, the
-cache is stable and a build tree can be generated, with '\verb+g+' in
-\verb+ccmake+ or automatically with \verb+cmake+.
-
-You cannot attempt to change compilers after the initial run of
-\cmake{}. If you need to change, clean up and start again.
-
-\subsection{Using CMake command-line options}
-Once you become comfortable with setting and changing options, you
-may know in advance how you will configure \gromacs{}. If so, you can
-speed things up by invoking \verb+cmake+ with a command like:
-\begin{verbatim}
-$ cmake .. -DGMX_GPU=ON -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs
-\end{verbatim}
-to build with GPUs and MPI, and to install in a custom location. You can
-save that in a shell script to make it easier next time. You can
-also do this kind of thing with \verb+ccmake+, but you should avoid
-it, because the options set with '\verb+-D+' cannot then be
-changed interactively in that run of \verb+ccmake+.
-
-\subsection{SIMD support}
-\gromacs{} has extensive support for detecting and using the SIMD
-capabilities of nearly all modern HPC CPUs. If you are building
-\gromacs{} on the same hardware you will run it on, then you don't
-need to read more about this. Otherwise, you may wish to choose the
-value of \verb+GMX_SIMD+ to match the execution environment. If you
-make no choice, the default will be based on the computer on which you
-are running \cmake{}. Valid values are listed below, and the
-applicable value lowest on the list is generally the one you should
-choose:
-\begin{enumerate}
-\item \verb+None+ For use only on an architecture either lacking SIMD,
- or to which \gromacs{} has not yet been ported and none of the
- options below are applicable.
-\item \verb+SSE2+ Essentially all x86 machines in existence have this
-\item \verb+SSE4.1+ More recent x86 have this
-\item \verb+AVX_128_FMA+ More recent AMD x86 have this
-\item \verb+AVX_256+ More recent Intel x86 have this
-\item \verb+AVX2_256+ Yet more recent Intel x86 have this
-\item \verb+IBM_QPX+ BlueGene/Q A2 cores have this
-\item \verb+Sparc64_HPC_ACE+ Fujitsu machines like the K computer have this
-\end{enumerate}
-The \cmake{} configure system will check that the compiler you have
-chosen can target the architecture you have chosen. mdrun will check
-further at runtime, so if in doubt, choose the lowest setting you
-think might work, and see what mdrun says. The configure system also
-works around many known issues in many versions of common HPC
-compilers.
-
-A further \verb+GMX_SIMD=Reference+ option exists, which is a special
-SIMD-like implementation written in plain C that developers can use
-when developing support in GROMACS for new SIMD architectures. It is
-not designed for use in production simulations, but if you are using
-an architecture with SIMD support to which \gromacs{} has not yet been
-ported, you may wish to try the performance of this option, in case
-the auto-vectorization in your compiler does a good job. And post on
-the \gromacs{} mailing lists, because \gromacs{} can probably be
-ported for new SIMD architectures in a few days.
-
-\subsection{CMake advanced options}
-The options that can be seen with \verb+ccmake+ are ones that we
-think a reasonable number of users might want to consider
-changing. There are a lot more options available, which you can see by
-toggling the advanced mode in \verb+ccmake+ on and off with
-'\verb+t+'. Even there, most of the variables that you might want to
-change have a '\verb+CMAKE_+' or '\verb+GMX_+' prefix. There are also
-some options that will be visible or not according to whether
-their preconditions are satisfied.
-
-\subsubsection{Portability aspects}
-Here, we consider portability aspects related to CPU instruction sets,
-for details on other topics like static vs.\ dynamic linking,
-please consult the relevant parts of this documentation or other
-non-\gromacs{}-specific resources.
-
-Most often a \gromacs{} build will by default not be portable,
-not even across hardware with the same base instruction set like x86.
-The reason for this is that hardware-specific optimizations are selected
-at configure-time, like the SIMD instruction set used in the compute-kernels.
-This selection will be done by the build system based on the capabilities
-of the build host machine or based on cross-compilation information provided
-to \cmake{} at configuration.
-
-Often it is possible to ensure portability by choosing the
-least common denominator of SIMD support, e.g. SSE2 for x86,
-and ensuring the \cmake{} option \verb+GMX_USE_RDTSCP+ is off if any of the
-target CPU architectures does not support the \verb+RDTSCP+ instruction.
-However, we discourage attempts to use a single \gromacs{}
-installation when the execution environment is heterogeneous, such as
-a mix of \avx{} and earlier hardware, because this will lead to slow
-binaries (especially \verb+mdrun+) on the new hardware.
-Building two full installations and locally managing how to
-call the correct one (e.g. using the module system) is the recommended
-approach.
-Alternatively, as at the moment the \gromacs{} tools do not make
-strong use of SIMD acceleration, it can be convenient to create an installation
-with tools portable across different x86 machines, but with separate \verb+mdrun+
-binaries for each architecture.
-To achieve this, one can first build a full installation with the least common
-denominator SIMD instruction set, e.g. SSE2, then build separate \verb+mdrun+
-binaries for each architecture present in the heterogeneous environment.
-By using custom binary and library suffixes for the \verb+mdrun+-only builds,
-these can be installed to the same location as the ``generic'' tools installation.
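The two-step approach described above might be sketched as follows; the install prefix, suffix, and SIMD levels are illustrative, and the commands are only echoed here rather than executed:

```shell
# Dry-run sketch: portable tools install plus a node-specific mdrun-only build.
run() { echo "+ $*"; }
# 1) Full installation with the least-common-denominator SIMD level:
run cmake ../gromacs -DGMX_SIMD=SSE2 -DCMAKE_INSTALL_PREFIX=/opt/gromacs
run make install
# 2) mdrun-only build for the AVX nodes, distinguished by a suffix:
run cmake ../gromacs -DGMX_SIMD=AVX_256 -DGMX_BUILD_MDRUN_ONLY=ON \
    -DGMX_BINARY_SUFFIX=_avx -DGMX_LIBS_SUFFIX=_avx \
    -DCMAKE_INSTALL_PREFIX=/opt/gromacs
run make install
```

Each build should be configured in its own build directory; the suffixed mdrun binaries then coexist with the generic tools under the same prefix.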
-
-
-\subsection{Helping CMake find the right libraries/headers/programs}
-
-If libraries are installed in non-default locations, their location can
-be specified using the following environment variables:
-\begin{itemize}
-\item \verb+CMAKE_INCLUDE_PATH+ for header files
-\item \verb+CMAKE_LIBRARY_PATH+ for libraries
-\item \verb+CMAKE_PREFIX_PATH+ for header, libraries and binaries
- (e.g. '\verb+/usr/local+').
-\end{itemize}
-The respective '\verb+include+', '\verb+lib+', or '\verb+bin+' is
-appended to the path. For each of these variables, a list of paths can
-be specified (on Unix separated with ":"). Note that these are
-environment variables (and not \cmake{} command-line arguments) and in
-a '\verb+bash+' shell are used like:
-\begin{verbatim}
-$ CMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda cmake ..
-\end{verbatim}
-
-The \verb+CC+ and \verb+CXX+ environment variables are also useful
-for indicating to \cmake{} which compilers to use, which can be very
-important for maximising \gromacs{} performance. Similarly,
-\verb+CFLAGS+/\verb+CXXFLAGS+ can be used to pass compiler
-options, but note that these will be appended to those set by
-\gromacs{} for your build platform and build type. You can customize
-some of this with advanced options such as \verb+CMAKE_C_FLAGS+
-and its relatives.
-
-See also: \url{http://cmake.org/Wiki/CMake_Useful_Variables#Environment_Variables}
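The `CC=... CXX=... cmake ..` form works because variables set on the command line in this way are exported only to that one child process. A runnable sketch, with `sh -c` standing in for the cmake invocation and example compiler names:

```shell
# The child process (a shell here, standing in for cmake) sees the variables;
# the parent shell's environment is left untouched.
CC=gcc-4.8 CXX=g++-4.8 sh -c 'echo "child sees CC=$CC CXX=$CXX"'
# prints: child sees CC=gcc-4.8 CXX=g++-4.8
```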
-
-\subsection{Linear algebra libraries}\hypertarget{linear-algebra}{}
-As mentioned above, sometimes vendor \blas{} and \lapack{} libraries
-can provide performance enhancements for \gromacs{} when doing
-normal-mode analysis or covariance analysis. For simplicity, the text
-below will refer only to \blas{}, but the same options are available
-for \lapack{}. By default, \cmake{} will search for \blas{}, use it if it
-is found, and otherwise fall back on a version of \blas{} internal to
-\gromacs{}. The \cmake{} option \verb+GMX_EXTERNAL_BLAS+ will be set
-accordingly. The internal versions are fine for normal use. If you
-need to specify a non-standard path to search, use
-\verb+-DCMAKE_PREFIX_PATH=/path/to/search+. If you need to specify a
-library with a non-standard name (e.g. ESSL on AIX or BlueGene), then
-set \verb+-DGMX_BLAS_USER=/path/to/reach/lib/libwhatever.a+.
-
-If you are using Intel's \mkl{} for \fft{}, then the \blas{} and
-\lapack{} it provides are used automatically. This could be
-overridden with \verb+GMX_BLAS_USER+, etc.
-
-On Apple platforms where the Accelerate Framework is available, these
-will be automatically used for \blas{} and \lapack{}. This could be
-overridden with \verb+GMX_BLAS_USER+, etc.
-
-\subsection{Native GPU acceleration}
-If you have the \cuda{} Software Development Kit installed, you can
-use \cmake{} with:
-\begin{verbatim}
-cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
-\end{verbatim}
-(or whichever path has your installation). Note that this will require
-a working C++ compiler, and in some cases you might need to handle
-this manually, e.g. with the advanced option
-\verb+CUDA_HOST_COMPILER+.
-
-Historically, Linux GPU builds have received the most testing, but we
-want to support GPU builds under x86 Linux, Windows, Mac OS X, and, in
-the future, ARM. Any feedback on this build process (and particularly
-fixes!) is very welcome.
-
-\subsection{Static linking}
-Dynamic linking of the \gromacs{} executables will lead to a
-smaller disk footprint when installed, and so is the default on
-platforms where we believe it has been tested repeatedly and found to work.
-In general, this includes Linux, Windows, Mac OS X and BSD systems.
-Static binaries take much more space, but on some hardware and/or under
-some conditions they are necessary, most commonly when you are running a parallel
-simulation using MPI libraries (e.g. BlueGene, Cray).
-
-\begin{itemize}
-\item To link \gromacs{} binaries
-statically against the internal \gromacs{} libraries, set
-\verb+BUILD_SHARED_LIBS=OFF+.
-\item To link statically against external
-libraries as well, the \verb+GMX_PREFER_STATIC_LIBS=ON+ option can be
-used. Note that, in general, \cmake{} picks up whatever is available,
-so this option only instructs \cmake{} to prefer static libraries when
-both static and shared are available. If no static version of an
-external library is available, even when the aforementioned option is
-\verb+ON+, the shared library will be used. Also note that the resulting
-binaries will still be dynamically linked against system libraries if
-that is all that is available (common on Mac OS X).
-\end{itemize}
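Putting the items above together, a fully static configuration might look like this (a sketch; combine with whatever other options you need):

```shell
# Prefer static linking both for the internal GROMACS libraries
# and against external libraries, where static versions exist.
cmake .. -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON
```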
-
-\subsection{Changing the names of GROMACS binaries and libraries}
-It is sometimes convenient to have different versions of the same
-\gromacs{} libraries installed. The most common use cases have been
-single and double precision, and with and without \mpi{}. By default,
-\gromacs{} will suffix binaries and libraries for such builds with
-'\verb+_d+' for double precision and/or '\verb+_mpi+' for \mpi{} (and
-nothing otherwise). This can be controlled manually with
-\verb+GMX_DEFAULT_SUFFIX (ON/OFF)+, \verb+GMX_BINARY_SUFFIX+ (takes
-a string) and \verb+GMX_LIBS_SUFFIX+ (also takes a string).
-This can also be useful for resolving library-naming conflicts with
-existing packages (\verb+GMX_PREFIX_LIBMD+ can also be useful).
-For instance, to set a custom suffix for binaries and libraries,
-one might specify:
-
-\begin{verbatim}
-cmake .. -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod
-\end{verbatim}
-
-Thus the names of all binaries and libraries will be appended with
-"\_mod."
-
-\subsection{Building \gromacs{}}
-
-Once you have a stable cache, you can build \gromacs{}. If you're not
-sure the cache is stable, you can re-run \verb+cmake ..+ or
-\verb+ccmake ..+ to see. Then you can run \verb+make+ to start the
-compilation. Before actual compilation starts, \verb+make+ checks
-that the cache is stable, so if it isn't you will see \cmake{} run
-again.
-
-So long as any changes you've made to the configuration are sensible,
-it is expected that the \verb+make+ procedure will always complete
-successfully, and give few or no warnings. The tests \gromacs{} makes
-on the settings you choose are pretty extensive, but there are probably
-a few cases we haven't thought of yet. Search the web first for
-solutions to problems, but if you need help, ask on gmx-users, being
-sure to provide as much information as possible about what you did,
-the system you are building on, and what went wrong. This may mean
-scrolling back a long way through the output of \verb+make+ to find
-the first error message!
-
-If you have a multi-core or multi-CPU machine with \verb+N+
-processors, then using
-\begin{verbatim}
-$ make -j N
-\end{verbatim}
-will generally speed things up by quite a bit. Other make systems
-supported by \cmake{} (e.g. ninja) also work well.
-
-\subsubsection{Building only mdrun}
-
-Past versions of \gromacs{} had the ability to \verb+make mdrun+ to
-build just mdrun (and a matching install instruction). Such a build is
-useful when the configuration is only relevant for mdrun (such as with
-\mpi{} and/or GPUs, or on BlueGene or Cray), or the length of time for
-the compile-link-install cycle is relevant when developing. This is
-now supported with the \cmake{} option
-\verb+-DGMX_BUILD_MDRUN_ONLY=ON+, which will build a cut-down version
-of \verb+libgromacs+ and/or the \verb+mdrun+ binary (according to
-whether shared or static). Naturally, \verb+make install+ installs
-only those binaries. A fresh build tree with this variable
-set will default to building statically, because this is generally a
-good idea for the targets for which an mdrun-only build is
-desirable. If you re-use a build tree and change to the mdrun-only
-build, then you will inherit the setting for \verb+BUILD_SHARED_LIBS+
-from the old build, and will be warned that you may wish to manage
-\verb+BUILD_SHARED_LIBS+ yourself.
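A minimal mdrun-only build is simply:

```shell
# Build and install only a cut-down libgromacs and the mdrun binary
cmake .. -DGMX_BUILD_MDRUN_ONLY=ON
make
make install
```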
-
-\subsection{Installing \gromacs{}}
-
-Finally, \verb+make install+ will install \gromacs{} in the
-directory given in \verb+CMAKE_INSTALL_PREFIX+. If this is a system
-directory, then you will need permission to write there, and you
-should use super-user privileges only for \verb+make install+ and
-not the whole procedure.
-
-\subsection{Getting access to \gromacs{} after installation}
-
-\gromacs{} installs the script \verb+GMXRC+ in the \verb+bin+
-subdirectory of the installation directory
-(e.g. \verb+/usr/local/gromacs/bin/GMXRC+), which you should source
-from your shell:
-\begin{verbatim}
-$ source /your/installation/prefix/here/bin/GMXRC
-\end{verbatim}
-
-It will detect what kind of shell you are running and set up your
-environment for using \gromacs{}. You may wish to arrange for your
-login scripts to do this automatically; please search the web for
-instructions on how to do this for your shell.
-
-Many of the \gromacs{} programs rely on data installed in the
-\verb+share/gromacs+ subdirectory of the installation directory. By
-default, the programs will use the environment variables set in the
-\verb+GMXRC+ script, and if this is not available they will try to guess the
-path based on their own location. This usually works well unless you
-change the names of directories inside the install tree. If you still
-need to do that, you might want to recompile with the new install
-location properly set, or edit the \verb+GMXRC+ script.
-
-\subsection{Testing \gromacs{} for correctness}\label{testing}
-Since 2011, the \gromacs{} development uses an automated system where
-every new patch is subject to regression testing. While this improves
-reliability quite a lot, not everything is tested, and since we
-increasingly rely on cutting edge compiler features there is
-non-negligible risk that the default compiler on your system could
-have bugs. We have done our best to detect and reject known bad compiler
-versions in \cmake{}, but we strongly recommend that you run through
-the tests yourself. It only takes a few minutes, after which you can
-trust your build.
-
-The simplest way to run the checks is to build \gromacs{} with
-\verb+-DREGRESSIONTEST_DOWNLOAD+, and run \verb+make check+.
-\gromacs{} will automatically download and run the tests for you.
-Alternatively, you can download and unpack the tarball yourself from
-\url{http://gerrit.gromacs.org/download/regressiontests-5.0-beta1.tar.gz},
-and use the advanced \cmake{} option \verb+REGRESSIONTEST_PATH+ to
-specify the path to the unpacked tarball, which will then be used for
-testing. If the above doesn't work, then please read on.
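The automatic route sketched above amounts to:

```shell
# Let CMake download the matching regression test tarball,
# then build GROMACS and run the tests.
cmake .. -DREGRESSIONTEST_DOWNLOAD=ON
make check
```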
-
-The regression tests are available from the \gromacs{} website and ftp
-site. Once you have downloaded them, unpack the tarball, source
-\verb+GMXRC+ as described above, and run \verb+./gmxtest.pl all+
-inside the regression tests folder. You can find more options
-(e.g. adding \verb+double+ when using double precision, or
-\verb+-only expanded+ to run just the tests whose names match
-``expanded'') if you just execute the script without options.
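For example, based on the options described above (run the script without options to confirm the exact syntax for your version):

```shell
# Run all tests against a double-precision build
./gmxtest.pl all double
# Run only the tests whose names match "expanded"
./gmxtest.pl -only expanded
```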
-
-Hopefully, you will get a report that all tests have passed. If there
-are individual failed tests it could be a sign of a compiler bug, or
-that a tolerance is just a tiny bit too tight. Check the output files
-the script directs you to, and try a different or newer compiler if
-the errors appear to be real. If you cannot get it to pass the
-regression tests, you might try dropping a line to the gmx-users
-mailing list, but then you should include a detailed description of
-your hardware, and the output of \verb+mdrun -version+ (which contains
-valuable diagnostic information in the header).
-
-A build with \verb+-DGMX_BUILD_MDRUN_ONLY+ cannot be tested with
-\verb+make check+ from the build tree, because most of the tests
-require a full build to run things like \verb+grompp+. To test such an
-mdrun fully requires installing it to the same location as a normal
-build of \gromacs{}, downloading the regression tests tarball manually
-as described above, sourcing the correct \verb+GMXRC+ and running the
-perl script manually. For example, from your \gromacs{} source
-directory:
-\begin{verbatim}
-mkdir build-normal
-cd build-normal
-cmake .. -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
-make -j 4
-make install
-cd ..
-mkdir build-mdrun-only
-cd build-mdrun-only
-cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
-make -j 4
-make install
-cd /to/your/unpacked/regressiontests
-source /your/installation/prefix/here/bin/GMXRC
-./gmxtest.pl all -np 2
-\end{verbatim}
-
-\subsection{Testing \gromacs{} for performance}
-We are still working on a set of benchmark systems for testing
-the performance of \gromacs{}. Until that is ready, we recommend that
-you try a few different parallelization options, and experiment with
-tools such as \verb+gmx tune_pme+.
-
-\subsection{Having difficulty?}
-You're not alone - this can be a complex task! If you encounter a
-problem with installing \gromacs{}, then there are a number of
-locations where you can find assistance. It is recommended that you
-follow these steps to find the solution:
-
-\begin{enumerate}
-\item Read the installation instructions again, taking note that you
- have followed each and every step correctly.
-\item Search the \gromacs{} website and users emailing list for
- information on the error.
-\item Search the internet using a search engine such as Google.
-\item Post to the \gromacs{} users emailing list gmx-users for
- assistance. Be sure to give a full description of what you have done
- and why you think it didn't work. Give details about the system on
- which you are installing.
- Copy and paste your command line and as
- much of the output as you think might be relevant - certainly from
- the first indication of a problem. In particular, please try to include at
- least the header from the mdrun logfile, and preferably the entire file.
- People who might volunteer to
- help you do not have time to ask you interactive detailed follow-up
- questions, so you will get an answer faster if you provide as much
- information as you think could possibly help. High quality bug reports
- tend to receive rapid high quality answers.
-\end{enumerate}
-
-\section{Special instructions for some platforms}
-
-\subsection{Building on Windows}
-Building on Windows using native compilers is rather similar to
-building on Unix, so please start by reading the above. Then, download
-and unpack the GROMACS source archive. The UNIX-standard .tar.gz
-format can be managed on Windows, but you may prefer to browse
-\url{ftp://ftp.gromacs.org/pub/gromacs} to obtain a zip format file,
-which doesn't need any external tools to unzip on recent Windows
-systems. Make a folder in which to do the out-of-source build of
-\gromacs{}. For example, make it within the folder unpacked from the
-source archive, and call it ``build-cmake''.
-
-For \cmake{}, you can either use the graphical user interface provided
-on Windows, or you can use a command line shell with instructions
-similar to the UNIX ones above. If you open a shell from within
-your IDE (e.g. Microsoft Visual Studio), it will configure the
-environment for you, but you might need to tweak this in order to
-get either a 32-bit or 64-bit build environment. The latter provides the
-fastest executable. If you use a normal Windows command shell, then
-you will need to either set up the environment to find your compilers
-and libraries yourself, or run the \verb+vcvarsall.bat+ batch script
-provided by MSVC (just like sourcing a bash script under
-Unix).
-
-With the graphical user interface you will be asked about what compilers
-to use at the initial configuration stage, and if you use the command line
-they can be set in a similar way as under UNIX.
-You will probably make your life easier and faster by using the
-\verb+GMX_BUILD_OWN_FFTW+ facility to download and build \fftw{} automatically.
-
-For the build, you can either load the generated solutions file into
-e.g. Visual Studio, or use the command line with \verb+cmake --build .+
-so the right tools get used.
-
-\subsection{Building on Cray}
-
-\gromacs{} builds mostly out of the box on modern Cray machines,
-but you will want to use static libraries due to the peculiarities of
-parallel job execution on such machines.
-
-\subsection{Building on BlueGene}
-
-\subsubsection{BlueGene/P}
-
-There is currently no SIMD support on this platform and no plans to
-add it. The default plain C kernels will work.
-
-\subsubsection{BlueGene/Q}
-
-There is currently native acceleration on this platform for the Verlet
-cut-off scheme. There are no plans to provide accelerated kernels for
-the group cut-off scheme, but the default plain C kernels will work.
-
-Only static linking with XL compilers is supported by \gromacs{}. Dynamic
-linking would be supported by the architecture and \gromacs{}, but has no
-advantages other than disk space, and is generally discouraged on
-BlueGene for performance reasons.
-
-Computation on BlueGene floating-point units is always done in
-double-precision. However, single-precision builds of \gromacs{} are
-still normal and encouraged since they use cache more efficiently.
-The BlueGene hardware automatically
-converts values stored in single precision in memory to double
-precision in registers for computation, converts the results back to
-single precision correctly, and does so for no additional cost. As
-with other platforms, doing the whole computation in double precision
-normally shows no improvement in accuracy and costs twice as much time
-moving memory around.
-
-You need to arrange for FFTW to be installed correctly, following the
-above instructions.
-
-mpicc is used for compiling and linking. This can make it awkward to
-attempt to use IBM's optimized BLAS/LAPACK called ESSL (see the
-section on linear algebra). Since mdrun is the only part of \gromacs{}
-that should normally run on the compute nodes, and there is nearly no
-need for linear algebra support for mdrun, it is recommended to use
-the \gromacs{} built-in linear algebra routines - it is rare for this
-to be a bottleneck.
-
-The recommended configuration is to use
-\begin{verbatim}
-cmake .. -DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ-static-XL-CXX \
- -DCMAKE_PREFIX_PATH=/your/fftw/installation/prefix \
- -DGMX_MPI=ON \
- -DGMX_BUILD_MDRUN_ONLY=ON
-make
-make install
-\end{verbatim}
-which will build a statically-linked \mpi{}-enabled mdrun for the back
-end. Otherwise, GROMACS default configuration behaviour applies.
-
-It is possible to configure and make the remaining \gromacs{} tools
-with the compute-node toolchain, but as none of those tools are
-\mpi{}-aware and could then only run on the compute nodes, this
-would not normally be useful. Instead, these should be planned
-to run on the login node, and a separate \gromacs{} installation
-performed for that using the login node's toolchain - not the
-above platform file, or any other compute-node toolchain.
-
-Note that only the MPI build is available for the compute-node
-toolchains. The GROMACS thread-MPI or no-MPI builds are not useful at
-all on BlueGene/Q.
-
-\subsubsection{Fujitsu PRIMEHPC}
-
-This is the architecture of the K computer, which uses Fujitsu
-SPARC64 VIIIfx chips. On this platform \gromacs{} \gromacsversion{} has
-accelerated group kernels, no accelerated Verlet kernels, and a custom
-build toolchain.
-
-\subsubsection{Intel Xeon Phi}
-
-\gromacs{} 5.0 has preliminary support for Intel Xeon Phi. Only symmetric
-(aka native) mode is supported. \gromacs{} is functional on Xeon Phi,
-but it has so far not been optimized to the same level as other
-architectures have. The performance depends among other factors on the
-system size (see ``Running in parallel''), and for now the performance
-might not be faster than on CPUs. Building for Xeon Phi works much as
-on any other Unix; see the instructions above for details. The recommended
-configuration is
-\begin{verbatim}
-cmake .. -DCMAKE_TOOLCHAIN_FILE=Platform/XeonPhi
-make
-make install
-\end{verbatim}
-
-\section{Tested platforms}
-
-While it is our best belief that \gromacs{} will build and run pretty
-much everywhere, it's important that we tell you where we really know
-it works because we've tested it. We do test on Linux, Windows, and
-Mac with a range of compilers and libraries for a range of our
-configuration options. Every commit in our git source code
-repository is currently tested on x86 with gcc versions ranging
-from 4.4 through 4.8, and versions 12 and 13 of the Intel compiler as
-well as Clang version 3.1 through 3.3. For this we use a variety of GNU/Linux
-flavors and versions as well as recent versions of Mac OS X.
-Under Windows we test both MSVC and the Intel compiler. For details, you can
-have a look at the continuous integration server at \url{http://jenkins.gromacs.org}.
-
-We test irregularly on BlueGene/Q, Cray,
-Fujitsu PRIMEHPC, Google Native Client and other environments. In
-the future we expect ARM to be an important test target too, but this
-is currently not included.
-
-Contributions to this section are welcome.
-
-If there is interest, we might set up the ability for users to
-contribute test results to Jenkins.
-
-\end{document}
--- /dev/null
+#
+# This file is part of the GROMACS molecular simulation package.
+#
+# Copyright (c) 2014, by the GROMACS development team, led by
+# Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
+# and including many others, as listed in the AUTHORS file in the
+# top-level source directory and at http://www.gromacs.org.
+#
+# GROMACS is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public License
+# as published by the Free Software Foundation; either version 2.1
+# of the License, or (at your option) any later version.
+#
+# GROMACS is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with GROMACS; if not, see
+# http://www.gnu.org/licenses, or write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# If you want to redistribute modifications to GROMACS, please
+# consider that scientific software is very special. Version
+# control is crucial - bugs must be traceable. We will be happy to
+# consider code for inclusion in the official distribution, but
+# derived work must not be called official GROMACS. Details are found
+# in the README & COPYING files - if they are missing, get the
+# official version at http://www.gromacs.org.
+#
+# To help us fund GROMACS development, we humbly ask that you cite
+# the research papers on the package. Check out http://www.gromacs.org.
+
+# This module looks for Pandoc, and sets PANDOC_EXECUTABLE to the
+# location of its binary.
+#
+# It respects the variable Pandoc_FIND_QUIETLY
+
+include(FindPackageHandleStandardArgs)
+
+if(Pandoc_FIND_QUIETLY OR DEFINED PANDOC_EXECUTABLE)
+ set(PANDOC_FIND_QUIETLY TRUE)
+endif()
+
+find_program(PANDOC_EXECUTABLE
+ NAMES pandoc
+ DOC "Pandoc - a universal document converter")
+
+FIND_PACKAGE_HANDLE_STANDARD_ARGS(Pandoc REQUIRED_VARS PANDOC_EXECUTABLE)
+
+mark_as_advanced(PANDOC_EXECUTABLE)
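A caller might use this module along the following lines (a sketch; the `manual.md`/`manual.html` file names are illustrative):

```cmake
# Locate Pandoc; on success PANDOC_EXECUTABLE holds the binary path
find_package(Pandoc)
if(PANDOC_EXECUTABLE)
    # Convert an illustrative Markdown source to HTML at build time
    add_custom_command(
        OUTPUT manual.html
        COMMAND ${PANDOC_EXECUTABLE} -t html -o manual.html manual.md
        DEPENDS manual.md
        VERBATIM)
endif()
```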
# Noise is acceptable when there is a GPU or the user required one.
set(FIND_CUDA_QUIETLY QUIET)
endif()
- # We support CUDA >=v4.0 on *nix, but <= v4.1 doesn't work with MSVC
- if(MSVC)
- find_package(CUDA 4.1 ${FIND_CUDA_QUIETLY})
- else()
- find_package(CUDA 4.0 ${FIND_CUDA_QUIETLY})
- endif()
+ find_package(CUDA ${REQUIRED_CUDA_VERSION} ${FIND_CUDA_QUIETLY})
+
# Cmake 2.8.12 (and CMake 3.0) introduced a new bug where the cuda
# library dir is added twice as an rpath on APPLE, which in turn causes
# the install_name_tool to wreck the binaries when it tries to remove this
https://developer.nvidia.com/cuda-gpus")
endif()
- set(CUDA_NOTFOUND_MESSAGE "mdrun supports native GPU acceleration on NVIDIA hardware with compute capability >=2.0 (Fermi or later). This requires the NVIDIA CUDA toolkit, which was not found. Its location can be hinted by setting the CUDA_TOOLKIT_ROOT_DIR CMake option (does not work as an environment variable). The typical location would be /usr/local/cuda[-version]. Note that CPU or GPU acceleration can be selected at runtime.
+ set(CUDA_NOTFOUND_MESSAGE "mdrun supports native GPU acceleration on NVIDIA hardware with compute capability >= ${REQUIRED_CUDA_COMPUTE_CAPABILITY} (Fermi or later). This requires the NVIDIA CUDA toolkit, which was not found. Its location can be hinted by setting the CUDA_TOOLKIT_ROOT_DIR CMake option (does not work as an environment variable). The typical location would be /usr/local/cuda[-version]. Note that CPU or GPU acceleration can be selected at runtime.
${_msg}")
unset(_msg)
--- /dev/null
+#
+# This file is part of the GROMACS molecular simulation package.
+#
+# Copyright (c) 2014, by the GROMACS development team, led by
+# Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
+# and including many others, as listed in the AUTHORS file in the
+# top-level source directory and at http://www.gromacs.org.
+#
+# GROMACS is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public License
+# as published by the Free Software Foundation; either version 2.1
+# of the License, or (at your option) any later version.
+#
+# GROMACS is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with GROMACS; if not, see
+# http://www.gnu.org/licenses, or write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# If you want to redistribute modifications to GROMACS, please
+# consider that scientific software is very special. Version
+# control is crucial - bugs must be traceable. We will be happy to
+# consider code for inclusion in the official distribution, but
+# derived work must not be called official GROMACS. Details are found
+# in the README & COPYING files - if they are missing, get the
+# official version at http://www.gromacs.org.
+#
+# To help us fund GROMACS development, we humbly ask that you cite
+# the research papers on the package. Check out http://www.gromacs.org.
+
+set(INSTALL_GUIDE_BUILD_IS_POSSIBLE OFF)
+if(NOT CMAKE_BINARY_DIR STREQUAL CMAKE_SOURCE_DIR)
+    # We can only build the install guide outside of the source dir
+ find_package(Pandoc)
+ if(PANDOC_EXECUTABLE)
+ set(INSTALL_GUIDE_BUILD_IS_POSSIBLE ON)
+ endif()
+endif()
+
+if(INSTALL_GUIDE_BUILD_IS_POSSIBLE)
+ # Do replacement of CMake variables for version strings, etc.
+ configure_file(configure-install-guide.cmake.in
+ ${CMAKE_CURRENT_BINARY_DIR}/configure-install-guide.cmake
+ @ONLY)
+
+ # This defers until build time the configuration of
+ # install-guide.md, which could be faster
+ add_custom_command(
+ OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/install-guide.md
+ COMMAND ${CMAKE_COMMAND}
+ -P ${CMAKE_CURRENT_BINARY_DIR}/configure-install-guide.cmake
+ DEPENDS
+ ${CMAKE_CURRENT_BINARY_DIR}/configure-install-guide.cmake
+ ${CMAKE_CURRENT_SOURCE_DIR}/install-guide.md
+ COMMENT "Configuring install guide"
+ VERBATIM
+ )
+
+ # Make the HTML install guide
+ add_custom_command(
+ OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/install-guide.html
+ COMMAND pandoc -t html -o ${CMAKE_CURRENT_BINARY_DIR}/install-guide.html install-guide.md -s --toc
+ DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/install-guide.md
+ VERBATIM
+ )
+
+ # Make the INSTALL file for CPack for the tarball
+ add_custom_command(
+ OUTPUT ${CMAKE_BINARY_DIR}/INSTALL
+ COMMAND pandoc -t plain -o ../INSTALL install-guide.md
+ DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/install-guide.md
+ VERBATIM
+ )
+
+ # Add a top-level target for the others to hook onto
+ add_custom_target(install-guide
+ DEPENDS
+ ${CMAKE_CURRENT_BINARY_DIR}/install-guide.html
+ ${CMAKE_BINARY_DIR}/INSTALL
+ VERBATIM
+ )
+endif()
--- /dev/null
+# Helper script that defers configure_file until build time, so
+# that changes to the files configured here don't trigger
+# a global reconfigure
+
+set(SRC_DIR "@CMAKE_CURRENT_SOURCE_DIR@")
+set(BIN_DIR "@CMAKE_CURRENT_BINARY_DIR@")
+
+set(PROJECT_VERSION "@PROJECT_VERSION@")
+set(GMX_CMAKE_MINIMUM_REQUIRED_VERSION "@GMX_CMAKE_MINIMUM_REQUIRED_VERSION@")
+set(REQUIRED_CUDA_VERSION "@REQUIRED_CUDA_VERSION@")
+set(REQUIRED_CUDA_COMPUTE_CAPABILITY "@REQUIRED_CUDA_COMPUTE_CAPABILITY@")
+set(REGRESSIONTEST_VERSION "@REGRESSIONTEST_VERSION@")
+
+configure_file(${SRC_DIR}/install-guide.md
+ ${BIN_DIR}/install-guide.md @ONLY)
--- /dev/null
+% Installation guide for GROMACS @PROJECT_VERSION@
+
+# Building GROMACS #
+
+These instructions pertain to building GROMACS
+@PROJECT_VERSION@. Up-to-date installation instructions may be found
+at <http://www.gromacs.org/Documentation/Installation_Instructions>.
+
+# Quick and dirty installation #
+
+1. Get the latest version of your C and C++ compilers.
+2. Check that you have CMake version @GMX_CMAKE_MINIMUM_REQUIRED_VERSION@ or later.
+3. Get and unpack the latest version of the GROMACS tarball.
+4. Make a separate build directory and change to it.
+5. Run `cmake` with the path to the source as an argument.
+6. Run `make` and `make install`.
+
+Or, as a sequence of commands to execute:
+
+ tar xfz gromacs-@PROJECT_VERSION@.tar.gz
+ cd gromacs-@PROJECT_VERSION@
+ mkdir build
+ cd build
+ cmake .. -DGMX_BUILD_OWN_FFTW=ON
+ make
+ sudo make install
+ source /usr/local/gromacs/bin/GMXRC
+
+This will download and build first the prerequisite FFT library
+followed by GROMACS. If you already have FFTW installed, you can
+remove that argument to `cmake`. Overall, this build of GROMACS will
+be correct and reasonably fast on the machine upon which `cmake`
+ran. If you want to get the maximum value for your hardware with
+GROMACS, you will have to read further. Sadly, the interactions of
+hardware, libraries, and compilers are only going to continue to get
+more complex.
+
+# Typical GROMACS installation #
+
+As above, and with further details below, but you should consider
+using the following [CMake options](#using-cmake-command-line-options) with the
+appropriate value instead of `xxx`:
+
+* `-DCMAKE_C_COMPILER=xxx` equal to the name of the C99 [compiler](#compiler) you wish to use (or the environment variable `CC`)
+* `-DCMAKE_CXX_COMPILER=xxx` equal to the name of the C++98 [compiler](#compiler) you wish to use (or the environment variable `CXX`)
+* `-DGMX_MPI=on` to build using an [MPI](#mpi-support) wrapper compiler
+* `-DGMX_GPU=on` to build using nvcc to run with an NVIDIA [GPU](#native-gpu-acceleration)
+* `-DGMX_SIMD=xxx` to specify the level of [SIMD support](#simd-support) of the node on which `mdrun` will run
+* `-DGMX_BUILD_MDRUN_ONLY=on` to [build only the `mdrun` binary](#building-only-mdrun), e.g. for compute cluster back-end nodes
+* `-DGMX_DOUBLE=on` to run GROMACS in double precision (slower, and not normally useful)
+* `-DCMAKE_PREFIX_PATH=xxx` to add a non-standard location for CMake to [search for libraries](#helping-cmake-find-the-right-librariesheadersprograms)
+* `-DCMAKE_INSTALL_PREFIX=xxx` to install GROMACS to a non-standard location (default `/usr/local/gromacs`)
+* `-DBUILD_SHARED_LIBS=off` to turn off the building of [shared libraries](#static-linking)
+* `-DGMX_FFT_LIBRARY=xxx` to select whether to use `fftw`, `mkl` or `fftpack` libraries for [FFT support](#fast-fourier-transform-library)
+* `-DCMAKE_BUILD_TYPE=Debug` to build GROMACS in debug mode
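Combining several of the options above, a typical MPI-enabled build installed to a custom prefix might be configured as follows (compiler names and the prefix are illustrative):

```shell
# Sketch: MPI wrapper compilers plus a per-user install location;
# adjust the values for your own system.
cmake .. -DCMAKE_C_COMPILER=mpicc \
         -DCMAKE_CXX_COMPILER=mpicxx \
         -DGMX_MPI=on \
         -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
```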
+
+# Building older GROMACS versions #
+
+For installation instructions for old GROMACS versions, see the
+documentation at
+<http://www.gromacs.org/Documentation/Installation_Instructions_4.5>
+and
+<http://www.gromacs.org/Documentation/Installation_Instructions_4.6>
+
+# Prerequisites #
+
+## Platform ##
+
+GROMACS can be compiled for many operating systems and architectures.
+These include any distribution of Linux, Mac OS X or Windows, and
+architectures including x86, AMD64/x86-64, PPC, ARM v7 and SPARC VIII.
+
+## Compiler ##
+
+Technically, GROMACS can be compiled on any platform with an ANSI C99
+and C++98 compiler, and their respective standard C/C++ libraries.
+Getting good performance on an OS and architecture requires choosing a
+good compiler. In practice, many compilers struggle to do a good job
+optimizing the GROMACS architecture-optimized SIMD kernels.
+
+For best performance, the GROMACS team strongly recommends you get the
+most recent version of your preferred compiler for your platform.
+There is a large amount of GROMACS code that depends on effective
+compiler optimization to get high performance. This makes GROMACS
+performance sensitive to the compiler used, and the binary will often
+only work on the hardware for which it is compiled.
+
+* In particular, GROMACS includes a lot of explicit SIMD
+(single instruction, multiple data) optimization that can use
+assembly instructions available on most modern processors. This
+can have a substantial effect on performance, but for recent
+processors you also need a similarly recent compiler that includes
+support for the corresponding SIMD instruction set to get this
+benefit. The configuration does a good job at detecting this,
+and you will usually get warnings if GROMACS and your hardware
+support a more recent instruction set than your compiler.
+
+* On Intel-based x86 hardware, we recommend you to use the GNU
+compilers version 4.7 or later or Intel compilers version 12 or later
+for best performance. The Intel compiler has historically been better
+at instruction scheduling, but recent gcc versions have proved to be
+as fast or sometimes faster than Intel.
+
+* The Intel and GNU compilers produce much faster GROMACS executables
+than the PGI and Cray compilers.
+
+* On AMD-based x86 hardware up through the "K10" microarchitecture
+("Family 10h") Thuban/Magny-Cours architecture (e.g. Opteron
+6100-series processors), it is worth using the Intel compiler for
+better performance, but gcc version 4.7 and later are also reasonable.
+
+* On the AMD Bulldozer architecture (Opteron 6200), AMD introduced
+fused multiply-add instructions and an "FMA4" instruction format not
+available on Intel x86 processors. Thus, on the most recent AMD
+processors you want to use gcc version 4.7 or later for best
+performance! The Intel compiler will only generate code for the subset
+also supported by Intel processors, and that is significantly slower.
+
+* If you are running on Mac OS X, the best option is the Intel
+compiler. Both clang and gcc will work, but they produce lower
+performance and each have some shortcomings. Current Clang does not
+support OpenMP. This may change when clang 3.5 becomes available.
+
+* For all non-x86 platforms, your best option is typically to use the
+vendor's default or recommended compiler, and check for specialized
+information below.
+
+## Compiling with parallelization options ##
+
+GROMACS can run in parallel on multiple cores of a single
+workstation using its built-in thread-MPI. No user action is required
+in order to enable this.
+
+### GPU support ###
+
+If you wish to use the excellent native GPU support in GROMACS,
+NVIDIA's [CUDA](http://www.nvidia.com/object/cuda_home_new.html)
+version @REQUIRED_CUDA_VERSION@ software development kit is required,
+and the latest version is strongly encouraged. NVIDIA GPUs with at
+least NVIDIA compute capability @REQUIRED_CUDA_COMPUTE_CAPABILITY@ are
+required, e.g. Fermi or Kepler cards. You are strongly recommended to
+get the latest CUDA version and driver supported by your hardware, but
+beware of possible performance regressions in newer CUDA versions on
+older hardware. Note that while some CUDA compilers (nvcc) might not
+officially support recent versions of gcc as the back-end compiler, we
+still recommend that you at least use a gcc version recent enough to
+get the best SIMD support for your CPU, since GROMACS always runs some
+code on the CPU. It is most reliable to use the same C++ compiler
+version for GROMACS code as used as the back-end compiler for nvcc,
+but it could be faster to mix compiler versions to suit particular
+contexts.
+
+### MPI support ###
+
+If you wish to run in parallel on multiple machines across a network,
+you will need to have
+
+* an MPI library installed that supports the MPI 1.3
+ standard, and
+* wrapper compilers that will compile code using that library.
+
+The GROMACS team recommends [OpenMPI](http://www.open-mpi.org) version
+1.6 (or higher), [MPICH](http://www.mpich.org) version 1.4.1 (or
+higher), or your hardware vendor's MPI installation. The most recent
+version of any of these is likely to be the best. More specialized
+networks might depend on accelerations only available in the vendor's
+library. [LAM/MPI](http://www.lam-mpi.org) might work, but since it has
+been deprecated for years, it is not supported.
+
+Often [OpenMP](http://en.wikipedia.org/wiki/OpenMP) parallelism is an
+advantage for GROMACS, but support for this is generally built into
+your compiler and detected automatically.
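+
+For example, one common pattern for configuring an MPI build is to
+point `cmake` at your MPI wrapper compilers (a sketch; the wrapper
+names `mpicc` and `mpicxx` depend on your MPI installation):
+
+    $ CC=mpicc CXX=mpicxx cmake .. -DGMX_MPI=ON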
+
+In summary, for maximum performance you will need to examine how you
+will use GROMACS, what hardware you plan to run on, and whether you
+can afford a non-free compiler for slightly better
+performance. Unfortunately, the only way to find out is to test
+different options and parallelization schemes for the actual
+simulations you want to run. You will still get *good*
+performance with the default build and runtime options, but if you
+truly want to push your hardware to the performance limit, the days of
+just blindly starting programs with `mdrun` are gone.
+
+## CMake ##
+
+GROMACS @PROJECT_VERSION@ uses the CMake build system, and requires
+version @GMX_CMAKE_MINIMUM_REQUIRED_VERSION@ or higher. Lower versions
+will not work. You can check whether CMake is installed, and what
+version it is, with `cmake --version`. If you need to install CMake,
+then first check whether your platform's package management system
+provides a suitable version, or visit
+<http://www.cmake.org/cmake/help/install.html> for pre-compiled
+binaries, source code and installation instructions. The GROMACS team
+recommends you install the most recent version of CMake you can.
+
+## Fast Fourier Transform library ##
+
+Many simulations in GROMACS make extensive use of fast Fourier
+transforms, and a software library to perform these is always
+required. We recommend [FFTW](http://www.fftw.org) (version 3 or
+higher only) or
+[Intel MKL](http://software.intel.com/en-us/intel-mkl). The choice of
+library can be set with `cmake -DGMX_FFT_LIBRARY=<name>`, where
+`<name>` is one of `fftw`, `mkl`, or `fftpack`. FFTPACK is bundled
+with GROMACS as a fallback, and is acceptable if mdrun performance is
+not a priority.
+
+### FFTW ###
+
+FFTW is likely to be available for your platform via its package
+management system, but there can be compatibility and significant
+performance issues associated with these packages. In particular,
+GROMACS simulations are normally run in single floating-point
+precision whereas the default FFTW package is normally in double
+precision, and good compiler options to use for FFTW when linked to
+GROMACS may not have been used. Accordingly, the GROMACS team
+recommends either
+
+* that you permit the GROMACS installation to download and
+ build FFTW from source automatically for you (use
+ `cmake -DGMX_BUILD_OWN_FFTW=ON`), or
+* that you build FFTW from the source code.
+
+Note that the GROMACS-managed download of the FFTW tarball has a
+slight chance of posing a security risk. If you use this option, you
+will see a warning that advises how you can eliminate this risk
+(before the opportunity has arisen).
+
+If you build FFTW from source yourself, get the most recent version
+and follow its [installation
+guide](http://www.fftw.org/doc/Installation-and-Customization.html#Installation-and-Customization).
+Choose the precision (i.e. single or float vs. double) to match what
+you will later require for GROMACS. There is no need to compile with
+threading or MPI support, but it does no harm. On x86 hardware,
+compile *only* with `--enable-sse2` (regardless of precision) even if
+your processors can take advantage of AVX extensions. Since GROMACS
+uses fairly short transform lengths we do not benefit from the FFTW
+AVX acceleration, and because of memory system performance
+limitations, it can even degrade GROMACS performance by around
+20%. There is no way for GROMACS to limit the use to SSE2 SIMD at run
+time if AVX support has been compiled into FFTW, so you need to set
+this at compile time.
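+
+As a sketch (assuming an unpacked FFTW 3.x source tree, and an
+installation prefix of your choice), a single-precision FFTW build
+suitable for GROMACS might look like:
+
+    $ ./configure --enable-sse2 --enable-float --prefix=$HOME/fftw
+    $ make -j 4
+    $ make install
+
+You can then help `cmake` find this installation, e.g. with
+`CMAKE_PREFIX_PATH=$HOME/fftw cmake ..`.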
+
+### MKL ###
+
+Using MKL with the Intel Compilers version 11 or higher is very
+simple. Set up your compiler environment correctly, perhaps with a
+command like `source /path/to/compilervars.sh intel64` (or consult
+your local documentation). Then set `-DGMX_FFT_LIBRARY=mkl` when you
+run cmake. In this case, GROMACS will also use MKL for BLAS and LAPACK
+(see
+[linear algebra libraries](#linear-algebra-libraries)). Generally,
+there is no advantage in using MKL with GROMACS, and FFTW is often
+faster.
+
+Otherwise, you can get your hands dirty and configure MKL by setting
+
+ -DGMX_FFT_LIBRARY=mkl
+ -DMKL_LIBRARIES="/full/path/to/libone.so;/full/path/to/libtwo.so"
+ -DMKL_INCLUDE_DIR="/full/path/to/mkl/include"
+
+where the full list (and order!) of libraries you require is found in
+Intel's MKL documentation for your system.
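+
+For instance (a sketch only; the exact library names, paths and order
+for your MKL version must come from Intel's documentation, e.g. their
+link-line advisor):
+
+    $ cmake .. -DGMX_FFT_LIBRARY=mkl \
+          -DMKL_LIBRARIES="/opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so;/opt/intel/mkl/lib/intel64/libmkl_sequential.so;/opt/intel/mkl/lib/intel64/libmkl_core.so" \
+          -DMKL_INCLUDE_DIR="/opt/intel/mkl/include"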
+
+## Optional build components ##
+
+* Compiling to run on NVIDIA GPUs requires CUDA
+* An external Boost library can be used to provide better
+ implementation support for smart pointers and exception handling,
+ but the GROMACS source bundles a subset of Boost 1.55.0 as a fallback
+* Hardware-optimized BLAS and LAPACK libraries are useful
+ for a few of the GROMACS utilities focused on normal modes and
+ matrix manipulation, but they do not provide any benefits for normal
+  simulations. Configuring these is discussed at
+ [linear algebra libraries](#linear-algebra-libraries).
+* The built-in GROMACS trajectory viewer `gmx view` requires X11 and
+ Motif/Lesstif libraries and header files. You may prefer to use
+ third-party software for visualization, such as
+ [VMD](http://www.ks.uiuc.edu/Research/vmd) or
+ [PyMOL](http://www.pymol.org).
+* An external TNG library for trajectory-file handling can be used,
+ but TNG 1.6 is bundled in the GROMACS source already
+* zlib is used by TNG for compressing some kinds of trajectory data
+* Running the GROMACS test suite requires libxml2
+* Building the GROMACS documentation requires ImageMagick, pdflatex,
+ bibtex, doxygen and pandoc.
+* The GROMACS utility programs often write data files in formats
+ suitable for the Grace plotting tool, but it is straightforward to
+ use these files in other plotting programs, too.
+
+# Doing a build of GROMACS #
+
+This section will cover a general build of GROMACS with CMake, but it
+is not an exhaustive discussion of how to use CMake. There are many
+resources available on the web, which we suggest you search for when
+you encounter problems not covered here. The material below applies
+specifically to builds on Unix-like systems, including Linux, and Mac
+OS X. For other platforms, see the specialist instructions below.
+
+## Configuring with CMake ##
+
+CMake will run many tests on your system and do its best to work out
+how to build GROMACS for you. If your build machine is the same as
+your target machine, then you can be sure that the defaults will be
+pretty good. The build configuration will for instance attempt to
+detect the specific hardware instructions available in your
+processor. However, if you want to control aspects of the build, or
+you are compiling on a cluster head node for back-end nodes with a
+different architecture, there are plenty of things you can set
+manually.
+
+The best way to use CMake to configure GROMACS is to do an
+"out-of-source" build, by making another directory from which you will
+run CMake. This can be outside the source directory, or a subdirectory
+of it. It also means you can never corrupt your source code by trying
+to build it! So, the only required argument on the CMake command line
+is the name of the directory containing the `CMakeLists.txt` file of
+the code you want to build. For example, download the source tarball
+and use
+
+ $ tar xfz gromacs-@PROJECT_VERSION@.tgz
+ $ cd gromacs-@PROJECT_VERSION@
+ $ mkdir build-gromacs
+ $ cd build-gromacs
+ $ cmake ..
+
+You will see `cmake` report a sequence of results of tests and
+detections done by the GROMACS build system. These are written to the
+`cmake` cache, kept in `CMakeCache.txt`. You can edit this file by
+hand, but this is not recommended because you could make a mistake.
+You should not attempt to move or copy this file to do another build,
+because file paths are hard-coded within it. If you mess things up,
+just delete this file and start again with `cmake`.
+
+If there is a serious problem detected at this stage, then you will see
+a fatal error and some suggestions for how to overcome it. If you are
+not sure how to deal with that, please start by searching on the web
+(most computer problems already have known solutions!) and then
+consult the gmx-users mailing list. There are also informational
+warnings that you might like to take on board or not. Piping the
+output of `cmake` through `less` or `tee` can be
+useful, too.
+
+Once `cmake` returns, you can see all the settings that were chosen
+and information about them by using e.g. the curses interface
+
+ $ ccmake ..
+
+You can actually use `ccmake` (available on most Unix platforms,
+if the curses library is supported) directly in the first step, but then
+most of the status messages will merely blink in the lower part
+of the terminal rather than be written to standard out. Most platforms
+including Linux, Windows, and Mac OS X even have native graphical user interfaces for
+`cmake`, and it can create project files for almost any build environment
+you want (including Visual Studio or Xcode).
+Check out <http://www.cmake.org/cmake/help/runningcmake.html> for
+general advice on what you are seeing and how to navigate and change
+things. The settings you might normally want to change are already
+presented. You may make changes, then re-configure (using `c`), so that it
+gets a chance to make changes that depend on yours and perform more
+checking. This might require several configuration stages when you are
+using `ccmake` - when you are using `cmake` the
+iteration is done behind the scenes.
+
+A key thing to consider here is the setting of
+`CMAKE_INSTALL_PREFIX`. You will need to be able to write to this
+directory in order to install GROMACS later, and if you change your
+mind later, changing it in the cache triggers a full re-build,
+unfortunately. So if you do not have super-user privileges on your
+machine, then you will need to choose a sensible location within your
+home directory for your GROMACS installation. Even if you do have
+super-user privileges, you should use them only for the installation
+phase, and never for configuring, building, or running GROMACS!
+
+When `cmake` or `ccmake` have completed iterating, the
+cache is stable and a build tree can be generated, with `g` in
+`ccmake` or automatically with `cmake`.
+
+You cannot change the compilers after the initial run of
+`cmake`. If you need to change them, delete the `CMakeCache.txt` file
+(or the whole build tree) and start again.
+
+### Using CMake command-line options ###
+
+Once you become comfortable with setting and changing options, you may
+know in advance how you will configure GROMACS. If so, you can speed
+things up by invoking `cmake` and passing the various options at once
+on the command line. This is done by setting cache variables at the
+`cmake` invocation using the `-DOPTION=VALUE` syntax. Note that some
+environment variables are also taken into account, in particular
+variables like `CC`, `CXX` and `FCC` (which may be familiar to
+autoconf users).
+
+For example, the following command line
+
+ $ cmake .. -DGMX_GPU=ON -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/marydoe/programs
+
+can be used to build with GPUs, MPI and install in a custom
+location. You can even save that in a shell script to make it even
+easier next time. You can also do this kind of thing with `ccmake`,
+but you should avoid this, because the options set with `-D` cannot
+then be changed interactively in that run of `ccmake`.
+
+### SIMD support ###
+
+GROMACS has extensive support for detecting and using the SIMD
+capabilities of many modern HPC CPU architectures. If you are building
+GROMACS on the same hardware you will run it on, then you don't need
+to read more about this, unless you are getting configuration warnings
+you do not understand. By default, the GROMACS build system will
+detect the SIMD instruction set supported by the CPU architecture (on
+which the configuring is done), and thus pick the best
+available SIMD parallelization supported by GROMACS. The build system
+will also check that the compiler and linker used also support the
+selected SIMD instruction set and issue a fatal error if they
+do not.
+
+These choices are controlled by the CMake variable `GMX_SIMD`. Valid
+values are listed below, and the applicable value lowest on the list
+is generally the one you should choose:
+
+1. `None` For use only on an architecture either lacking SIMD,
+ or to which GROMACS has not yet been ported and none of the
+ options below are applicable.
+2. `SSE2` This SIMD instruction set was introduced in Intel
+ processors in 2001, and AMD in 2003. Essentially all x86
+ machines in existence have this, so it might be a good choice if
+ you need to support dinosaur x86 computers too.
+3. `SSE4.1` Present in all Intel core processors since 2007,
+ but notably not in AMD magny-cours. Still, almost all recent
+ processors support this, so this can also be considered a good
+ baseline if you are content with portability between reasonably
+ modern processors.
+4. `AVX_128_FMA` AMD Bulldozer processors (2011) have this.
+   Unfortunately Intel and AMD have diverged in the last few years;
+   if you want good performance on modern AMD processors
+   you have to use this, since it also allows the rest of the
+   code to use AMD 4-way fused multiply-add instructions. The drawback
+   is that your code will not run on Intel processors at all.
+5. `AVX_256` This instruction set is present on Intel processors
+   since Sandy Bridge (2011), where it is the best choice unless
+   you have an even more recent CPU that supports AVX2. While this
+   code will work on recent AMD processors, it is significantly
+   less efficient than the `AVX_128_FMA` choice above - do not be
+   fooled into assuming that 256 is better than 128 in this case.
+6. `AVX2_256` Present on Intel Haswell processors released in 2013,
+ and it will also enable Intel 3-way fused multiply-add instructions.
+ This code will not work on AMD CPUs.
+7. `IBM_QPX` BlueGene/Q A2 cores have this.
+8. `Sparc64_HPC_ACE` Fujitsu machines like the K computer have this.
+
+The CMake configure system will check that the compiler you have
+chosen can target the architecture you have chosen. `mdrun` will check
+further at runtime, so if in doubt, choose the lowest setting you
+think might work, and see what `mdrun` says. The configure system also
+works around many known issues in many versions of common HPC
+compilers. However, since the options also enable general compiler
+flags for the platform in question, you can end up in situations
+where e.g. an `AVX_128_FMA` binary will just crash on any
+Intel machine, since the code will try to execute general illegal
+instructions (inserted by the compiler) before `mdrun` gets to the
+architecture detection routines.
+
+A further `GMX_SIMD=Reference` option exists, which is a special
+SIMD-like implementation written in plain C that developers can use
+when developing support in GROMACS for new SIMD architectures. It is
+not designed for use in production simulations, but if you are using
+an architecture with SIMD support to which GROMACS has not yet been
+ported, you may wish to try this option instead of the default
+`GMX_SIMD=None`, as it can often outperform this when the
+auto-vectorization in your compiler does a good job. Please also post
+on the GROMACS mailing lists, because GROMACS can probably be ported
+to new SIMD architectures in a few days.
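+
+For example, to override the automatic detection and select a
+specific level from the list above:
+
+    $ cmake .. -DGMX_SIMD=SSE2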
+
+### CMake advanced options ###
+
+The options that are displayed in the default view of `ccmake` are
+ones that we think a reasonable number of users might want to consider
+changing. There are a lot more options available, which you can see by
+toggling the advanced mode in `ccmake` on and off with `t`. Even
+there, most of the variables that you might want to change have a
+`CMAKE_` or `GMX_` prefix. There are also some options that will be
+visible or not according to whether their preconditions are satisfied.
+
+### Helping CMake find the right libraries/headers/programs ###
+
+If libraries are installed in non-default locations their location can
+be specified using the following environment variables:
+
+* `CMAKE_INCLUDE_PATH` for header files
+* `CMAKE_LIBRARY_PATH` for libraries
+* `CMAKE_PREFIX_PATH` for header, libraries and binaries
+ (e.g. `/usr/local`).
+
+The respective `include`, `lib`, or `bin` directory is
+appended to the path. For each of these variables, a list of paths can
+be specified (on Unix, separated with ":"). Note that these are
+environment variables (and not `cmake` command-line arguments) and in
+a `bash` shell are used like:
+
+ $ CMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda cmake ..
+
+Alternatively, these variables are also `cmake` options, so they can
+be set like `-DCMAKE_PREFIX_PATH=/opt/fftw:/opt/cuda`.
+
+The `CC` and `CXX` environment variables are also useful
+for indicating to `cmake` which compilers to use, which can be very
+important for maximising GROMACS performance. Similarly,
+`CFLAGS`/`CXXFLAGS` can be used to pass compiler
+options, but note that these will be appended to those set by
+GROMACS for your build platform and build type. You can customize
+some of this with advanced options such as `CMAKE_C_FLAGS`
+and its relatives.
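+
+For example (the compiler names here are only illustrative; use
+whatever suits your system):
+
+    $ CC=icc CXX=icpc cmake ..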
+
+See also: <http://cmake.org/Wiki/CMake_Useful_Variables#Environment_Variables>
+
+### Native GPU acceleration ###
+If you have the CUDA Toolkit installed, you can use `cmake` with:
+
+ $ cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
+
+(or whichever path has your installation). In some cases, you might
+need to specify manually which of your C++ compilers should be used,
+e.g. with the advanced option `CUDA_HOST_COMPILER`.
+
+The GPU acceleration has been tested on AMD64/x86-64 platforms with
+Linux, Mac OS X and Windows operating systems, but Linux is the
+best-tested and supported of these. Linux running on ARM v7 (32 bit)
+CPUs also works.
+
+### Static linking ###
+Dynamic linking of the GROMACS executables will lead to a
+smaller disk footprint when installed, and so is the default on
+platforms where we believe it has been tested repeatedly and found to work.
+In general, this includes Linux, Windows, Mac OS X and BSD systems.
+Static binaries take much more space, but on some hardware and/or under
+some conditions they are necessary, most commonly when you are running a parallel
+simulation using MPI libraries (e.g. BlueGene, Cray).
+
+* To link GROMACS binaries
+statically against the internal GROMACS libraries, set
+`-DBUILD_SHARED_LIBS=OFF`.
+* To link statically against external (non-system) libraries as well,
+the `-DGMX_PREFER_STATIC_LIBS=ON` option can be used. Note that, in
+general, `cmake` picks up whatever is available, so this option only
+instructs `cmake` to prefer static libraries when both static and
+shared are available. If no static version of an external library is
+available, even when the aforementioned option is `ON`, the shared
+library will be used. Also note that the resulting binaries will
+still be dynamically linked against system libraries on platforms
+where that is the default. To use static system libraries, additional
+compiler/linker flags are necessary, e.g. `-static-libgcc
+-static-libstdc++`.
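+
+Putting these together, a fully static build attempt might be
+configured like this (a sketch; whether it succeeds depends on which
+static libraries your system provides):
+
+    $ cmake .. -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON \
+          -DCMAKE_EXE_LINKER_FLAGS="-static-libgcc -static-libstdc++"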
+
+### Portability aspects ###
+
+Here, we consider portability aspects related to CPU instruction sets;
+for details on other topics, like static vs. dynamic linking, please
+consult the relevant parts of this documentation or other
+non-GROMACS-specific resources.
+
+A GROMACS build will normally not be portable, not even across
+hardware with the same base instruction set like x86. Non-portable
+hardware-specific optimizations are selected at configure-time, such
+as the SIMD instruction set used in the compute-kernels. This
+selection will be done by the build system based on the capabilities
+of the build host machine or based on cross-compilation information
+provided to `cmake` at configuration.
+
+Often it is possible to ensure portability by choosing the least
+common denominator of SIMD support, e.g. SSE2 for x86, and ensuring
+that you use `cmake -DGMX_USE_RDTSCP=off` if any of the target CPU
+architectures does not support the `RDTSCP` instruction. However, we
+discourage attempts to use a single GROMACS installation when the
+execution environment is heterogeneous, such as a mix of AVX and
+earlier hardware, because this will lead to programs (especially
+`mdrun`) that run slowly on the new hardware. Building two full
+installations and locally managing how to call the correct one
+(e.g. using the module system) is the recommended
+approach. Alternatively, as at the moment the GROMACS tools do not
+make strong use of SIMD acceleration, it can be convenient to create
+an installation with tools portable across different x86 machines, but
+with separate `mdrun` binaries for each architecture. To achieve this,
+one can first build a full installation with the
+least-common-denominator SIMD instruction set, e.g. `-DGMX_SIMD=SSE2`,
+then build separate `mdrun` binaries for each architecture present in
+the heterogeneous environment. By using custom binary and library
+suffixes for the `mdrun`-only builds, these can be installed to the
+same location as the "generic" tools installation. Building [only the
+`mdrun` binary](#building-only-mdrun) is possible by setting the `-DGMX_BUILD_MDRUN_ONLY=ON`
+option.
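+
+As a sketch of the two-step approach above (the suffix `_avx` and the
+directory names are only illustrative choices):
+
+    $ mkdir build-tools && cd build-tools
+    $ cmake .. -DGMX_SIMD=SSE2 -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
+    $ make -j 4 install
+    $ cd .. && mkdir build-mdrun-avx && cd build-mdrun-avx
+    $ cmake .. -DGMX_BUILD_MDRUN_ONLY=ON -DGMX_SIMD=AVX_256 \
+          -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_avx \
+          -DGMX_LIBS_SUFFIX=_avx \
+          -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
+    $ make -j 4 install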
+
+### Linear algebra libraries ###
+
+As mentioned above, sometimes vendor BLAS and LAPACK libraries
+can provide performance enhancements for GROMACS when doing
+normal-mode analysis or covariance analysis. For simplicity, the text
+below will refer only to BLAS, but the same options are available
+for LAPACK. By default, CMake will search for BLAS, use it if it
+is found, and otherwise fall back on a version of BLAS internal to
+GROMACS. The `cmake` option `-DGMX_EXTERNAL_BLAS=on` will be set
+accordingly. The internal versions are fine for normal use. If you
+need to specify a non-standard path to search, use
+`-DCMAKE_PREFIX_PATH=/path/to/search`. If you need to specify a
+library with a non-standard name (e.g. ESSL on AIX or BlueGene), then
+set `-DGMX_BLAS_USER=/path/to/reach/lib/libwhatever.a`.
+
+If you are using Intel MKL for FFT, then the BLAS and
+LAPACK it provides are used automatically. This could be
+over-ridden with `GMX_BLAS_USER`, etc.
+
+On Apple platforms where the Accelerate Framework is available, these
+will be automatically used for BLAS and LAPACK. This could be
+over-ridden with `GMX_BLAS_USER`, etc.
+
+### Changing the names of GROMACS binaries and libraries ###
+
+It is sometimes convenient to have different versions of the same
+GROMACS programs installed. The most common use cases have been single
+and double precision, and with and without MPI. This mechanism can
+also be used to install side-by-side multiple versions of `mdrun`
+optimized for different CPU architectures, as mentioned previously.
+
+By default, GROMACS will suffix programs and libraries for such builds
+with `_d` for double precision and/or `_mpi` for MPI (and nothing
+otherwise). This can be controlled manually with `GMX_DEFAULT_SUFFIX
+(ON/OFF)`, `GMX_BINARY_SUFFIX` (takes a string) and `GMX_LIBS_SUFFIX`
+(also takes a string). For instance, to set a custom suffix for
+programs and libraries, one might specify:
+
+ cmake .. -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod
+
+Thus the names of all programs and libraries will be appended with
+`_mod`.
+
+## Building GROMACS ##
+
+Once you have configured with `cmake`, you can build GROMACS. It is
+expected that the `make` procedure will always complete successfully,
+and give few or no warnings. The tests GROMACS makes on the settings
+you choose are pretty extensive, but there are probably a few cases we
+have not thought of yet. Search the web first for solutions to
+problems, but if you need help, ask on gmx-users, being sure to
+provide as much information as possible about what you did, the system
+you are building on, and what went wrong. This may mean scrolling back
+a long way through the output of `make` to find the first error
+message!
+
+If you have a multi-core or multi-CPU machine with `N`
+processors, then using
+
+    $ make -j N
+
+will generally speed things up by quite a bit. Other build systems
+supported by `cmake` (e.g. `ninja`) also work well.
+
+### Building only mdrun ###
+
+Past versions of the build system offered "mdrun" and "install-mdrun"
+targets (similarly for other programs too) to build and install only
+the mdrun program, respectively. Such a build is useful when the
+configuration is only relevant for `mdrun` (such as with
+parallelization options for MPI, SIMD, GPUs, or on BlueGene or Cray),
+or the length of time for the compile-link-install cycle is relevant
+when developing.
+
+This is now supported with the `cmake` option
+`-DGMX_BUILD_MDRUN_ONLY=ON`, which will build a cut-down version of
+`libgromacs` and/or the `mdrun` program (according to whether shared
+or static). Naturally, `make install` then installs only those
+products. mdrun-only builds default to static linking
+against GROMACS libraries, because this is generally a good idea for
+the targets for which an mdrun-only build is desirable. If you re-use
+a build tree and change to the mdrun-only build, then you will inherit
+the setting for `BUILD_SHARED_LIBS` from the old build, and will be
+warned that you may wish to manage `BUILD_SHARED_LIBS` yourself.
+
+## Installing GROMACS ##
+
+Finally, `make install` will install GROMACS in the
+directory given in `CMAKE_INSTALL_PREFIX`. If this is a system
+directory, then you will need permission to write there, and you
+should use super-user privileges only for `make install` and
+not the whole procedure.
+
+## Getting access to GROMACS after installation ##
+
+GROMACS installs the script `GMXRC` in the `bin`
+subdirectory of the installation directory
+(e.g. `/usr/local/gromacs/bin/GMXRC`), which you should source
+from your shell:
+
+ $ source /your/installation/prefix/here/bin/GMXRC
+
+It will detect what kind of shell you are running and set up your
+environment for using GROMACS. You may wish to arrange for your
+login scripts to do this automatically; please search the web for
+instructions on how to do this for your shell.
+
+Many of the GROMACS programs rely on data installed in the
+`share/gromacs` subdirectory of the installation directory. By
+default, the programs will use the environment variables set in the
+`GMXRC` script, and if this is not available they will try to guess the
+path based on their own location. This usually works well unless you
+change the names of directories inside the install tree. If you still
+need to do that, you might want to recompile with the new install
+location properly set, or edit the `GMXRC` script.
+
+## Testing GROMACS for correctness ##
+
+Since 2011, GROMACS development has used an automated system where
+every new code change is subject to regression testing on a number of
+platforms and software combinations. While this improves
+reliability quite a lot, not everything is tested, and since we
+increasingly rely on cutting edge compiler features there is
+non-negligible risk that the default compiler on your system could
+have bugs. We have tried our best to test and refuse to use known bad
+versions in `cmake`, but we strongly recommend that you run through
+the tests yourself. It only takes a few minutes, after which you can
+trust your build.
+
+The simplest way to run the checks is to build GROMACS with
+`-DREGRESSIONTEST_DOWNLOAD=ON`, and run `make check`.
+GROMACS will automatically download and run the tests for you.
+Alternatively, you can download and unpack the tarball yourself from
+<http://gerrit.gromacs.org/download/regressiontests-@REGRESSIONTEST_VERSION@.tar.gz>,
+and use the advanced `cmake` option `REGRESSIONTEST_PATH` to
+specify the path to the unpacked tarball, which will then be used for
+testing. If the above does not work, then please read on.
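+
+In terms of commands, the simple route from your build directory is:
+
+    $ cmake .. -DREGRESSIONTEST_DOWNLOAD=ON
+    $ make check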
+
+The regression tests are available from the GROMACS website and ftp
+site. Once you have downloaded them, unpack the tarball, source
+`GMXRC` as described above, and run `./gmxtest.pl all`
+inside the regression tests folder. You can find more options
+(e.g. adding `double` when using double precision, or
+`-only expanded` to run just the tests whose names match
+"expanded") if you just execute the script without options.
+
+Hopefully, you will get a report that all tests have passed. If there
+are individual failed tests, it could be a sign of a compiler bug, or
+that a tolerance is just a tiny bit too tight. Check the output files
+the script directs you to, and try a different or newer compiler if
+the errors appear to be real. If you cannot get it to pass the
+regression tests, you might try dropping a line to the gmx-users
+mailing list, but then you should include a detailed description of
+your hardware, and the output of `mdrun -version` (which contains
+valuable diagnostic information in the header).
+
+A build with `-DGMX_BUILD_MDRUN_ONLY=ON` cannot be tested with
+`make check` from the build tree, because most of the tests
+require a full build to run things like `grompp`. To test such an
+mdrun fully requires installing it to the same location as a normal
+build of GROMACS, downloading the regression tests tarball manually
+as described above, sourcing the correct `GMXRC` and running the
+perl script manually. For example, from your GROMACS source
+directory:
+
+ $ mkdir build-normal
+ $ cd build-normal
+ $ cmake .. -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
+ $ make -j 4
+ $ make install
+ $ cd ..
+ $ mkdir build-mdrun-only
+ $ cd build-mdrun-only
+ $ cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_BUILD_MDRUN_ONLY=ON -DCMAKE_INSTALL_PREFIX=/your/installation/prefix/here
+ $ make -j 4
+ $ make install
+ $ cd /to/your/unpacked/regressiontests
+ $ source /your/installation/prefix/here/bin/GMXRC
+ $ ./gmxtest.pl all -np 2
+
+If your `mdrun` program has been suffixed in a non-standard way, then
+the `./gmxtest.pl -mdrun` option will let you specify that name to the
+test machinery. You can use `./gmxtest.pl -double` to test the
+double-precision version. You can use `./gmxtest.pl -crosscompiling`
+to stop the test harness attempting to check that the programs can
+be run.
+
+
+## Testing GROMACS for performance ##
+We are still working on a set of benchmark systems for testing
+the performance of GROMACS. Until that is ready, we recommend that
+you try a few different parallelization options, and experiment with
+tools such as `gmx tune_pme`.
+
+## Having difficulty? ##
+You are not alone - this can be a complex task! If you encounter a
+problem with installing GROMACS, then there are a number of
+locations where you can find assistance. It is recommended that you
+follow these steps to find the solution:
+
+1. Read the installation instructions again, taking note that you
+ have followed each and every step correctly.
+
+2. Search the GROMACS website and users mailing list for information
+ on the error. Adding
+ "site:https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users"
+ to a Google search may help filter better results.
+
+3. Search the internet using a search engine such as Google.
+
+4. Post to the GROMACS users mailing list gmx-users for
+ assistance. Be sure to give a full description of what you have
+ done and why you think it did not work. Give details about the
+ system on which you are installing. Copy and paste your command
+ line and as much of the output as you think might be relevant -
+ certainly from the first indication of a problem. In particular,
+ please try to include at least the header from the mdrun logfile,
+ and preferably the entire file. People who might volunteer to help
+ you do not have time to ask you interactive detailed follow-up
+ questions, so you will get an answer faster if you provide as much
+ information as you think could possibly help. High quality bug
+ reports tend to receive rapid high quality answers.
+
+# Special instructions for some platforms #
+
+## Building on Windows ##
+
+Building on Windows using native compilers is rather similar to
+building on Unix, so please start by reading the above. Then, download
+and unpack the GROMACS source archive. Make a folder in which to do
+the out-of-source build of GROMACS. For example, make it within the
+folder unpacked from the source archive, and call it `build-gromacs`.
+
+For CMake, you can either use the graphical user interface provided on
+Windows, or you can use a command line shell with instructions similar
+to the UNIX ones above. If you open a shell from within your IDE
+(e.g. Microsoft Visual Studio), it will configure the environment for
+you, but you might need to tweak this in order to get either a 32-bit
+or 64-bit build environment. The latter provides the fastest
+executable. If you use a normal Windows command shell, then you will
+need to either set up the environment to find your compilers and
+libraries yourself, or run the `vcvarsall.bat` batch script provided
+by MSVC (just like sourcing a bash script under Unix).
+
+With the graphical user interface, you will be asked about what
+compilers to use at the initial configuration stage, and if you use
+the command line they can be set in a similar way as under UNIX. You
+will probably make your life easier and faster by using the new
+facility to download and install FFTW automatically.
+
+For the build, you can either load the generated solutions file into
+e.g. Visual Studio, or use the command line with `cmake --build` so
+the right tools get used.
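+
+As a sketch, a command-line build from a plain Windows command shell
+might look like the following (the Visual Studio path and generator
+name here are only examples; adjust them to match your installation):
+
+    > call "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\vcvarsall.bat" amd64
+    > cmake .. -G "Visual Studio 11 Win64"
+    > cmake --build . --config Release
+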
+
+## Building on Cray ##
+
+GROMACS builds mostly out of the box on modern Cray machines, but
+* you may need to specify the use of static or dynamic libraries
+ (depending on the machine) with `-DBUILD_SHARED_LIBS=off`,
+* you may need to set the F77 environment variable to `ftn` when
+ compiling FFTW,
+* you may need to use `-DCMAKE_SKIP_RPATH=YES`, and
+* you may need to modify the CMakeLists.txt files to specify the
+ `BUILD_SEARCH_END_STATIC` target property.
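+
+As a sketch, a configuration combining some of these workarounds might
+look like the following; whether each option is actually needed
+depends on the machine:
+
+    $ cmake .. -DBUILD_SHARED_LIBS=off -DCMAKE_SKIP_RPATH=YES -DGMX_MPI=ON
+    $ make -j 4
+    $ make install
+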
+
+## Building on BlueGene ##
+
+### BlueGene/Q ###
+
+There is currently native acceleration on this platform for the Verlet
+cut-off scheme. There are no plans to provide accelerated kernels for
+the group cut-off scheme, but the default plain C kernels will work
+(slowly).
+
+Only static linking with XL compilers is supported by GROMACS. Dynamic
+linking would be supported by the architecture and GROMACS, but has no
+advantages other than disk space, and is generally discouraged on
+BlueGene for performance reasons.
+
+Computation on BlueGene floating-point units is always done in
+double-precision. However, mixed-precision builds of GROMACS are still
+normal and encouraged since they use cache more efficiently. The
+BlueGene hardware automatically converts values stored in single
+precision in memory to double precision in registers for computation,
+converts the results back to single precision correctly, and does so
+for no additional cost. As with other platforms, doing the whole
+computation in double precision normally shows no improvement in
+accuracy and costs twice as much time moving memory around.
+
+You need to arrange for FFTW to be installed correctly, following the
+above instructions.
+
+`mpicc` is used for compiling and linking. This can make it awkward to
+attempt to use IBM's optimized BLAS/LAPACK called ESSL (see the
+section on
+[linear algebra libraries](#linear-algebra-libraries)). Since mdrun is
+the only part of GROMACS that should normally run on the compute
+nodes, and mdrun has almost no need for linear algebra support, it is
+recommended to use the GROMACS built-in linear algebra routines -
+these rarely run slowly.
+
+The recommended configuration is to use
+
+ cmake .. -DCMAKE_TOOLCHAIN_FILE=Platform/BlueGeneQ-static-XL-CXX \
+ -DCMAKE_PREFIX_PATH=/your/fftw/installation/prefix \
+ -DGMX_MPI=ON \
+ -DGMX_BUILD_MDRUN_ONLY=ON
+ make
+ make install
+
+which will build a statically-linked MPI-enabled mdrun for the compute
+nodes. Otherwise, GROMACS default configuration behaviour applies.
+
+It is possible to configure and make the remaining GROMACS tools with
+the compute-node toolchain, but as none of those tools are MPI-aware
+and could then only run on the compute nodes, this would not normally
+be useful. Instead, these should be planned to run on the login node,
+and a separate GROMACS installation performed for that using the login
+node's toolchain - not the above platform file, or any other
+compute-node toolchain.
+
+Note that only the MPI build is available for the compute-node
+toolchains. The GROMACS thread-MPI or no-MPI builds are not useful at
+all on BlueGene/Q.
+
+### BlueGene/P ###
+
+There is currently no SIMD support on this platform and no plans to
+add it. The default plain C kernels will work.
+
+### Fujitsu PRIMEHPC ###
+
+This is the architecture of the K computer, which uses Fujitsu
+`Sparc64VIIIfx` chips. On this platform, GROMACS @PROJECT_VERSION@ has
+accelerated group kernels, no accelerated Verlet kernels, and a custom
+build toolchain.
+
+### Intel Xeon Phi ###
+
+GROMACS @PROJECT_VERSION@ has preliminary support for Intel Xeon Phi. Only symmetric
+(aka native) mode is supported. GROMACS is functional on Xeon Phi, but
+it has so far not been optimized to the same level as other
+architectures. Performance depends on factors such as system size,
+and for now might not be faster than on regular CPUs. Building for
+Xeon Phi works almost like on any other Unix platform. See the
+instructions above for details. The recommended configuration is
+
+ cmake .. -DCMAKE_TOOLCHAIN_FILE=Platform/XeonPhi
+ make
+ make install
+
+# Tested platforms #
+
+While it is our best belief that GROMACS will build and run pretty
+much everywhere, it is important that we tell you where we really know
+it works because we have tested it. We do test on Linux, Windows, and
+Mac with a range of compilers and libraries for a range of our
+configuration options. Every commit in our git source code repository
+is currently tested on x86 with gcc versions 4.4 through 4.7,
+versions 12 and 13 of the Intel compiler, and Clang versions 3.1
+through 3.4. For this, we use a variety of GNU/Linux flavors and
+versions as well as recent versions of Mac OS X. Under
+Windows we test both MSVC and the Intel compiler. For details, you can
+have a look at the continuous integration server at
+<http://jenkins.gromacs.org>.
+
+We test irregularly on ARM v7, BlueGene/Q, Cray, Fujitsu PRIMEHPC, Google
+Native Client and other environments, and with other compilers and
+compiler versions, too.