correlated motions. It uses the covariance matrix :math:`C` of the
atomic coordinates:

.. math:: C_{ij} = \left \langle
          M_{ii}^{\frac{1}{2}} (x_i - \langle x_i \rangle)
          M_{jj}^{\frac{1}{2}} (x_j - \langle x_j \rangle)
          \right \rangle
          :label: eqncovmatrixcoord

where :math:`M` is a diagonal matrix containing the masses of the atoms
(mass-weighted analysis) or the unit matrix (non-mass weighted
analysis). :math:`C` is a symmetric :math:`3N \times 3N` matrix, which
can be diagonalized with an orthonormal transformation matrix :math:`R`:

.. math:: R^T C R = \mbox{diag}(\lambda_1,\lambda_2,\ldots,\lambda_{3N})
          ~~~~\mbox{where}~~\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{3N}
          :label: eqnorthnormtransformmatrix

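
As a minimal illustration of these two steps, the following NumPy sketch builds the covariance matrix from a synthetic trajectory (random data standing in for real MD coordinates; on actual trajectories this analysis is performed by the ``gmx covar`` tool) and diagonalizes it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an MD trajectory: 200 frames of N = 4 atoms,
# stored as flattened 3N-dimensional coordinate vectors.
n_frames, n_atoms = 200, 4
traj = rng.standard_normal((n_frames, 3 * n_atoms))

masses = np.ones(n_atoms)                # unit masses: non-mass-weighted analysis
m_sqrt = np.sqrt(np.repeat(masses, 3))   # diagonal of M^(1/2), one entry per coordinate

# Mass-weighted displacements from the ensemble average
dx = (traj - traj.mean(axis=0)) * m_sqrt

# Covariance matrix C_ij = < M_ii^(1/2) dx_i  M_jj^(1/2) dx_j >
C = dx.T @ dx / n_frames

# C is symmetric, so eigh yields real eigenvalues and an orthonormal R;
# sort so that lambda_1 >= lambda_2 >= ... >= lambda_3N
eigvals, R = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, R = eigvals[order], R[:, order]

# R^T C R is the diagonal matrix of eigenvalues
assert np.allclose(R.T @ C @ R, np.diag(eigvals))
```
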
The columns of :math:`R` are the eigenvectors, also called principal or
essential modes. :math:`R` defines a transformation to a new coordinate
system. The trajectory can be projected on the principal modes to give
the principal components :math:`p_i(t)`:

.. math:: {\bf p}(t) = R^T M^{\frac{1}{2}} ({\bf x}(t) - \langle {\bf x} \rangle)
          :label: eqnprinccomponents

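
A short sketch of this projection (synthetic data, unit masses so that :math:`M^{\frac{1}{2}}` drops out) also verifies that the mean square fluctuation of principal component :math:`i` is :math:`\lambda_i`:

```python
import numpy as np

rng = np.random.default_rng(1)
traj = rng.standard_normal((100, 9))      # 100 frames, N = 3 atoms, unit masses

dx = traj - traj.mean(axis=0)             # x(t) - <x>
C = dx.T @ dx / len(traj)
eigvals, R = np.linalg.eigh(C)
eigvals, R = eigvals[::-1], R[:, ::-1]    # descending eigenvalues

# p(t) = R^T (x(t) - <x>); each row of p holds the 3N principal
# components of one frame.
p = dx @ R

# The mean square fluctuation of principal component i equals lambda_i
assert np.allclose((p ** 2).mean(axis=0), eigvals)
```
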
The eigenvalue :math:`\lambda_i` is the mean square fluctuation of
principal component :math:`i`. The first few principal modes often
describe collective, global motions in the system. The trajectory can be
filtered along one (or more) principal modes. For one principal mode
:math:`i` this goes as follows:

.. math:: {\bf x}^f(t) =
          \langle {\bf x} \rangle + M^{-\frac{1}{2}} R_{ * i} \, p_i(t)
          :label: eqnprincmodei

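
The filtering step can be sketched as follows (again on synthetic data with unit masses, so :math:`M^{-\frac{1}{2}}` drops out); as a sanity check, filtering along all :math:`3N` modes must reproduce the original trajectory:

```python
import numpy as np

rng = np.random.default_rng(2)
traj = rng.standard_normal((100, 9))      # synthetic trajectory, unit masses
mean = traj.mean(axis=0)
dx = traj - mean

C = dx.T @ dx / len(traj)
_, R = np.linalg.eigh(C)
R = R[:, ::-1]                            # eigenvectors, descending eigenvalues

p = dx @ R                                # principal components, one column per mode

i = 0                                     # filter along the first principal mode
x_f = mean + np.outer(p[:, i], R[:, i])   # x^f(t) = <x> + R_{*i} p_i(t)

# Filtering along all 3N modes recovers the full trajectory
x_all = mean + p @ R.T
assert np.allclose(x_all, traj)
```
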
When the analysis is performed on a macromolecule, one often wants to
remove the overall rotation and translation to look at the internal
motion only. This can be achieved by least-squares fitting to a
reference structure. For a symmetric and diagonally-dominant matrix
:math:`A` of size :math:`3N \times 3N` the square root can be
calculated as:

.. math:: A^\frac{1}{2} =
          R \, \mbox{diag}(\lambda_1^\frac{1}{2},\lambda_2^\frac{1}{2},\ldots,\lambda_{3N}^\frac{1}{2}) \, R^T
          :label: eqnmatrixsquareroot

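
A minimal NumPy sketch of this eigendecomposition-based square root, on a small synthetic positive semi-definite matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 6))
A = X.T @ X / 50                          # symmetric positive semi-definite matrix

w, R = np.linalg.eigh(A)
# Clip tiny negative eigenvalues that can arise from round-off
A_sqrt = R @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ R.T

# The product of the square root with itself gives back A
assert np.allclose(A_sqrt @ A_sqrt, A)
```
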
It can be verified easily that the product of this matrix with itself
gives :math:`A`. Now we can define a difference :math:`d` between
covariance matrices :math:`A` and :math:`B` as follows:

.. math:: \begin{aligned}
          d(A,B) & = & \sqrt{\mbox{tr}\left(\left(A^\frac{1}{2} - B^\frac{1}{2}\right)^2\right)}
          \\ & = &
          \sqrt{\mbox{tr}\left(A + B - 2 A^\frac{1}{2} B^\frac{1}{2}\right)}
          \\ & = &
          \left( \sum_{i=1}^N \left( \lambda_i^A + \lambda_i^B \right)
          - 2 \sum_{i=1}^N \sum_{j=1}^N \sqrt{\lambda_i^A \lambda_j^B}
          \left(R_i^A \cdot R_j^B\right)^2 \right)^\frac{1}{2}\end{aligned}
          :label: eqnmatrixdiff

where tr is the trace of a matrix. We can now define the overlap
:math:`s` as:

.. math:: s(A,B) = 1 - \frac{d(A,B)}{\sqrt{\mbox{tr}A + \mbox{tr} B}}
          :label: eqnmatrixoverlap

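
The sketch below computes :math:`d(A,B)` both from the matrix square roots and from the eigenvalue/eigenvector expression, confirming that the two forms agree, and then evaluates the overlap :math:`s` (synthetic covariance matrices):

```python
import numpy as np

def sqrtm_sym(A):
    """Square root of a symmetric positive semi-definite matrix."""
    w, R = np.linalg.eigh(A)
    return R @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ R.T

rng = np.random.default_rng(4)
A = (lambda X: X.T @ X / 50)(rng.standard_normal((50, 6)))
B = (lambda X: X.T @ X / 50)(rng.standard_normal((50, 6)))

# d(A,B) from the trace of the squared difference of the square roots
D = sqrtm_sym(A) - sqrtm_sym(B)
d = np.sqrt(np.trace(D @ D))

# The same value from the eigenvalues and eigenvectors directly:
# tr(A^1/2 B^1/2) = sum_ij sqrt(lam_i^A lam_j^B) (R_i^A . R_j^B)^2
wA, RA = np.linalg.eigh(A)
wB, RB = np.linalg.eigh(B)
cross = np.sqrt(np.outer(wA, wB)) * (RA.T @ RB) ** 2
d_eig = np.sqrt(wA.sum() + wB.sum() - 2.0 * cross.sum())
assert np.isclose(d, d_eig)

# Overlap s(A,B): 0 for orthogonal sampled subspaces, 1 for identical matrices
s = 1.0 - d / np.sqrt(np.trace(A) + np.trace(B))
assert 0.0 <= s <= 1.0

# s(A,A) = 1, since d(A,A) = 0
D_AA = sqrtm_sym(A) - sqrtm_sym(A)
s_AA = 1.0 - np.sqrt(np.trace(D_AA @ D_AA)) / np.sqrt(2.0 * np.trace(A))
assert np.isclose(s_AA, 1.0)
```
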
The overlap is 1 if and only if matrices :math:`A` and :math:`B` are
identical. It is 0 when the sampled subspaces are completely orthogonal.
The similarity of a subspace spanned by a set of :math:`m` orthonormal
vectors :math:`{\bf w}_1,\ldots,{\bf w}_m` to a reference subspace
spanned by :math:`n` orthonormal vectors
:math:`{\bf v}_1,\ldots,{\bf v}_n` can be quantified as follows:

.. math:: \mbox{overlap}({\bf v},{\bf w}) =
          \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^m ({\bf v}_i \cdot {\bf w}_j)^2
          :label: eqnsubspaceoverlap

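
A small sketch of the subspace overlap on synthetic orthonormal sets; when the :math:`{\bf w}` set spans the whole space, every :math:`{\bf v}_i` lies in it and the overlap is exactly 1:

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 10

# Reference subspace: n = 3 orthonormal vectors (columns of V)
V, _ = np.linalg.qr(rng.standard_normal((dim, 3)))
# Compared set: m = 5 orthonormal vectors (columns of W)
W, _ = np.linalg.qr(rng.standard_normal((dim, 5)))

def subspace_overlap(V, W):
    # overlap(v, w) = (1/n) sum_ij (v_i . w_j)^2
    n = V.shape[1]
    return ((V.T @ W) ** 2).sum() / n

# W spanning the full space gives overlap 1
assert np.isclose(subspace_overlap(V, np.eye(dim)), 1.0)
assert 0.0 <= subspace_overlap(V, W) <= 1.0
```
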
The overlap will increase with increasing :math:`m` and will be 1 when
set :math:`{\bf v}` is a subspace of set :math:`{\bf w}`. For
high-dimensional random diffusion the principal components resemble
cosines, and the eigenvalues are proportional to the index to the power
:math:`-2`. The cosine content of principal component :math:`i` is
defined as:

.. math:: \frac{2}{T}
          \left( \int_0^T \cos\left(\frac{i \pi t}{T}\right) \, p_i(t) \mbox{d} t \right)^2
          \left( \int_0^T p_i^2(t) \mbox{d} t \right)^{-1}
          :label: eqneigenvaluecosine

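
A sketch of the cosine content, discretizing both integrals with the trapezoidal rule (the function name and test signals are illustrative, not from the source): a half cosine over the trajectory length has cosine content 1 for :math:`i = 1`, while a full cosine, being orthogonal to it, scores essentially 0:

```python
import numpy as np

def _trapz(y, dx):
    # trapezoidal rule, written out to stay independent of the numpy version
    return (y[:-1] + y[1:]).sum() * dx / 2.0

def cosine_content(p, i, dt):
    """Cosine content of principal component i sampled with time step dt."""
    T = dt * (len(p) - 1)
    t = dt * np.arange(len(p))
    cos_t = np.cos(i * np.pi * t / T)
    return (2.0 / T) * _trapz(cos_t * p, dt) ** 2 / _trapz(p ** 2, dt)

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
# Half cosine (one half period over the trajectory): cosine content 1
assert abs(cosine_content(np.cos(np.pi * t), 1, dt) - 1.0) < 1e-3
# Full cosine is orthogonal to the i = 1 half cosine: content near 0
assert cosine_content(np.cos(2.0 * np.pi * t), 1, dt) < 1e-3
```
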
When the cosine content of the first few principal components is close
to 1, the largest fluctuations are not connected with the potential, but