Matrix functions module

This module aims to provide various methods for the computation of matrix functions.

Classes

class  Eigen::MatrixComplexPowerReturnValue< class >
 Proxy for the matrix power of some matrix (expression).
 
struct  Eigen::MatrixExponentialReturnValue< class >
 Proxy for the matrix exponential of some matrix (expression).
 
class  Eigen::MatrixFunctionReturnValue< class >
 Proxy for the matrix function of some matrix (expression).
 
class  Eigen::MatrixLogarithmReturnValue< class >
 Proxy for the matrix logarithm of some matrix (expression).
 
class  Eigen::MatrixPower< MatrixType >
 Class for computing matrix powers.
 
class  Eigen::MatrixPowerAtomic< MatrixType >
 Class for computing matrix powers.
 
class  Eigen::MatrixPowerParenthesesReturnValue< MatrixType >
 Proxy for the matrix power of some matrix.
 
class  Eigen::MatrixPowerReturnValue< class >
 Proxy for the matrix power of some matrix (expression).
 
class  Eigen::MatrixSquareRootReturnValue< class >
 Proxy for the matrix square root of some matrix (expression).
 

Functions

template<typename MatrixType , typename ResultType >
void Eigen::matrix_sqrt_quasi_triangular (const MatrixType &arg, ResultType &result)
 Compute matrix square root of quasi-triangular matrix.
 
template<typename MatrixType , typename ResultType >
void Eigen::matrix_sqrt_triangular (const MatrixType &arg, ResultType &result)
 Compute matrix square root of triangular matrix.
 

Detailed Description

This module aims to provide various methods for the computation of matrix functions.

To use this module, add

#include <unsupported/Eigen/MatrixFunctions>

at the start of your source file.

This module defines the following MatrixBase methods: cos(), cosh(), exp(), log(), matrixFunction(), pow(), sin(), sinh(), and sqrt().

These methods are the main entry points to this module.

Matrix functions are defined as follows. Suppose that \( f \) is an entire function (that is, a function on the complex plane that is everywhere complex differentiable). Then its Taylor series

\[ f(0) + f'(0) x + \frac{f''(0)}{2} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots \]

converges to \( f(x) \). In this case, we can define the matrix function by the same series:

\[ f(M) = f(0) + f'(0) M + \frac{f''(0)}{2} M^2 + \frac{f'''(0)}{3!} M^3 + \cdots \]
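For example, if \( M = V D V^{-1} \) is diagonalizable, the series reduces to applying \( f \) to the eigenvalues:

\[ f(M) = V f(D) V^{-1}, \qquad f(\operatorname{diag}(\lambda_1, \ldots, \lambda_n)) = \operatorname{diag}(f(\lambda_1), \ldots, f(\lambda_n)). \]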

MatrixBase methods defined in the MatrixFunctions module

The remainder of the page documents the following MatrixBase methods which are defined in the MatrixFunctions module.

MatrixBase::cos()

Compute the matrix cosine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::cos() const
Parameters
[in] M: a square matrix.
Returns
expression representing \( \cos(M) \).

This function computes the matrix cosine. Use ArrayBase::cos() for computing the entry-wise cosine.

The implementation calls matrixFunction() with StdStemFunctions::cos().

See also
sin() for an example.
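
For illustration, here is a minimal sketch (not part of the original documentation) contrasting the matrix cosine with the entry-wise cosine:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd A = MatrixXd::Random(3,3);
  MatrixXd C  = A.cos();                    // matrix cosine, via matrixFunction()
  MatrixXd Ce = A.array().cos().matrix();   // entry-wise cosine, via ArrayBase::cos()
  std::cout << "cos(A) = \n" << C << "\n\n";
  std::cout << "entry-wise cos = \n" << Ce << "\n";
}

The two results differ unless A is diagonal.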

MatrixBase::cosh()

Compute the matrix hyperbolic cosine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::cosh() const
Parameters
[in] M: a square matrix.
Returns
expression representing \( \cosh(M) \)

This function calls matrixFunction() with StdStemFunctions::cosh().

See also
sinh() for an example.

MatrixBase::exp()

Compute the matrix exponential.

const MatrixExponentialReturnValue<Derived> MatrixBase<Derived>::exp() const
Parameters
[in] M: matrix whose exponential is to be computed.
Returns
expression representing the matrix exponential of M.

The matrix exponential of \( M \) is defined by

\[ \exp(M) = \sum_{k=0}^\infty \frac{M^k}{k!}. \]

The matrix exponential can be used to solve linear ordinary differential equations: the solution of \( y' = My \) with the initial condition \( y(0) = y_0 \) is given by \( y(t) = \exp(tM) y_0 \).
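
For instance, a minimal sketch (the matrix, time, and initial condition below are arbitrary choices, not from the original text) evaluating the solution at time t:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  Matrix2d M;
  M << 0, 1,
      -1, 0;                      // y'' = -y rewritten as a first-order system
  Vector2d y0(1.0, 0.0);          // initial condition y(0) = y0
  double t = 1.5;
  Matrix2d E = (t * M).exp();     // exp(tM)
  std::cout << "y(" << t << ") = " << (E * y0).transpose() << "\n";
}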

The matrix exponential is different from applying the exp function to all the entries in the matrix. Use ArrayBase::exp() if you want to do the latter.

The cost of the computation is approximately \( 20 n^3 \) for matrices of size \( n \). The number 20 depends weakly on the norm of the matrix.

The matrix exponential is computed using the scaling-and-squaring method combined with Padé approximation. The matrix is first rescaled, then the exponential of the reduced matrix is computed with a Padé approximant, and finally the rescaling is undone by repeated squaring. The degree of the Padé approximant is chosen such that the approximation error is less than the round-off error. However, errors may accumulate during the squaring phase.

Details of the algorithm can be found in: Nicholas J. Higham, "The scaling and squaring method for the matrix exponential revisited," SIAM J. Matrix Anal. Applic., 26:1179–1193, 2005.
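
The following is only a rough, hedged illustration of the idea, not Eigen's implementation: it substitutes a truncated Taylor series for the Padé approximant, uses a crude scaling criterion, and the helper name expSketch is made up for this sketch.

#include <Eigen/Dense>
#include <cmath>

// Hypothetical helper, for illustration only: scale M down, approximate the
// exponential of the small matrix, then undo the scaling by repeated squaring.
Eigen::MatrixXd expSketch(const Eigen::MatrixXd& M)
{
  // Choose s so that ||M / 2^s|| is small (crude criterion; Eigen's choice
  // is tied to the Padé approximation error bounds).
  double nrm = M.norm();
  int s = (nrm > 1.0) ? static_cast<int>(std::ceil(std::log2(nrm))) + 1 : 0;
  Eigen::MatrixXd A = M / std::pow(2.0, s);

  // Truncated Taylor series for exp(A); Eigen uses a Padé approximant instead.
  Eigen::MatrixXd E = Eigen::MatrixXd::Identity(M.rows(), M.cols());
  Eigen::MatrixXd term = E;
  for (int k = 1; k <= 12; ++k) {
    term = term * A / double(k);
    E += term;
  }

  // Undo the scaling: exp(M) = (exp(M / 2^s))^(2^s).
  for (int i = 0; i < s; ++i) E = E * E;
  return E;
}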

Example: The following program checks that

\[ \exp \left[ \begin{array}{ccc} 0 & \frac14\pi & 0 \\ -\frac14\pi & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] = \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right]. \]

This corresponds to a rotation of \( \frac14\pi \) radians around the z-axis.

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  const double pi = std::acos(-1.0);
  MatrixXd A(3,3);
  A << 0,    -pi/4, 0,
       pi/4,  0,    0,
       0,     0,    0;
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix exponential of A is:\n" << A.exp() << "\n\n";
}

Output:

The matrix A is:
        0 -0.785398         0
 0.785398         0         0
        0         0         0

The matrix exponential of A is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

Note
M has to be a matrix of float, double, long double, complex<float>, complex<double>, or complex<long double>.

MatrixBase::log()

Compute the matrix logarithm.

const MatrixLogarithmReturnValue<Derived> MatrixBase<Derived>::log() const
Parameters
[in] M: invertible matrix whose logarithm is to be computed.
Returns
expression representing the matrix logarithm of M.

The matrix logarithm of \( M \) is a matrix \( X \) such that \( \exp(X) = M \) where exp denotes the matrix exponential. As for the scalar logarithm, the equation \( \exp(X) = M \) may have multiple solutions; this function returns a matrix whose eigenvalues have imaginary part in the interval \( (-\pi,\pi] \).

The matrix logarithm is different from applying the log function to all the entries in the matrix. Use ArrayBase::log() if you want to do the latter.

In the real case, the matrix \( M \) should be invertible and it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed). In the complex case, it only needs to be invertible.

This function computes the matrix logarithm using the Schur-Parlett algorithm as implemented by MatrixBase::matrixFunction(). The logarithm of an atomic block is computed by MatrixLogarithmAtomic, which uses direct computation for 1-by-1 and 2-by-2 blocks and an inverse scaling-and-squaring algorithm for bigger blocks, with the square roots computed by MatrixBase::sqrt().

Details of the algorithm can be found in Section 11.6.2 of: Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008. ISBN 978-0-898716-46-7.

Example: The following program checks that

\[ \log \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right] = \left[ \begin{array}{ccc} 0 & \frac14\pi & 0 \\ -\frac14\pi & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]. \]

This corresponds to a rotation of \( \frac14\pi \) radians around the z-axis. This is the inverse of the example used in the documentation of exp().

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  using std::sqrt;
  MatrixXd A(3,3);
  A << 0.5*sqrt(2.0), -0.5*sqrt(2.0), 0,
       0.5*sqrt(2.0),  0.5*sqrt(2.0), 0,
       0,              0,             1;
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix logarithm of A is:\n" << A.log() << "\n";
}

Output:

The matrix A is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

The matrix logarithm of A is:
-8.86512e-17    -0.785398            0
    0.785398 -8.86512e-17            0
           0            0            0
Note
M has to be a matrix of float, double, long double, complex<float>, complex<double>, or complex<long double>.
See also
MatrixBase::exp(), MatrixBase::matrixFunction(), class MatrixLogarithmAtomic, MatrixBase::sqrt().

MatrixBase::pow()

Compute the matrix raised to an arbitrary real power.

const MatrixPowerReturnValue<Derived> MatrixBase<Derived>::pow(RealScalar p) const
Parameters
[in] M: base of the matrix power, should be a square matrix.
[in] p: exponent of the matrix power.

The matrix power \( M^p \) is defined as \( \exp(p \log(M)) \), where exp denotes the matrix exponential, and log denotes the matrix logarithm. This is different from raising all the entries in the matrix to the p-th power. Use ArrayBase::pow() if you want to do the latter.

If p is complex, the scalar type of M should be the type of p. \( M^p \) simply evaluates to \( \exp(p \log(M)) \). Therefore, the matrix \( M \) should meet the conditions to be an argument of the matrix logarithm.
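
A minimal sketch of the complex-exponent case (the matrix and exponent below are arbitrary choices, not from the original text); note that the scalar type of A matches the type of p:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  Matrix2cd A = 2.0 * Matrix2cd::Identity();   // complex scalar type, matching p
  std::complex<double> p(0.5, 0.25);
  std::cout << A.pow(p) << "\n";               // evaluates exp(p * log(A))
}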

If p is real, it is cast to the real scalar type of M. Then this function computes the matrix power using the Schur-Padé algorithm as implemented by class MatrixPower. The exponent is split into an integral part and a fractional part, where the fractional part lies in the interval \( (-1, 1) \). The main diagonal and the first super-diagonal are computed directly.

If M is singular with a semisimple zero eigenvalue and p is positive, the Schur factor \( T \) is reordered with Givens rotations, i.e.

\[ T = \left[ \begin{array}{cc} T_1 & T_2 \\ 0 & 0 \end{array} \right] \]

where \( T_1 \) is invertible. Then \( T^p \) is given by

\[ T^p = \left[ \begin{array}{cc} T_1^p & T_1^{-1} T_1^p T_2 \\ 0 & 0 \end{array} \right]. \]

Warning
Fractional power of a matrix with a non-semisimple zero eigenvalue is not well-defined. An assertion failure is raised to guard against inaccurate results, e.g.
#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>

int main()
{
  Eigen::Matrix4d A;
  A << 0, 0, 2, 3,
       0, 0, 4, 5,
       0, 0, 6, 7,
       0, 0, 8, 9;
  std::cout << A.pow(0.37) << std::endl;
  // The 1 makes the eigenvalue 0 non-semisimple.
  A.coeffRef(0, 1) = 1;
  // This fails if EIGEN_NO_DEBUG is undefined.
  std::cout << A.pow(0.37) << std::endl;
  return 0;
}

Details of the algorithm can be found in: Nicholas J. Higham and Lijing Lin, "A Schur-Padé algorithm for fractional powers of a matrix," SIAM J. Matrix Anal. Applic., 32(3):1056–1078, 2011.

Example: The following program checks that

\[ \left[ \begin{array}{ccc} \cos1 & -\sin1 & 0 \\ \sin1 & \cos1 & 0 \\ 0 & 0 & 1 \end{array} \right]^{\frac14\pi} = \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right]. \]

This corresponds to \( \frac14\pi \) rotations of 1 radian around the z-axis.

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  const double pi = std::acos(-1.0);
  Matrix3d A;
  A << std::cos(1.0), -std::sin(1.0), 0,
       std::sin(1.0),  std::cos(1.0), 0,
       0,              0,             1;
  std::cout << "The matrix A is:\n" << A << "\n\n"
               "The matrix power A^(pi/4) is:\n" << A.pow(pi/4) << std::endl;
  return 0;
}

Output:

The matrix A is:
 0.540302 -0.841471         0
 0.841471  0.540302         0
        0         0         1

The matrix power A^(pi/4) is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

MatrixBase::pow() is user-friendly. However, there are some circumstances under which you should use class MatrixPower directly. MatrixPower can save the result of the Schur decomposition, so it is better suited for computing various powers of the same matrix.

Example:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  Matrix4cd A = Matrix4cd::Random();
  MatrixPower<Matrix4cd> Apow(A);
  std::cout << "The matrix A is:\n" << A << "\n\n"
               "A^3.1 is:\n" << Apow(3.1) << "\n\n"
               "A^3.3 is:\n" << Apow(3.3) << "\n\n"
               "A^3.7 is:\n" << Apow(3.7) << "\n\n"
               "A^3.9 is:\n" << Apow(3.9) << std::endl;
  return 0;
}

Output:

The matrix A is:
 (-0.211234,0.680375)   (0.10794,-0.444451)   (0.434594,0.271423) (-0.198111,-0.686642)
   (0.59688,0.566198) (0.257742,-0.0452059)  (0.213938,-0.716795) (-0.782382,-0.740419)
 (-0.604897,0.823295) (0.0268018,-0.270431) (-0.514226,-0.967399)  (-0.563486,0.997849)
 (0.536459,-0.329554)    (0.83239,0.904459)  (0.608354,-0.725537)  (0.678224,0.0258648)

A^3.1 is:
   (2.80575,-0.607662) (-1.16847,-0.00660555)    (-0.760385,1.01461)   (-0.38073,-0.106512)
     (1.4041,-3.61891)     (1.00481,0.186263)   (-0.163888,0.449419)   (-0.388981,-1.22629)
   (-2.07957,-1.58136)     (0.825866,2.25962)     (5.09383,0.155736)    (0.394308,-1.63034)
  (-0.818997,0.671026)  (2.11069,-0.00768024)    (-1.37876,0.140165)    (2.50512,-0.854429)

A^3.3 is:
  (2.83571,-0.238717) (-1.48174,-0.0615217)  (-0.0544396,1.68092) (-0.292699,-0.621726)
    (2.0521,-3.58316)    (0.87894,0.400548)  (0.738072,-0.121242)   (-1.07957,-1.63492)
  (-3.00106,-1.10558)     (1.52205,1.92407)    (5.29759,-1.83562)  (-0.532038,-1.50253)
  (-0.491353,-0.4145)     (2.5761,0.481286)  (-1.21994,0.0367069)    (2.67112,-1.06331)

A^3.7 is:
     (1.42126,0.33362)   (-1.39486,-0.560486)      (1.44968,2.47066)   (-0.324079,-1.75879)
    (2.65301,-1.82427)   (0.357333,-0.192429)      (2.01017,-1.4791)    (-2.71518,-2.35892)
   (-3.98544,0.964861)     (2.26033,0.554254)     (3.18211,-5.94352)    (-2.22888,0.128951)
   (0.944969,-2.14683)      (3.31345,1.66075) (-0.0623743,-0.848324)        (2.3897,-1.863)

A^3.9 is:
 (0.0720766,0.378685) (-0.931961,-0.978624)      (1.9855,2.34105)  (-0.530547,-2.17664)
  (2.40934,-0.265286)  (0.0299975,-1.08827)    (1.98974,-2.05886)   (-3.45767,-2.50235)
    (-3.71666,2.3874)        (2.054,-0.303)   (0.844348,-7.29588)    (-2.59136,1.57689)
   (1.87645,-2.38798)     (3.52111,2.10508)    (0.799055,-1.6122)    (1.93452,-2.44408)
Note
M has to be a matrix of float, double, long double, complex<float>, complex<double>, or complex<long double>.
See also
MatrixBase::exp(), MatrixBase::log(), class MatrixPower.

MatrixBase::matrixFunction()

Compute a matrix function.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::matrixFunction(typename internal::stem_function<typename internal::traits<Derived>::Scalar>::type f) const
Parameters
[in] M: argument of matrix function, should be a square matrix.
[in] f: an entire function; f(x,n) should compute the n-th derivative of f at x.
Returns
expression representing f applied to M.

Suppose that M is a matrix whose entries have type Scalar. Then, the second argument, f, should be a function with prototype

ComplexScalar f(ComplexScalar, int)

where ComplexScalar = std::complex<Scalar> if Scalar is real (e.g., float or double) and ComplexScalar = Scalar if Scalar is complex. The return value of f(x,n) should be \( f^{(n)}(x) \), the n-th derivative of f at x.

This routine uses the algorithm described in: Philip Davies and Nicholas J. Higham, "A Schur-Parlett algorithm for computing matrix functions", SIAM J. Matrix Anal. Applic., 25:464–485, 2003.

The actual work is done by the MatrixFunction class.

Example: The following program checks that

\[ \exp \left[ \begin{array}{ccc} 0 & \frac14\pi & 0 \\ -\frac14\pi & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] = \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right]. \]

This corresponds to a rotation of \( \frac14\pi \) radians around the z-axis. This is the same example as used in the documentation of exp().

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

std::complex<double> expfn(std::complex<double> x, int)
{
  return std::exp(x);
}

int main()
{
  const double pi = std::acos(-1.0);
  MatrixXd A(3,3);
  A << 0,    -pi/4, 0,
       pi/4,  0,    0,
       0,     0,    0;
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix exponential of A is:\n"
            << A.matrixFunction(expfn) << "\n\n";
}

Output:

The matrix A is:
        0 -0.785398         0
 0.785398         0         0
        0         0         0

The matrix exponential of A is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

Note that the function expfn is defined for complex numbers x, even though the matrix A is over the reals. Instead of expfn, we could also have used StdStemFunctions::exp:

MatrixXd B = A.matrixFunction(StdStemFunctions<std::complex<double> >::exp);

MatrixBase::sin()

Compute the matrix sine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::sin() const
Parameters
[in] M: a square matrix.
Returns
expression representing \( \sin(M) \).

This function computes the matrix sine. Use ArrayBase::sin() for computing the entry-wise sine.

The implementation calls matrixFunction() with StdStemFunctions::sin().

Example:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd A = MatrixXd::Random(3,3);
  std::cout << "A = \n" << A << "\n\n";
  MatrixXd sinA = A.sin();
  std::cout << "sin(A) = \n" << sinA << "\n\n";
  MatrixXd cosA = A.cos();
  std::cout << "cos(A) = \n" << cosA << "\n\n";
  // The matrix functions satisfy sin^2(A) + cos^2(A) = I,
  // like the scalar functions.
  std::cout << "sin^2(A) + cos^2(A) = \n" << sinA*sinA + cosA*cosA << "\n\n";
}

Output:

A = 
 0.680375   0.59688 -0.329554
-0.211234  0.823295  0.536459
 0.566198 -0.604897 -0.444451

sin(A) = 
 0.679919    0.4579 -0.400612
-0.227278  0.821913    0.5358
 0.570141 -0.676728 -0.462398

cos(A) = 
  0.927728  -0.530361  -0.110482
0.00969246   0.889022  -0.137604
 -0.132574   -0.04289    1.16475

sin^2(A) + cos^2(A) = 
           1  2.22045e-16  8.88178e-16
 2.77556e-16            1  1.11022e-16
 1.66533e-16 -6.52256e-16            1

MatrixBase::sinh()

Compute the matrix hyperbolic sine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::sinh() const
Parameters
[in] M: a square matrix.
Returns
expression representing \( \sinh(M) \)

This function calls matrixFunction() with StdStemFunctions::sinh().

Example:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXf A = MatrixXf::Random(3,3);
  std::cout << "A = \n" << A << "\n\n";
  MatrixXf sinhA = A.sinh();
  std::cout << "sinh(A) = \n" << sinhA << "\n\n";
  MatrixXf coshA = A.cosh();
  std::cout << "cosh(A) = \n" << coshA << "\n\n";
  // The matrix functions satisfy cosh^2(A) - sinh^2(A) = I,
  // like the scalar functions.
  std::cout << "cosh^2(A) - sinh^2(A) = \n" << coshA*coshA - sinhA*sinhA << "\n\n";
}

Output:

A = 
 0.680375   0.59688 -0.329554
-0.211234  0.823295  0.536459
 0.566198 -0.604897 -0.444451

sinh(A) = 
 0.682534  0.739989 -0.256871
-0.194928  0.826512  0.537546
 0.562585  -0.53163 -0.425199

cosh(A) = 
    1.07817    0.567068    0.132125
-0.00418615     1.11649    0.135361
   0.128891    0.065999    0.851201

cosh^2(A) - sinh^2(A) = 
           1            0 -2.98023e-08
 1.22935e-07            1  5.96046e-08
-1.04308e-07 -4.61936e-07            1

MatrixBase::sqrt()

Compute the matrix square root.

const MatrixSquareRootReturnValue<Derived> MatrixBase<Derived>::sqrt() const
Parameters
[in] M: invertible matrix whose square root is to be computed.
Returns
expression representing the matrix square root of M.

The matrix square root of \( M \) is the matrix \( M^{1/2} \) whose square is the original matrix; so if \( S = M^{1/2} \) then \( S^2 = M \). This is different from taking the square root of all the entries in the matrix; use ArrayBase::sqrt() if you want to do the latter.

In the real case, the matrix \( M \) should be invertible and it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed). In that case, the matrix has a square root which is also real, and this is the square root computed by this function.

The matrix square root is computed by first reducing the matrix to quasi-triangular form with the real Schur decomposition. The square root of the quasi-triangular matrix can then be computed directly. The cost is approximately \( 25 n^3 \) real flops for the real Schur decomposition and \( 3\frac13 n^3 \) real flops for the remainder (though the computation time in practice is likely more than this indicates).

Details of the algorithm can be found in: Nicholas J. Higham, "Computing real square roots of a real matrix", Linear Algebra Appl., 88/89:405–430, 1987.

If the matrix is positive-definite symmetric, then the square root is also positive-definite symmetric. In this case, it is best to use SelfAdjointEigenSolver::operatorSqrt() to compute it.
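
A minimal sketch of that alternative (the matrix below is an arbitrary symmetric positive-definite example, not from the original text):

#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::Matrix2d A;
  A << 2, 1,
       1, 2;                                        // symmetric positive-definite
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> es(A);
  Eigen::Matrix2d S = es.operatorSqrt();            // positive-definite square root
  std::cout << "S*S = \n" << S * S << "\n";         // recovers A
}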

In the complex case, the matrix \( M \) should be invertible; this is a restriction of the algorithm. The square root computed by this algorithm is the one whose eigenvalues have an argument in the interval \( (-\frac12\pi, \frac12\pi] \). This is the usual branch cut.

The computation is the same as in the real case, except that the complex Schur decomposition is used to reduce the matrix to a triangular matrix. The theoretical cost is the same. Details are in: Åke Björck and Sven Hammarling, "A Schur method for the square root of a matrix", Linear Algebra Appl., 52/53:127–140, 1983.

Example: The following program checks that the square root of

\[ \left[ \begin{array}{cc} \cos(\frac13\pi) & -\sin(\frac13\pi) \\ \sin(\frac13\pi) & \cos(\frac13\pi) \end{array} \right], \]

corresponding to a rotation over 60 degrees, is a rotation over 30 degrees:

\[ \left[ \begin{array}{cc} \cos(\frac16\pi) & -\sin(\frac16\pi) \\ \sin(\frac16\pi) & \cos(\frac16\pi) \end{array} \right]. \]

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  const double pi = std::acos(-1.0);
  MatrixXd A(2,2);
  A << std::cos(pi/3), -std::sin(pi/3),
       std::sin(pi/3),  std::cos(pi/3);
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix square root of A is:\n" << A.sqrt() << "\n\n";
  std::cout << "The square of the last matrix is:\n" << A.sqrt() * A.sqrt() << "\n";
}

Output:

The matrix A is:
      0.5 -0.866025
 0.866025       0.5

The matrix square root of A is:
0.866025     -0.5
     0.5 0.866025

The square of the last matrix is:
      0.5 -0.866025
 0.866025       0.5
See also
class RealSchur, class ComplexSchur, class MatrixSquareRoot, SelfAdjointEigenSolver::operatorSqrt().

Function Documentation

◆ matrix_sqrt_quasi_triangular()

template<typename MatrixType , typename ResultType >
void Eigen::matrix_sqrt_quasi_triangular (const MatrixType &arg, ResultType &result)

Compute matrix square root of quasi-triangular matrix.

Template Parameters
MatrixType: type of arg, the argument of matrix square root, expected to be an instantiation of the Matrix class template.
ResultType: type of result, where the result is to be stored.
Parameters
[in] arg: argument of matrix square root.
[out] result: matrix square root of upper Hessenberg part of arg.

This function computes the square root of the upper quasi-triangular matrix stored in the upper Hessenberg part of arg. Only the upper Hessenberg part of result is updated, the rest is not touched. See MatrixBase::sqrt() for details on how this computation is implemented.

See also
MatrixSquareRoot, MatrixSquareRootQuasiTriangular

Definition at line 182 of file MatrixSquareRoot.h.

{
  eigen_assert(arg.rows() == arg.cols());
  result.resize(arg.rows(), arg.cols());
  // Compute the diagonal blocks first, then the off-diagonal blocks.
  internal::matrix_sqrt_quasi_triangular_diagonal(arg, result);
  internal::matrix_sqrt_quasi_triangular_off_diagonal(arg, result);
}

◆ matrix_sqrt_triangular()

template<typename MatrixType , typename ResultType >
void Eigen::matrix_sqrt_triangular (const MatrixType &arg, ResultType &result)

Compute matrix square root of triangular matrix.

Template Parameters
MatrixType: type of arg, the argument of matrix square root, expected to be an instantiation of the Matrix class template.
ResultType: type of result, where the result is to be stored.
Parameters
[in] arg: argument of matrix square root.
[out] result: matrix square root of upper triangular part of arg.

Only the upper triangular part (including the diagonal) of result is updated, the rest is not touched. See MatrixBase::sqrt() for details on how this computation is implemented.

See also
MatrixSquareRoot, MatrixSquareRootQuasiTriangular

Definition at line 206 of file MatrixSquareRoot.h.

{
  using std::sqrt;
  typedef typename MatrixType::Scalar Scalar;

  eigen_assert(arg.rows() == arg.cols());

  // Compute square root of arg and store it in upper triangular part of result.
  // This uses that the square root of triangular matrices can be computed directly.
  result.resize(arg.rows(), arg.cols());
  for (Index i = 0; i < arg.rows(); i++) {
    result.coeffRef(i,i) = sqrt(arg.coeff(i,i));
  }
  for (Index j = 1; j < arg.cols(); j++) {
    for (Index i = j-1; i >= 0; i--) {
      // if i = j-1, then segment has length 0 so tmp = 0
      Scalar tmp = (result.row(i).segment(i+1,j-i-1) * result.col(j).segment(i+1,j-i-1)).value();
      // denominator may be zero if original matrix is singular
      result.coeffRef(i,j) = (arg.coeff(i,j) - tmp) / (result.coeff(i,i) + result.coeff(j,j));
    }
  }
}