Solving linear least squares systems

This page describes how to solve linear least squares systems using Eigen. An overdetermined system of equations, say Ax = b, has no solution in general. In that case, it makes sense to search for the vector x which is closest to being a solution, in the sense that the norm of the difference Ax - b is as small as possible. This x is called the least squares solution (if the Euclidean norm is used).

The three methods discussed on this page are the SVD decomposition, the QR decomposition, and the normal equations. Of these, the SVD decomposition is generally the most accurate but the slowest, the normal equations method is the fastest but least accurate, and the QR decomposition is in between.

Using the SVD decomposition

The solve() method in the BDCSVD class can be directly used to solve linear least squares systems. It is not enough to compute only the singular values (the default for this class); you also need the singular vectors, but the thin SVD decomposition suffices for computing least squares solutions:

Example:
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf A = Eigen::MatrixXf::Random(3, 2);
  std::cout << "Here is the matrix A:\n" << A << std::endl;
  Eigen::VectorXf b = Eigen::VectorXf::Random(3);
  std::cout << "Here is the right hand side b:\n" << b << std::endl;
  std::cout << "The least-squares solution is:\n"
            << A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b) << std::endl;
}
Output:
Here is the matrix A:
  0.68  0.597
-0.211  0.823
 0.566 -0.605
Here is the right hand side b:
 -0.33
 0.536
-0.444
The least-squares solution is:
-0.67
0.314

This example is from the page Linear algebra and decompositions. If you just need to solve the least squares problem and are not interested in the SVD per se, a faster alternative is CompleteOrthogonalDecomposition.
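As a sketch of that alternative (assuming the same A and b as in the example above), the completeOrthogonalDecomposition() method returns a decomposition object whose solve() also yields the least squares solution:

// Minimal sketch: reuses the A and b from the SVD example above.
std::cout << "The least-squares solution is:\n"
          << A.completeOrthogonalDecomposition().solve(b) << std::endl;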

Using the QR decomposition

The solve() method in the QR decomposition classes also computes the least squares solution. There are three QR decomposition classes: HouseholderQR (no pivoting, fast but unstable if your matrix is not full rank), ColPivHouseholderQR (column pivoting, thus a bit slower but more stable) and FullPivHouseholderQR (full pivoting, so slowest and slightly more stable than ColPivHouseholderQR). Here is an example with column pivoting; the other two classes are sketched after the output:

Example:
cout << "The solution using the QR decomposition is:\n"
<< A.colPivHouseholderQr().solve(b) << endl;
Output:
The solution using the QR decomposition is:
-0.67
0.314
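
For comparison, the other two QR classes are used the same way. This is a minimal sketch, assuming the same A and b as above; note that HouseholderQR only returns the least squares solution when A has full rank:

std::cout << "Solution via HouseholderQR:\n"
          << A.householderQr().solve(b) << std::endl;
std::cout << "Solution via FullPivHouseholderQR:\n"
          << A.fullPivHouseholderQr().solve(b) << std::endl;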

Using normal equations

Finding the least squares solution of Ax = b is equivalent to solving the normal equations AᵀAx = Aᵀb. This leads to the following code:

Example:
cout << "The solution using normal equations is:\n"
<< (A.transpose() * A).ldlt().solve(A.transpose() * b) << endl;
Output:
The solution using normal equations is:
-0.67
0.314

This method is usually the fastest, especially when A is "tall and skinny". However, if the matrix A is even mildly ill-conditioned, this is not a good method, because the condition number of AᵀA is the square of the condition number of A. This means that you lose roughly twice as many digits of accuracy using the normal equations, compared with the more stable methods mentioned above.
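
To judge whether the normal equations are safe for a given matrix, you can estimate its condition number as the ratio of the largest to the smallest singular value. The following is a minimal sketch, not part of the original examples; it reuses BDCSVD from above, and computing only the singular values (the default) is enough here:

#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf A = Eigen::MatrixXf::Random(3, 2);
  // Singular values only; no singular vectors are needed for the condition number.
  Eigen::BDCSVD<Eigen::MatrixXf> svd(A);
  float cond = svd.singularValues()(0)
             / svd.singularValues()(svd.singularValues().size() - 1);
  // The condition number of AᵀA is roughly cond * cond, so a large cond
  // suggests preferring the QR or SVD methods above.
  std::cout << "Condition number of A: " << cond << std::endl;
}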