Linear algebra package

Matrix decomposition

The general interface for matrix decomposition is given by the function:
  template<class Mthd, class Mat> Mat decomp(const Mat & m);

The first parameter specifies the type of decomposition. It can be:

• LU for Lower-Upper triangular decomposition.
• Bareiss for Lower-Upper triangular decomposition based on the Bareiss method. It applies the following pivoting scheme:

  a^{(k+1)}_{i,j} = ( a^{(k)}_{k,k} a^{(k)}_{i,j} - a^{(k)}_{i,k} a^{(k)}_{k,j} ) / a^{(k-1)}_{k-1,k-1},

where A^{(k)} = (a^{(k)}_{i,j}) is the transformed matrix at step k and A^{(0)} = A is the initial matrix. At the end of this process, after some permutation of the rows, the matrix is upper triangular. If A is a square matrix, the last entry on the diagonal at the end of the process is the determinant of A. Moreover, if A has its coefficients in a ring, the same holds for the matrices A^{(k)}, since the division above is exact.

• QR for Orthogonal-Upper triangular decomposition.
Here are examples of its use:
  matrix_t T = decomp<LU>(M);
  matrix_t R = decomp<QR>(M);

synaps/linalg/decomp.h
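The Bareiss update above can be sketched in a few lines. The helper below, `bareiss_det`, is a hypothetical stand-alone illustration (not the package code): it runs fraction-free elimination on an integer matrix, where every division is exact, so all intermediate entries stay integers, and the last diagonal entry is the determinant.

```cpp
#include <utility>
#include <vector>

// Fraction-free (Bareiss) elimination on a square integer matrix.
// Each division is exact, so every intermediate entry stays an integer;
// the last diagonal entry (times the row-swap sign) is the determinant.
long long bareiss_det(std::vector<std::vector<long long>> a) {
    int n = a.size();
    long long prev = 1;                  // a^{(k-1)}_{k-1,k-1}, initially 1
    long long sign = 1;
    for (int k = 0; k < n - 1; ++k) {
        if (a[k][k] == 0) {              // pivot by a row swap if needed
            int p = k + 1;
            while (p < n && a[p][k] == 0) ++p;
            if (p == n) return 0;        // whole column is zero: singular
            std::swap(a[k], a[p]);
            sign = -sign;
        }
        for (int i = k + 1; i < n; ++i)
            for (int j = k + 1; j < n; ++j)
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) / prev;
        prev = a[k][k];
    }
    return n == 0 ? 1 : sign * a[n - 1][n - 1];
}
```

For example, `bareiss_det({{1,2},{3,4}})` evaluates to -2.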

Solving a linear system

Several methods are available for solving linear systems

  A X = B,

where A is an m × n matrix and B is an m × p matrix. The result X is an n × p matrix. One can use the following methods:

• LU for solving using LU decomposition with column pivoting.
• LS for least square solving.
Here are examples of its use:
  matrix_t B = ...;
  matrix_t X = solve<LU>(A,B);
  matrix_t Y = solve<LS>(A,B);

synaps/linalg/solve.h
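A minimal sketch of what LU solving with column pivoting does, assuming plain `std::vector` matrices (the helper `lu_solve` is illustrative, not the package's `solve<LU>` implementation): Gaussian elimination with partial pivoting on the augmented system, followed by back substitution.

```cpp
#include <cmath>
#include <utility>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Solve A X = B for a square n x n matrix A and an n x p right-hand side B,
// by Gaussian elimination with column (partial) pivoting.
Mat lu_solve(Mat a, Mat b) {
    int n = a.size(), p = b[0].size();
    for (int k = 0; k < n; ++k) {
        int piv = k;                       // largest pivot in column k
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(a[i][k]) > std::fabs(a[piv][k])) piv = i;
        std::swap(a[k], a[piv]);
        std::swap(b[k], b[piv]);
        for (int i = k + 1; i < n; ++i) {  // eliminate below the pivot
            double m = a[i][k] / a[k][k];
            for (int j = k; j < n; ++j) a[i][j] -= m * a[k][j];
            for (int j = 0; j < p; ++j) b[i][j] -= m * b[k][j];
        }
    }
    Mat x(n, std::vector<double>(p));
    for (int i = n - 1; i >= 0; --i)       // back substitution
        for (int j = 0; j < p; ++j) {
            double s = b[i][j];
            for (int k2 = i + 1; k2 < n; ++k2) s -= a[i][k2] * x[k2][j];
            x[i][j] = s / a[i][i];
        }
    return x;
}
```

For instance, solving {{2,1},{1,3}} X = {{3},{4}} yields X = {{1},{1}}.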

Determinant

Different methods for computing the determinant of a matrix are described here: the Gauss, Bareiss and Berkowitz methods.

• LU for the Gauss method, which uses classical pivoting techniques (i.e. LU decomposition) to compute the determinant.
• Bareiss for the Bareiss method.
• Berkowitz for the Berkowitz method, which is based on the following recursion formula, derived from the Cayley-Hamilton identity: writing A_r for the r-th leading principal submatrix of A, with last row (R, a_{r,r}) and last column (S^T, a_{r,r})^T, the coefficient vector c_r of the characteristic polynomial of A_r satisfies

  c_r = T_r c_{r-1},

where T_r is the (r+1) × r lower triangular Toeplitz matrix whose first column is (1, -a_{r,r}, -R S, -R A_{r-1} S, -R A_{r-1}^2 S, ...).
• A mixed strategy, which expands the determinant along the first row and applies the Bareiss method to the subdeterminants.
synaps/linalg/Det.h
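The Berkowitz recursion is division-free, so it applies verbatim to matrices over any commutative ring. Below is an illustrative sketch over the integers; the helpers `berkowitz_charpoly` and `berkowitz_det` are hypothetical, not the package's Det interface.

```cpp
#include <vector>

using Vec = std::vector<long long>;
using Mat = std::vector<Vec>;

// Samuelson-Berkowitz: division-free computation of the characteristic
// polynomial det(lambda*I - A), coefficients from the leading (monic)
// term down to the constant term.
Vec berkowitz_charpoly(const Mat& A) {
    int n = A.size();
    Vec c{1};                               // char poly of the 0x0 matrix
    for (int r = 1; r <= n; ++r) {
        // First column of the Toeplitz matrix T_r:
        // (1, -a_{r-1,r-1}, -R S, -R A_{r-1} S, ...), with R the last row
        // and S the last column of the r-th leading principal submatrix.
        Vec t(r + 1);
        t[0] = 1;
        t[1] = -A[r - 1][r - 1];
        Vec w(r - 1);                       // w = (A_{r-1})^k S
        for (int i = 0; i < r - 1; ++i) w[i] = A[i][r - 1];
        for (int k = 2; k <= r; ++k) {
            long long dot = 0;              // t[k] = -R w
            for (int i = 0; i < r - 1; ++i) dot += A[r - 1][i] * w[i];
            t[k] = -dot;
            if (k == r) break;
            Vec w2(r - 1, 0);               // w <- A_{r-1} w
            for (int i = 0; i < r - 1; ++i)
                for (int j = 0; j < r - 1; ++j) w2[i] += A[i][j] * w[j];
            w = w2;
        }
        Vec c2(r + 1, 0);                   // c <- T_r c  (Toeplitz product)
        for (int i = 0; i <= r; ++i)
            for (int j = 0; j < r && j <= i; ++j) c2[i] += t[i - j] * c[j];
        c = c2;
    }
    return c;
}

// det(A) = (-1)^n * (constant coefficient of det(lambda*I - A))
long long berkowitz_det(const Mat& A) {
    int n = A.size();
    Vec c = berkowitz_charpoly(A);
    return (n % 2 == 0) ? c[n] : -c[n];
}
```

For example, `berkowitz_charpoly({{1,2},{3,4}})` gives (1, -5, -2), i.e. λ² - 5λ - 2.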

SVD

 Theorem: [GLVL96] For a real matrix A of size m × n, there exist two orthogonal matrices U = [u_1, ..., u_m] and V = [v_1, ..., v_n] such that

  U^T A V = diag(σ_1, ..., σ_p),  p = min(m, n),  σ_1 ≥ ... ≥ σ_p ≥ 0.

The scalar σ_i is called the i-th singular value of A, and u_i and v_i are respectively the i-th left and right singular vectors.

 Definition: For a matrix A and a threshold ε > 0, the numerical rank of A is defined by:

  rank(A, ε) = min { rank(B) : ||A - B||_2 ≤ ε }.

 Theorem: [GLVL96] If A is of rank r and k < r, then

  min_{rank(B) = k} ||A - B||_2 = σ_{k+1}.

The singular value decomposition of a matrix A is the decomposition of the form

  A = U Σ V^T,

where U and V are orthogonal matrices, and Σ = diag(σ_1, ..., σ_p) is a diagonal matrix such that σ_1 ≥ σ_2 ≥ ... ≥ σ_p ≥ 0. The values σ_i are called the singular values of A.

  vector_t s = Svd(m);

synaps/linalg/svd.h
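One way to picture what Svd returns: the singular values of A are the square roots of the eigenvalues of A^T A. The sketch below is a hypothetical helper, not the package's LAPACK-backed Svd; it diagonalises the Gram matrix with cyclic Jacobi rotations, which is numerically cruder than a real SVD (those work on A directly).

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Singular values of an m x n real matrix, as the square roots of the
// eigenvalues of the Gram matrix G = A^T A, computed by cyclic Jacobi.
std::vector<double> singular_values(const Mat& a) {
    int m = a.size(), n = a[0].size();
    Mat g(n, std::vector<double>(n, 0.0));   // G = A^T A
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < m; ++k) g[i][j] += a[k][i] * a[k][j];
    for (int sweep = 0; sweep < 50; ++sweep) // cyclic Jacobi sweeps
        for (int p = 0; p < n - 1; ++p)
            for (int q = p + 1; q < n; ++q) {
                if (std::fabs(g[p][q]) < 1e-15) continue;
                // rotation angle annihilating g[p][q]
                double th = 0.5 * std::atan2(2 * g[p][q], g[q][q] - g[p][p]);
                double c = std::cos(th), s = std::sin(th);
                for (int k = 0; k < n; ++k) { // G <- R^T G R: rotate rows
                    double gp = g[p][k], gq = g[q][k];
                    g[p][k] = c * gp - s * gq;
                    g[q][k] = s * gp + c * gq;
                }
                for (int k = 0; k < n; ++k) { // ... then columns
                    double gp = g[k][p], gq = g[k][q];
                    g[k][p] = c * gp - s * gq;
                    g[k][q] = s * gp + c * gq;
                }
            }
    std::vector<double> s(n);
    for (int i = 0; i < n; ++i) s[i] = std::sqrt(std::max(0.0, g[i][i]));
    std::sort(s.rbegin(), s.rend());         // decreasing order
    return s;
}
```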

Rank

The theorem above shows that σ_{k+1} is the distance, in the 2-norm, between A and the set of matrices of rank k. Thus if σ_r > ε ≥ σ_{r+1}, then rank(A, ε) = r.

As a result, in order to determine the numerical rank of a matrix A, we find the index r such that

  σ_1 ≥ ... ≥ σ_r > ε ≥ σ_{r+1} ≥ ... ≥ σ_p.

The function

  Rank(A);

computes the rank of a matrix, as described above, with a default value for the threshold ε.

synaps/linalg/rank.h
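The thresholding rule can be sketched concretely for a two-column matrix, where the singular values have a closed form through the 2 × 2 Gram matrix G = A^T A. The helper `numerical_rank_2col` is hypothetical, not the package's Rank function.

```cpp
#include <cmath>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Numerical rank of an m x 2 matrix: count the singular values larger
// than eps. The eigenvalues of the 2x2 symmetric Gram matrix come from
// the quadratic formula; the singular values are their square roots.
int numerical_rank_2col(const Mat& a, double eps) {
    int m = a.size();
    double g00 = 0, g01 = 0, g11 = 0;     // G = A^T A
    for (int k = 0; k < m; ++k) {
        g00 += a[k][0] * a[k][0];
        g01 += a[k][0] * a[k][1];
        g11 += a[k][1] * a[k][1];
    }
    double tr = g00 + g11, det = g00 * g11 - g01 * g01;
    double d = std::sqrt(std::max(0.0, tr * tr - 4 * det));
    double s1 = std::sqrt(std::max(0.0, (tr + d) / 2));  // sigma_1 >= sigma_2
    double s2 = std::sqrt(std::max(0.0, (tr - d) / 2));
    return (s1 > eps) + (s2 > eps);
}
```

For example, the 3 × 2 matrix {{1,2},{2,4},{3,6}} has proportional columns, so its numerical rank is 1 for any reasonable ε.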

Eigenvalues, eigenvectors

The computation of the eigenvalues and eigenvectors of a matrix is performed by the functions eigen and eigen_vect. They also provide generalised eigenvalues and eigenvectors for pairs of matrices (A, B). If the internal representation is of type lapack::rep2d<double>, the LAPACK routines dgeev or dgegv are called.
  VectDse<std::complex<double> > v = eigen(A);
  VectDse<double>                w = eigen(A, Real());
  VectDse<double>                u = eigen(A, B);

synaps/linalg/eigen.h
synaps/linalg/FFT.h
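To illustrate what an eigenvalue routine computes, here is a power-iteration sketch; `dominant_eigenvalue` is a hypothetical helper, not the package's eigen, which relies on LAPACK's dense solvers and returns all eigenvalues at once.

```cpp
#include <cmath>
#include <vector>

using Mat = std::vector<std::vector<double>>;
using Vec = std::vector<double>;

// Power iteration: repeatedly apply A and renormalise. If the eigenvalue
// of largest modulus is real and simple, the iterate converges to its
// eigenvector; the Rayleigh quotient then recovers the eigenvalue.
double dominant_eigenvalue(const Mat& a, int iters = 500) {
    int n = a.size();
    Vec v(n, 1.0);
    for (int it = 0; it < iters; ++it) {
        Vec w(n, 0.0);                       // w = A v
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) w[i] += a[i][j] * v[j];
        double norm = 0;
        for (double x : w) norm += x * x;
        norm = std::sqrt(norm);
        for (int i = 0; i < n; ++i) v[i] = w[i] / norm;
    }
    double lambda = 0;                       // Rayleigh quotient v^T A v
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) lambda += v[i] * a[i][j] * v[j];
    return lambda;
}
```

For instance, the symmetric matrix {{2,1},{1,2}} has eigenvalues 3 and 1, and the iteration converges to 3.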