Solving polynomial systems by matrix computations
Let A = R/I be the quotient algebra of the polynomial ring R = K[x1, ..., xn] by the ideal I = (f1, ..., fm). It is a finite-dimensional K-vector space if and only if the variety Z(I) is zero-dimensional; in that case we write Z(I) = {z1, ..., zd}.



1  Solving from the operators of multiplication

For each a in A, consider the operator of multiplication by a in A:

   Ma : A -> A,  b -> a b,

and the K-linear map on the dual space A# associated to Ma:

   Mat : A# -> A#,  L -> a · L,

where (a · L)(b) = L(a b). The matrix of Mat in the dual basis of A# is the transpose of the matrix of Ma, so the two operators share the same eigenvalues.

Our matrix approach to solving polynomial systems is based on the following theorem (for more details and a proof, see [BMr98]):


Theorem 1.1   Assume that Z(I) = {z1, ..., zd} is zero-dimensional.
  1. The eigenvalues of the linear operator Mp (resp. Mpt), for p in A, are {p(z1), ..., p(zd)}.
  2. The common eigenvectors of the (Mxit) are (up to a scalar) 1z1, ..., 1zd.
  3. When m = n, the common eigenvectors of the (Mxi) are (up to a scalar) J(x) e1, ..., J(x) ed, where J(x) is the Jacobian of f1, ..., fn.

By [BMPissac98], Theorem 2.1, we know that there exists a basis of A such that, for all a in A, the matrix Ma of Ma in this basis is block-diagonal:

   Ma = [ Ma,1           0   ]
        [        ·.          ]
        [   0           Ma,d ]

where each block Ma,i is upper triangular with a(zi) on the diagonal:

   Ma,i = [ a(zi)         *    ]
          [         ·.         ]
          [   0         a(zi)  ]

In the case of a simple root zi, we have Ma,i = (a(zi)).

For each z in K^n we introduce the evaluation at z:

   1z : R -> K,  p -> p(z).

We note that 1z in A# if and only if z in Z(I).

In the case of multiple roots, one would need to introduce additional linear forms involving higher-order differentials. Hereafter, for simplicity, we consider only simple roots. We now state some propositions and remarks.

First, note that for any pair a, b in A, we have

   Mat(1zi)(b) = 1zi(a b) = a(zi) b(zi) = a(zi) 1zi(b),

so that 1zi is an eigenvector of Mat for the eigenvalue a(zi). Moreover, for any pair a, b in A, the multiplication maps Ma and Mb commute with each other, so they share common eigenspaces. Consequently, the common eigenvectors of Mat, for all a in A, are the non-zero multiples of 1zi, for i = 1, ..., d ([BMPissac98], Proposition 2.3).
If a root zi is simple, the eigenvector of Mat associated to the eigenvalue a(zi) is 1zi. The coordinates of 1zi in the dual basis of (x^a1, ..., x^aD) (any monomial basis of A) are (zi^a1, ..., zi^aD), that is, the evaluations of the basis monomials at the root zi. This yields the following algorithm:

Algorithm for computing the simple roots of a polynomial system f1 = ··· = fm = 0


  1. Compute the transpose Mat of the matrix of multiplication Ma for some a in A.
  2. Compute its eigenvectors vi = (vi,1, vi,x1, ..., vi,xn, ...), for i = 1, ..., d.
  3. For i = 1, ..., d, compute and output

     zi = ( vi,x1 / vi,1 , ..., vi,xn / vi,1 ).



Implementation:
mpoly/solve/EigenMult.H

template <class M> VAL<MatDense<lapack<AlgClos<typename M::value_type>::TYPE> >*> Solve(M& A, int n, Eigen mth)
Compute the simple roots from the first n+1 coordinates of the eigenvectors of the matrix of multiplication A. It is assumed that the roots are simple and that the first n+1 elements of the basis of the quotient ring are 1, x1, ..., xn.
   E = Solve(M, n, Eigen())

where M is a matrix of multiplication and n is the number of variables. The matrix E will be the matrix of complex coordinates of the (simple) roots.

template <class M> inline VAL<MatDense<lapack<AlgClos<typename M::value_type>::TYPE> >*> Solve(M& A, const array1d<typename M::size_type>& t, Eigen method)
Compute the simple roots from n+1 coordinates of the eigenvectors of the matrix of multiplication A. It is assumed that the roots are simple. The array t gives the positions of the monomials 1, x1, ..., xn in the basis.



2  Solving from a matrix of univariate polynomials

Several methods can be applied to reduce the resolution of our system to univariate and eigenvector problems. In this approach the matrix may be nonlinear in the hidden variable: M is a matrix polynomial M(xn) = Md xn^d + ··· + M1 xn + M0, for some d ≥ 1. Our problem reduces to the following eigenproblem:
( [   0       I      ···      0     ]       [ I                  0   ] )  [     w      ]
( [   :               ·.      :     ]       [      ·.                ] )  [    b w     ]
( [   0       0      ···      I     ]  - b  [           I            ] )  [     :      ]  =  0
( [ -M0^t   -M1^t    ···   -Md-1^t  ]       [ 0                Md^t  ] )  [ b^(d-1) w  ]
If Md is numerically singular, we may change the variable xn to (t1 y + t2)/(t3 y + t4), for a generic choice of t1, ..., t4.

Implementation:
mpoly/solve/EigenHidden.H

template <class Mat, class MatPol> VAL<Mat*> Solve(const MatPol& MP, Compagnon method)
Compute the real solutions of M(x) v = 0 by transforming this problem into an eigenvector problem (A - x B) w = 0. It is assumed that the pencil has only simple real eigenvalues. The case of multiple roots is not handled for the moment. This case can be detected by checking whether all the elements of the first row are 1 or not.
    E = Solve<MatC>(Mx, Compagnon())

where MatC is a type of matrix with complex coefficients.



3  Computing the matrix of multiplication from resultant matrices

As we have seen, the matrix of multiplication Mf0 by an element f0 in A yields a way to recover the roots of the system. Usually this matrix is not available directly from the input equations. However, it can be computed from a Sylvester-like matrix S (we will see later that, in the generic case, we can take for S a resultant matrix) satisfying the following properties:
Hypotheses 3.1   The matrix S is a square matrix of the following form:

   S = [ A  B ]
       [ C  D ]

such that
  1. its rows are indexed by monomials (x^a), a in F,
  2. the set of monomials B0 = (x^a), a in E0, indexing the rows of the block (A B) is a basis of A = R/(f1, ..., fm),
  3. the columns of the block [A; C] (A stacked over C) represent the elements x^a f0, for a in E0, expressed as linear combinations of the monomials (x^b), b in F,
  4. the columns of the block [B; D] (B stacked over D) represent some multiples of the polynomials f1, ..., fm, expressed as linear combinations of the monomials (x^b), b in F,
  5. the block D is invertible.

For any matrix S satisfying these hypotheses, we may obtain the matrix of multiplication by f0 modulo (f1, ..., fm): in the basis B0 = (x^a), a in E0, of A, the matrix Mf0 is the Schur complement of D in S:

   Mf0 = A - B D^(-1) C

(for more details see [BMPissac98]).

Implementation:
mpoly/solve/EigenRslt.H

template <class Mat, class LPol> VAL<MatDense<lapack<AlgClos<typename Mat::value_type>::TYPE> >*> Solve(const LPol& l, Projectiv method)
Solve the system of equations l by computing the transposed matrix of multiplication by f0 modulo l, obtained as the Schur complement in the Macaulay matrix of the n+1 polynomials. It is assumed that the monomials 1, x1, ..., xn are smaller than any of the other monomials of degree ≥ 2.

The following function can be used for this purpose (see X):
   M = USchur(S);
where S is a matrix satisfying Hypotheses 3.1 (e.g. a resultant matrix of a generic system).



4  An example based on resultant matrices

#include <list>
#include "mpoly.H"
#include "mpoly/resultant.H"
#include "lapack.H"
#include "linalg.H"

typedef Monom<double, dynamicexp<'x'> > Mon;
typedef MPoly<vector<Mon>, Dlex<Mon> > Pol;
typedef MatDense<lapack<double> > Matr;
typedef VectStd<array1d<double> > Vect;
typedef VectStd<array1d<complex<double> > > VectC;

int main (int argc, char **argv)
{
  // Read the number of polynomials, then the polynomials themselves.
  int n;
  cin >> n;

  array1d<Pol> l(n);
  for(int i=0; i< n; i++) cin >> l[i];

  cout <<"Polynomials:"<<endl;
  for(int i=0; i< n; i++) cout<<l[i]<<";"<<endl;

  // Bezout bound: product of the degrees of l[1], ..., l[n-1]
  // (l[0] plays the role of f0).
  int d=1;
  for(int i=1; i< n; i++) d*=Degree(l[i]);
  cout <<"Bezout bound: "<<d<<endl;

  // Build the Macaulay resultant matrix of the system.
  Matr S=Macaulay<lapack<double> >(l,'N');
  cout <<"Macaulay matrix of size "<< S.nbrow()<<endl;

  // Check that the lower block has no zero on its diagonal.
  for(unsigned int i=d; i<S.nbrow(); i++)
    if(S(i,i)==0)
      cout<<"Problem with the order of the polynomials "<<i<<" "<<i<<endl;

  // Extract the lower-right block D of the Macaulay matrix.
  Matr D = S(Range(d,S.nbrow()-1),Range(d,S.nbcol()-1));

  cout <<"Lower diag. block of size "<< D.nbrow()<<endl;

  // Singular values of D, to check its numerical invertibility.
  cout <<"Singular values of D: "<<endl;
  Vect Sv= Svd(D);
  cout << Sv<<endl;

  //  cout <<"Singular values of S: "<<endl;
  //  cout <<Eval<Vect>(Svd(S))<<endl;

  // Schur complement: the matrix of multiplication by f0 in A.
  Matr U= USchur(S,d);
  cout <<"Eigenvalues of the Schur complement: "<<endl;
  cout << Eigenval(U)<<endl;

  return 0;
}