Solving by controlled iterative methods




1  The classical power method and inverse power method

We have seen above how the resolution of a polynomial system reduces to eigenvector computations. Let us now recall the classical (inverse) power method for matrices (for more details see [Wilkinson65], [BMPissac98]).

Let M_a be the matrix of multiplication by a in a basis B of A, and assume that a(z_i) ≠ 0 for i = 1, ..., d. By this assumption, M_a is invertible and a is invertible in A.

Let v_0 = w_0 be the coordinate vector of an element of the dual of A in the dual basis of B, and set, for k = 1, 2, ...,

   w_k = (1/||w_{k-1}||) M_a^t w_{k-1}   and   v_k = (1/||v_{k-1}||) (M_a^t)^{-1} v_{k-1}.
If z is a simple root maximizing (resp. minimizing) |a|, and if w = v = (z^a)_a is the vector of the monomial basis B evaluated at z, then

   lim_{k -> infty} w_k = w   (resp. lim_{k -> infty} v_k = v).

This gives us a way to select an eigenvector of M_a^t corresponding to the root maximizing (resp. minimizing) the modulus of a on Z(I).
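As a minimal illustration, the two iterations can be sketched in numpy on a diagonal stand-in matrix (an assumption for the example; it is not an actual multiplication matrix M_a^t), where they select the eigenvectors of largest and smallest modulus eigenvalue:

```python
import numpy as np

def power_method(M, v0, iters=200):
    """Power iteration: converges to the eigenvector of M with
    largest-modulus eigenvalue (assumed simple and dominant)."""
    w = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        w = M @ w
        w /= np.linalg.norm(w)      # normalization keeps iterates bounded
    return w

def inverse_power_method(M, v0, iters=200):
    """Inverse iteration: the same scheme applied to M^{-1}, so it
    selects the eigenvector with smallest-modulus eigenvalue."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        v = np.linalg.solve(M, v)   # apply M^{-1} without forming it
        v /= np.linalg.norm(v)
    return v

# Toy stand-in matrix with eigenvalues 5, 2, 0.5
M = np.diag([5.0, 2.0, 0.5])
v0 = np.ones(3)
w = power_method(M, v0)             # aligns with e_1 (eigenvalue 5)
v = inverse_power_method(M, v0)     # aligns with e_3 (eigenvalue 0.5)
```

Note that M^{-1} is never formed explicitly: each inverse-iteration step is a linear solve, which is the point exploited by the implicit method below.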

2  Implicit iterative method

First, let us recall that this method computes a root of our system that minimizes the modulus of f_0. An advantage over the power method is the possible use of shifts of the variable, that is, the transformation of the matrix M_{f0}^{-1} into the matrix (M_{f0} - s I)^{-1}, where s is close to the selected eigenvalue of M_{f0}, in order to accelerate convergence (see [GLVL96]).
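The effect of the shift can be sketched in numpy on a diagonal stand-in matrix (an assumption for the example); iterating on (M - sI)^{-1} selects the eigenvalue closest to s, and the closer the shift, the faster the convergence:

```python
import numpy as np

def shifted_inverse_iteration(M, s, v0, iters=50):
    """Inverse iteration on (M - s I): converges to the eigenvector
    whose eigenvalue is closest to the shift s."""
    n = M.shape[0]
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        v = np.linalg.solve(M - s * np.eye(n), v)
        v /= np.linalg.norm(v)
    lam = v @ (M @ v)   # Rayleigh quotient recovers the eigenvalue
    return lam, v

M = np.diag([5.0, 2.0, 0.5])
# s = 0.4 is closest to the eigenvalue 0.5, so that one is selected,
# with convergence ratio |0.5 - 0.4| / |2 - 0.4| per step
lam, v = shifted_inverse_iteration(M, s=0.4, v0=np.ones(3))
```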

So, under the hypotheses X, if S is invertible with

   S^{-1} = [ U  V ]
            [ Z  W ],

then M_{f0} is invertible and U = M_{f0}^{-1}.

This gives the following iterative algorithm:

Implicit inverse power method

  1. Choose a random vector u_0.
  2. Solve the system S^t [v_{n+1} ; v'_{n+1}] = [u_n ; 0].
  3. If the first coordinate v_{n+1,1} of v_{n+1} is nonzero, set u_{n+1} = (1/v_{n+1,1}) v_{n+1}; otherwise stop.



We obtain, if z in Z(I) is a simple root minimizing |f_0| and s = (z^a)_{a in E_0},

   lim_{n -> infty} u_n = s.
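The implicit iteration can be sketched on a toy dense example. Here S is manufactured via a Schur-complement identity, purely for the illustration (this is not how S arises from resultant matrices), so that the top-left block of S^{-1} equals M^{-1} for a diagonal stand-in M playing the role of M_{f0}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                      # n = size of the top block, m = extra rows

# Stand-in for M_{f0}: smallest-modulus eigenvalue 0.5 at index 0
M = np.diag([0.5, 2.0, 3.0, 4.0])

# With S = [[A, B], [C, D]], the top-left block of S^{-1} is the inverse
# Schur complement (A - B D^{-1} C)^{-1}; choosing A = M + B D^{-1} C
# makes that block equal to M^{-1}.
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + 5 * np.eye(m)
A = M + B @ np.linalg.solve(D, C)
S = np.block([[A, B], [C, D]])

u = rng.standard_normal(n)
for _ in range(100):
    x = np.linalg.solve(S.T, np.concatenate([u, np.zeros(m)]))
    v = x[:n]                    # v_{n+1} = M^{-t} u_n, computed implicitly
    if abs(v[0]) < 1e-14:
        break
    u = v / v[0]                 # normalize by the first coordinate
# u approximates the eigenvector of M^t for the eigenvalue 0.5,
# scaled so that its first coordinate is 1
```

Each step applies M_{f0}^{-t} to u_n without ever forming M_{f0} or its inverse: only a linear solve with S^t is needed.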

To accelerate this algorithm, we first compute the LU decomposition of the matrix S^t. Moreover, the matrix S^t is sparse, since the number of non-zero terms per column is bounded by the number of monomials of f_0, ..., f_m, which is small compared to the size of the matrix. This allows us to perform each step of the previous algorithm in O(C N + N log^2(N)) arithmetic operations, where C is a bound on the number of arithmetic operations required to multiply S by a vector. In practice, if S has O(N) non-zero coefficients, then C = O(N) and each step can be performed using O(N^2) operations, versus known bounds of order N^3. Moreover, the matrix S and its blocks A, B, C, D are structured matrices, and this structure can be used to simplify the multiplication of such a matrix by a vector (see [MoPa99strmt]). In this work we do not use this structure; nevertheless, we can multiply such matrices by vectors and solve linear systems efficiently by exploiting the sparsity and the previous remark.
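The pattern "factor S^t once, then reuse the factorization at every iteration" might look as follows with SciPy's sparse LU; the tridiagonal matrix below is a hypothetical stand-in for S^t, chosen only so the sketch is self-contained:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Hypothetical sparse stand-in for S^t (tridiagonal, diagonally dominant)
N = 1000
St = diags([np.full(N - 1, -1.0), np.full(N, 4.0), np.full(N - 1, -1.0)],
           offsets=[-1, 0, 1]).tocsc()

lu = splu(St)            # factor once, outside the loop
u = np.ones(N)
for _ in range(20):      # each step costs only two triangular solves
    v = lu.solve(u)
    u = v / np.linalg.norm(v)
```

The factorization cost is paid once; every subsequent iteration reduces to forward/backward substitution on the sparse triangular factors.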



3  Results

The following table contains the results of our experiments with the implicit inverse method, using the structures implemented during this work.

Many of the following examples have two results: one, *G, obtained with the GMRES solver ([IML]) developed by R. Pozo and his library SparseLib++ ([SPL]); the other, *U, obtained with a direct method using the UMFPACK routines ([UMF]) for solving the sparse linear system S x = b.
The examples s22G and s22U both correspond to the following system ([CLO92]):
-11 + 2x^2 + 3y^2 = 0
-3 + x^2 - y^2 = 0
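This small system can be solved by hand, which makes it a convenient sanity check: substituting x^2 = 3 + y^2 from the second equation into the first gives 5y^2 = 5, hence the four simple roots (±2, ±1), matching D = 4 in the table below. A quick verification:

```python
# Substituting x^2 = 3 + y^2 (second equation) into the first
# gives 5*y^2 = 5, hence y = ±1 and x = ±2: four simple roots.
roots = [(x, y) for x in (2.0, -2.0) for y in (1.0, -1.0)]
for x, y in roots:
    assert abs(-11 + 2 * x**2 + 3 * y**2) < 1e-12
    assert abs(-3 + x**2 - y**2) < 1e-12
```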

s44* are systems of two equations in two variables, both of degree 4. s4422* are systems of four equations in four variables, of degrees 4, 4, 2 and 2.
sph* is the intersection of a sphere with another quadric and a plane ([CLO92]):
x^2 + y^2 + z^2 = 4
xy + z^2 - 1 = 0
x + y - z = 0

The examples cyc3* correspond, for n = 3, to the cyclic n-roots system (see [PolTS]):
x_1 + x_2 + ... + x_n = 0
x_1 x_2 + x_2 x_3 + ... + x_n x_1 = 0
...
x_1 ... x_{n-1} + x_2 ... x_n + ... + x_n x_1 ... x_{n-2} = 0
x_1 x_2 ... x_n = 1
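For n = 3 this system can be solved directly: the equations fix the elementary symmetric functions e_1 = 0, e_2 = 0, e_3 = 1, so x_1, x_2, x_3 are, in some order, the roots of t^3 - 1 = 0, and the 3! = 6 orderings match D = 6 in the table below. A quick check:

```python
import numpy as np
from itertools import permutations

# The solutions are the permutations of the three cube roots of unity.
cube_roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
sols = list(permutations(cube_roots))     # 6 solutions
for x1, x2, x3 in sols:
    assert abs(x1 + x2 + x3) < 1e-12
    assert abs(x1 * x2 + x2 * x3 + x3 * x1) < 1e-12
    assert abs(x1 * x2 * x3 - 1) < 1e-12
```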

The results kat6U (resp. kat8U) correspond, for N = 6 (resp. N = 8), to the Katsura(N) system (see [PolTS]):

u_m - Σ_{i=-N}^{N} u_i u_{m-i} = 0,   m = -N+1, ..., N+1,
1 - Σ_{i=-N}^{N} u_i = 0,
u_{-m} = u_m,   m = 1, ..., 2N-1,
u_m = 0,   m = N+1, ..., 2N-1.
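A small generator for this system can be written with sympy; the reduction below to the unknowns u_0, ..., u_N via the symmetry relations is one common normalization (an assumption of this sketch, not necessarily the exact index range used in the experiments):

```python
import sympy as sp

def katsura(N):
    """Generate the Katsura(N) system, using u_{-m} = u_m and
    u_m = 0 for |m| > N to keep only the unknowns u_0, ..., u_N."""
    u = sp.symbols(f'u0:{N + 1}')
    def U(i):
        i = abs(i)
        return u[i] if i <= N else sp.Integer(0)
    eqs = [u[m] - sum(U(i) * U(m - i) for i in range(-N, N + 1))
           for m in range(N)]                           # convolution equations
    eqs.append(sum(U(i) for i in range(-N, N + 1)) - 1)  # linear relation
    return u, eqs

u, eqs = katsura(2)     # N + 1 = 3 equations in u0, u1, u2
# u0 = 1, u1 = u2 = 0 is one of the solutions
trivial = {u[0]: 1, u[1]: 0, u[2]: 0}
assert all(sp.simplify(e.subs(trivial)) == 0 for e in eqs)
```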

The last result, kruppa, corresponds to the Kruppa equations of a reconstruction problem in Computer Vision (see [faugeras:93]), reduced to an overconstrained system of 6 quadrics in a space of dimension 5. The way to find the root was explained above.

          N       S        D     n    k    T
s22G      10      30       4     2    2    0.020s
s22U      10      30       4     2    2    0.010s
s44G      36      108      16    2    4    0.020s
s44U      36      108      16    2    4    0.010s
s4422G    715     3449     64    4    46   149s
s4422U    715     3449     64    4    4    0.67s
sphG      20      67       4     3    9    0.060s
sphU      20      67       4     3    9    0.060s
cyc3G     35      117      6     3    17   0.080s
cyc3U     35      117      6     3    17   0.040s
kat6U     1716    26460    64    6    13   22.53s
kat8G     24310   556086   256   8    46   125min
kruppa    792     15840    1     5    1    1.170s
In this table, N is the dimension of the matrix S (that is, the matrix has size N × N), S is the number of non-zero entries of the matrix S, D is the dimension of A, n is the number of variables, k is the number of iterations required to reach an error smaller than e = 10^{-4}, and T is the total time of the computation, including the time of construction of the resultant matrix (for instance, 153.91 seconds for the test kat8G).
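The two solver families in the table (*G iterative, *U direct) can be mimicked with SciPy on a hypothetical sparse stand-in for S (an assumption of this sketch; the actual resultant matrices are not reconstructed here):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, splu

# Hypothetical sparse stand-in for S (tridiagonal, well conditioned)
N = 500
S = diags([np.full(N - 1, -1.0), np.full(N, 4.0), np.full(N - 1, -1.0)],
          offsets=[-1, 0, 1]).tocsc()
b = np.ones(N)

# *U-style: direct sparse LU factorization (as with the UMFPACK routines)
x_direct = splu(S).solve(b)

# *G-style: iterative GMRES solve (as with the IML++/SparseLib++ solver)
x_gmres, info = gmres(S, b)
assert info == 0     # 0 means the iteration converged
```

On well-conditioned systems both approaches agree to the iterative solver's tolerance; as the table shows (e.g. s4422G vs s4422U), the relative cost of the two strategies depends strongly on the problem.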