
    Linear algebra


    Calculating the determinant

    Scalar and interval matrix

    The determinant of a scalar matrix may be calculated with the procedure ALIAS_MRINVD that is also used to determine the inverse of the matrix, see section 7.2.

    The determinant of an interval matrix may be calculated by using Gaussian elimination with the procedure

     
    int ALIAS_Det_By_Gaussian_Elim(INTERVAL_MATRIX &B, INTERVAL &DET)
    
    This procedure returns 1 if the determinant has been successfully calculated (the result being returned in DET), 0 otherwise.

    There are five main procedures to compute an interval evaluation of the determinant of an interval square matrix:

     
    INTERVAL Slow_Determinant(INTERVAL_MATRIX &A)
    INTERVAL Slow_NonZero_Determinant(INTERVAL_MATRIX &A)
    INTERVAL Medium_Determinant(INTERVAL_MATRIX &A)
    INTERVAL Fast_Determinant(INTERVAL_MATRIX &A)
    INTERVAL VeryFast_Determinant(INTERVAL_MATRIX &A)
    
    These procedures differ only in computation time (large for the Slow procedure as soon as the dimension of the matrix is larger than 5) and in the accuracy of the interval evaluation (which is worst for the Fast procedure). The Medium_Determinant procedure is a good compromise between accuracy and speed. The procedure Slow_NonZero_Determinant is a special version of Slow_Determinant which returns as soon as it has determined that the interval evaluation of the determinant does not include 0. Note that if the flag ALIAS_Use_Gaussian_Elim_In_Det is set to 1 (the default value being 0), then these procedures will also use Gaussian elimination to improve the evaluation of the determinant.
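
    For illustration, the following sketch compares these evaluations on a small matrix (the ALIAS-C++ and BIAS/Profil headers are omitted and the matrix entries are arbitrary; only the determinant procedures documented above are taken from the library).

     
    // Sketch: comparing determinant evaluations of a 3x3 interval matrix.
    INTERVAL compare_determinants()
    {
      INTERVAL_MATRIX A(3,3);
      for (int i = 1; i <= 3; i++)               // BIAS/Profil matrices are 1-based
        for (int j = 1; j <= 3; j++)
          A(i,j) = (i == j) ? INTERVAL(3.9,4.1) : INTERVAL(0.9,1.1);
    
      INTERVAL dfast   = Fast_Determinant(A);    // cheapest, usually widest enclosure
      INTERVAL dmedium = Medium_Determinant(A);  // good accuracy/speed compromise
    
      INTERVAL dgauss;
      if (ALIAS_Det_By_Gaussian_Elim(A, dgauss)) // may fail if an interval pivot contains 0
        return dgauss;
      // every returned interval is guaranteed to contain the exact determinant
      return dmedium;
    }
    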

    There are also special versions of the previous procedures:

     
    INTERVAL Slow_Determinant22(INTERVAL_MATRIX &A,
                  INTERVAL (* TheDet22)(INTEGER_VECTOR &,INTEGER_VECTOR &,INTERVAL_VECTOR &),
                  INTERVAL_VECTOR &Input)
    INTERVAL Medium_Determinant22(INTERVAL_MATRIX &A,
                 INTERVAL (* TheDet22)(INTEGER_VECTOR &,INTEGER_VECTOR &,INTERVAL_VECTOR &),
                 INTERVAL_VECTOR &Input)
    INTERVAL Fast_Determinant22(INTERVAL_MATRIX &A,
                  INTERVAL (* TheDet22)(INTEGER_VECTOR &,INTEGER_VECTOR &,INTERVAL_VECTOR &),
                  INTERVAL_VECTOR &Input)
    
    They differ in that all the dimension 2 minors are computed using the TheDet22 procedure. The syntax of this procedure is:
     
    INTERVAL TheDet22(INTEGER_VECTOR &ROW,INTEGER_VECTOR &COL,INTERVAL_VECTOR &Input)
    
    The integer vectors ROW and COL will have all their components equal to 0 except for 2 elements, which indicate which rows and columns are used to produce the 2$\times$2 determinant. A dimension 2 matrix is obtained from the initial matrix by removing all the rows and columns whose index in the integer vectors is 0. The procedure must return an interval corresponding to the determinant of this dimension 2 matrix, the unknowns having the ranges indicated by Input.
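
    As a sketch, a TheDet22 procedure for the purely illustrative matrix whose entries are $a_{ij}=x_ix_j+i-j$ (the $x_i$ being the unknowns stored in Input) may be written as follows; the Dimension helper is assumed to be the usual BIAS/Profil function.

     
    // Illustrative TheDet22: the symbolic form of the entries is an assumption.
    INTERVAL TheDet22(INTEGER_VECTOR &ROW,INTEGER_VECTOR &COL,INTERVAL_VECTOR &Input)
    {
      int r[2], c[2], nr = 0, nc = 0;
      // exactly 2 components of ROW and COL are non zero: they select the minor
      for(int i=1;i<=Dimension(ROW);i++) if(ROW(i)!=0) r[nr++]=i;   // selected rows
      for(int j=1;j<=Dimension(COL);j++) if(COL(j)!=0) c[nc++]=j;   // selected columns
    
      // interval evaluation of the four entries of the 2x2 minor
      INTERVAL a11 = Input(r[0])*Input(c[0]) + (double)(r[0]-c[0]);
      INTERVAL a12 = Input(r[0])*Input(c[1]) + (double)(r[0]-c[1]);
      INTERVAL a21 = Input(r[1])*Input(c[0]) + (double)(r[1]-c[0]);
      INTERVAL a22 = Input(r[1])*Input(c[1]) + (double)(r[1]-c[1]);
    
      // determinant of the minor; a sharper symbolic form may be used when available
      return a11*a22 - a12*a21;
    }
    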

    A more general implementation of the previous procedure is

     
    INTERVAL Determinant22(INTERVAL_MATRIX &A,int Minor,int Row,
         INTERVAL (* TheDet22)(INTEGER_VECTOR &,INTEGER_VECTOR &,INTERVAL_VECTOR &),
         INTERVAL_VECTOR &Input)
    
    In this procedure Minor indicates the size of the minors that can be computed using the procedure TheDet22 and Row indicates whether an expansion by row (1) or by column (0) is used.

    There are also procedures to compute the derivatives of a determinant

     
    INTERVAL Fast_Derivative_Determinant(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG);
    INTERVAL Medium_Derivative_Determinant(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG);
    INTERVAL Slow_Derivative_Determinant(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG);
    
    A is a matrix that contains the interval evaluation of the components of the matrix and AG is a matrix that contains the interval evaluation of the derivatives of the components of the matrix. In other words, if $M=((a_{ij}))$, ${\tt A}=(([\underline{a_{ij}},\overline{a_{ij}}]))$ and ${\tt AG}=(([\underline{da_{ij}/dx},\overline{da_{ij}/dx}]))$, then the procedure will return the interval evaluation of $d\vert M\vert/dx$.

    If a procedure TheDet22 for calculating the minor is available you may use

     
    INTERVAL Derivative_Determinant22(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG,
              int Minor,
              INTERVAL (* TheDet22)(INTEGER_VECTOR &,INTEGER_VECTOR &,INTERVAL_VECTOR &),
              INTERVAL_VECTOR &Input)
    

    There are also procedures to compute the second order derivatives of a determinant

     
    INTERVAL Fast_Hessian_Determinant(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG, INTERVAL_MATRIX &AH)
    INTERVAL Medium_Hessian_Determinant(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG, INTERVAL_MATRIX &AH)
    INTERVAL Slow_Hessian_Determinant(INTERVAL_MATRIX &A,INTERVAL_MATRIX &AG, INTERVAL_MATRIX &AH)
    
    For a matrix $M=((a_{ij}))$, if ${\tt A}=(([\underline{a_{ij}},\overline{a_{ij}}]))$, ${\tt AG}=(([\underline{da_{ij}/dx},\overline{da_{ij}/dx}]))$ and ${\tt AH}=(([\underline{d^2a_{ij}/dxdy},\overline{d^2a_{ij}/dxdy}]))$, then the procedure will return the interval evaluation of $d^2\vert M\vert/dxdy$.

    You may also obtain an upper bound on the absolute value of the determinant of a matrix by using the procedure:

     
    double Hadamard_Determinant(MATRIX &J)
    
    where we use the Hadamard bound on the determinant $D$ of an $n \times n$ matrix $J$

    \begin{displaymath}
\vert D\vert \le \prod_{i=1}^{n}\sqrt{\sum_{j=1}^{n}J_{ij}^2}
\end{displaymath}
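
    The bound itself is elementary to compute: the following self-contained sketch (plain C++, independent of the library types) evaluates the right-hand side of the inequality for a matrix stored as a vector of rows.

     
    // Minimal illustration of the Hadamard bound: |det(J)| <= prod_i sqrt(sum_j J_ij^2).
    #include <cmath>
    #include <vector>
    
    double hadamard_bound(const std::vector<std::vector<double> > &J)
    {
      double bound = 1.0;
      for (std::size_t i = 0; i < J.size(); ++i) {
        double row2 = 0.0;                      // squared Euclidean norm of row i
        for (std::size_t j = 0; j < J[i].size(); ++j) row2 += J[i][j]*J[i][j];
        bound *= std::sqrt(row2);               // multiply by the row norm
      }
      return bound;                             // upper bound on |det(J)|
    }
    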

    A similar procedure exists for an interval matrix
     
    INTERVAL Hadamard_Determinant(INTERVAL_MATRIX &J)
    

    The determinant of a scalar matrix may be calculated with

     
    double Determinant(MATRIX &J)
    


    Polynomial matrix

    A polynomial matrix is assumed here to be a matrix whose elements are univariate polynomials in the same variable. The determinant of such a matrix (which is itself a polynomial) may be computed with the procedure

     
    int Determinant_Characteristic(POLYNOMIAL_MATRIX A,
                      INTERVAL_VECTOR &Coeff)
    
    where Coeff contains interval enclosures of the coefficients of the determinant polynomial. Note that this procedure is safe: even for a scalar matrix the computed coefficients always include the coefficients of the exact determinant.


    Matrix inverse

    The inverse of a scalar matrix may be computed by using the Inverse function of the BIAS/Profil package or with

     
    void ALIAS_MRINVD(VECTOR &A,VECTOR &B,int N,int *KOD,double *DET,double EPS,
                      INTEGER_VECTOR &IL,INTEGER_VECTOR &IC)
    
    where N is the dimension of the matrix and DET will contain its determinant.

    The inverse of an interval matrix may be defined as the set of the inverses of all the matrices included in the set defined by the interval matrix. This set cannot usually be computed exactly, but a set of matrices guaranteed to include the inverse interval matrix may be computed. The following procedure allows one to compute such an overestimation.

     
    int Inverse_Interval_Matrix(int Dim,int cond,INTERVAL_MATRIX &Jac,INTERVAL_MATRIX &InvJac)
    

    where Dim is the dimension of the matrix Jac. The flag cond has to be set to 1 if pre-conditioning is used, 0 otherwise. Pre-conditioning usually leads to a smaller overestimation but is more computationally intensive. This procedure returns 1 if it has been able to compute the inverse (the result being returned in InvJac), 0 otherwise.
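
    A typical call may look as follows (the matrix entries are arbitrary illustrative values).

     
    // Sketch: enclosure of the inverse of a 2x2 interval matrix, with pre-conditioning.
    int enclose_inverse()
    {
      INTERVAL_MATRIX Jac(2,2), InvJac(2,2);
    
      Jac(1,1)=INTERVAL(2,3);    Jac(1,2)=INTERVAL(0,0.5);
      Jac(2,1)=INTERVAL(-0.5,0); Jac(2,2)=INTERVAL(2,3);
    
      // Dim=2, cond=1: use pre-conditioning (tighter but more expensive)
      if (!Inverse_Interval_Matrix(2, 1, Jac, InvJac)) return 0;  // inverse not computed
      // InvJac now contains a guaranteed overestimation of the inverse interval matrix
      return 1;
    }
    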

    Note that this procedure should not be used for solving an interval linear system.

    Solving systems of linear equations

    Mathematical background

    Let us assume that we have an $n \times
n$ interval linear system:
    \begin{displaymath}
{\bf A}({\bf X}). Y = b({\bf X})
\end{displaymath} (7.1)

    where the ${\bf A}, b$ elements are functions of the unknowns ${\bf X}$. The above equation describes a family of linear systems and the enclosure of this interval linear system is a box that encloses the solutions for $Y$ of each member of the family of linear systems.

    A classical scheme for finding the enclosure is to use an interval variant of the Gauss elimination scheme [18].

    When the unknowns lie in given ranges we may compute an interval evaluation ${\bf A}^{(0)}$ of ${\bf A}$ and an interval evaluation $b^{(0)}$ of $b$. The Gauss elimination scheme may be written as [18]

        $\displaystyle A_{ik}^{(j)}=A_{ik}^{(j-1)}-A_{ij}^{(j-1)}A_{jk}^{(j-1)}/A_{jj}^{(j-1)}
  \quad \forall i,k\ {\rm with}\ i,k>j$ (7.2)
        $\displaystyle b_i^{(j)}=b_i^{(j-1)}-A_{ij}^{(j-1)}b_j^{(j-1)}/A_{jj}^{(j-1)}
  \quad \forall i\ {\rm with}\ i>j$ (7.3)

    The enclosure of the variable $Y_j$ can then be obtained from $Y_{j+1},\ldots,Y_n$ by
    \begin{displaymath}
Y_j=(b_j^{(j-1)}-\sum_{k>j} A_{jk}^{(j-1)}Y_k)/A_{jj}^{(j-1)}
\end{displaymath} (7.4)

    Note that the elements at iteration $j$ can be computed only if the interval $A_{jj}^{(j-1)}$ does not include 0.
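
    The scheme (7.2-7.4) may be sketched as follows with a toy interval type; no directed rounding is performed here, contrary to a real interval library, so this is an illustration of the algorithm and not a rigorous implementation (the type and function names are assumptions of this sketch).

     
    // Interval Gauss elimination (7.2)-(7.4) with a minimal interval type.
    #include <vector>
    #include <algorithm>
    #include <stdexcept>
    
    struct Itv { double lo, hi; };
    
    static Itv isub(Itv a, Itv b){ return {a.lo-b.hi, a.hi-b.lo}; }
    static Itv imul(Itv a, Itv b){
      double v[4]={a.lo*b.lo, a.lo*b.hi, a.hi*b.lo, a.hi*b.hi};
      return {*std::min_element(v,v+4), *std::max_element(v,v+4)};
    }
    static Itv idiv(Itv a, Itv b){
      if (b.lo<=0 && b.hi>=0) throw std::runtime_error("pivot contains 0");
      return imul(a, {1.0/b.hi, 1.0/b.lo});
    }
    
    // A is n x n, b has size n; returns a box Y enclosing the solutions.
    std::vector<Itv> gauss_enclosure(std::vector<std::vector<Itv> > A, std::vector<Itv> b)
    {
      const int n = (int)b.size();
      for (int j = 0; j < n-1; ++j)             // elimination step, (7.2) and (7.3)
        for (int i = j+1; i < n; ++i) {
          for (int k = j+1; k < n; ++k)
            A[i][k] = isub(A[i][k], idiv(imul(A[i][j], A[j][k]), A[j][j]));
          b[i] = isub(b[i], idiv(imul(A[i][j], b[j]), A[j][j]));
        }
      std::vector<Itv> Y(n);
      for (int j = n-1; j >= 0; --j) {          // back substitution, equation (7.4)
        Itv s = b[j];
        for (int k = j+1; k < n; ++k) s = isub(s, imul(A[j][k], Y[k]));
        Y[j] = idiv(s, A[j][j]);
      }
      return Y;
    }
    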

    A drawback of this scheme is that the family of linear systems obtained for all instances of ${\bf X}$ is usually a subset of the family of linear systems defined by ${\bf A}^{(0)}$, $b^{(0)}$, as we do not take into account the dependency between the elements of ${\bf A}, b$. Furthermore the calculation in the scheme involves products, sums and ratios of elements of ${\bf A}, b$, and their direct interval evaluation again does not take into account the dependency between the elements. Hence a direct application of the Gauss scheme will usually lead to an overestimation of the enclosure.

    A possible method to reduce this overestimation is to consider the system

    \begin{displaymath}
J{\bf A}({\bf X}). Y = J b({\bf X})
\end{displaymath}

    where $J$ is an arbitrary $n \times n$ matrix. The above system has the same solutions as the system (7.1), but for some matrices $J$ the enclosure of the above interval system may be included in the enclosure obtained by the Gauss elimination scheme on the initial system: this is called pre-conditioning of the initial system. It is usually considered that a good candidate for $J$ is the inverse of the matrix whose elements are the mid-points of the ranges of the elements of ${\bf A}$.

    But even with a good choice of $J$ the dependency between the elements of ${\bf A}, b$ is not taken into account and hence the overestimation of the enclosure may be large.

    A first possible way to reduce this overestimation is to improve the interval evaluation of ${\bf A}^{(0)}$, $b^{(0)}$ by using the derivatives of their elements with respect to ${\bf X}$ and the procedure described in section 2.4.2.3. Note that the procedures necessary to compute the elements of ${\bf A}^{(0)}$, $b^{(0)}$ and their derivatives may be obtained by using the MakeF, MakeJ procedures of ALIAS-maple.

    But to improve the efficiency of the procedure it must be noticed that at iteration $j$ an interval evaluation of the derivatives of $A_{ik}, b_i$ with respect to ${\bf X}$ may be deduced from the derivatives of the elements computed at iteration $j-1$. As we have the derivatives of the elements at iteration 0, we may then deduce the derivatives of the elements at any iteration and use these derivatives to improve the interval evaluation of these elements (see sections 2.4.1.1 and 2.4.2.3).

    Implementation

    The simplest Gaussian elimination scheme is implemented as:

     
    int Gauss_Elimination(INTERVAL_MATRIX &Ain,
                         INTERVAL_VECTOR &b,INTERVAL_VECTOR &bout)
    
    where Ain is the interval matrix of the system, b its right-hand side and bout the computed enclosure of the solution. The procedure returns 1 if it has succeeded in finding the enclosure, 0 otherwise (one of the interval pivots in the Gaussian scheme includes 0).
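
    For instance, the enclosure of a small interval linear system may be computed as follows (the ranges are arbitrary).

     
    // Sketch: enclosure of the solutions of a 2x2 interval linear system.
    int small_system()
    {
      INTERVAL_MATRIX Ain(2,2);
      INTERVAL_VECTOR b(2), bout(2);
    
      Ain(1,1)=INTERVAL(3,4); Ain(1,2)=INTERVAL(1,2);
      Ain(2,1)=INTERVAL(3,4); Ain(2,2)=INTERVAL(3,4);
      b(1)=INTERVAL(3,4);     b(2)=INTERVAL(1,2);
    
      if (!Gauss_Elimination(Ain, b, bout)) return 0;  // an interval pivot contained 0
      // bout(1), bout(2) now enclose Y_1, Y_2 for every system in the family
      return 1;
    }
    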

    This computation for the initial system and a pre-conditioned system has been implemented in the procedure:

     
    int Gauss_Elimination_Derivative(MATRIX &Cond,INTERVAL_MATRIX &Ain,
                  INTERVAL_MATRIX &ACondin,
                   const INTERVAL_VECTOR bin, 
                   const INTERVAL_VECTOR bCondin, 
                  INTERVAL_VECTOR &bout, 
                  INTERVAL_VECTOR & Param,
                  INTERVAL_VECTOR (* Func)(int l1, int l2, INTERVAL_VECTOR & v_IS),
                  INTERVAL_MATRIX (* JFunc)(int l1, int l2, INTERVAL_VECTOR & v_IS),
                  INTERVAL_VECTOR (* bFunc)(int l1, int l2, INTERVAL_VECTOR & v_IS),
                  INTERVAL_MATRIX (* JbFunc)(int l1, int l2, INTERVAL_VECTOR & v_IS))
    
    where Cond is the pre-conditioning matrix, Ain and bin define the initial interval system, ACondin and bCondin the pre-conditioned one, bout will contain the computed enclosure, Param gives the ranges of the unknowns ${\bf X}$, and Func, JFunc (respectively bFunc, JbFunc) are procedures that return the interval evaluation of the elements of ${\bf A}$ (respectively $b$) and of their derivatives with respect to ${\bf X}$.

    This procedure will return 1 if the Gauss elimination scheme has been completed, 0 otherwise. It will in general return a much better enclosure than the classical interval Gauss elimination scheme.

    For example consider the system

    \begin{displaymath}
\left[ \begin{array}{cc}
x & y \\
x & x
\end{array}
\right] Y =
\left[
\begin{array}{c}
x \\
y
\end{array}
\right]
\end{displaymath}

    for $x$ in [3,4] and $y$ in [1,2]. Using the classical Gauss elimination scheme the enclosure of the solution is:

    \begin{displaymath}
Y_1=[0.76923,10]    Y_2=[-13,-0.0769]
\end{displaymath}

    while if we use the derivatives we get

    \begin{displaymath}
Y_1=[0.9166,2.6666]    Y_2=[-2,-.66666]
\end{displaymath}

    while the exact solutions are $Y_1=(y+x)/x$ (whose enclosure is [1.25,1.666]) and $Y_2=-1$.

    The ALIAS-Maple procedure LinearBound implements this calculation.


    Regularity of parametric interval matrices

    A parametric interval matrix is a matrix whose elements are functions of unknowns whose values lie in some pre-defined ranges. It therefore has more structure than a classical interval matrix (i.e. a matrix whose elements are intervals), as there may be dependencies between its elements.

    A sophisticated procedure for checking the regularity of a parametric matrix is based on the following scheme: the sign of the determinant is calculated for one point value of the input parameters (say positive), then the algorithm tries to find another input value such that the sign of the corresponding determinant is negative. If no such value can be found, the determinant cannot vanish over the domain and all the matrices in the set are regular.

    Implementation

     
    int Matrix_Is_Regular(int dimA,
          INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
          int (* A_Cond)(int dimA,INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),INTERVAL_VECTOR & v_IS,INTERVAL_MATRIX &K),
          INTERVAL_MATRIX (* A_Cond_Left)(INTERVAL_VECTOR & v_IS_Left),
          INTERVAL_MATRIX (* A_Cond_Right)(INTERVAL_VECTOR & v_IS_Right),
          INTERVAL Determinant_Matrix(INTERVAL_MATRIX &A,INTERVAL_VECTOR &X),
          INTERVAL Determinant_A_Cond_Left(INTERVAL_MATRIX &A,INTERVAL_VECTOR &X),
          INTERVAL Determinant_A_Cond_Right(INTERVAL_MATRIX &A,INTERVAL_VECTOR &X),
          int Type_Cond,
            INTEGER_VECTOR &Type_Determinant,
          int Iteration,
          INTERVAL_VECTOR &X,
          int (* Simp)(int dimA,INTERVAL_MATRIX & Acond,INTERVAL_VECTOR
    & v_IS))
    
    This procedure returns 1 if all matrices are regular, -1 if a singular matrix has been found, -2 if the algorithm has failed and -3 if regularity cannot be ascertained because, even for a scalar instance of X, the sign of the determinant cannot be determined.

    The flag Simp_In_Cond may be used when writing the simplification procedure: it is set to 0 when the simplification procedure is used to check the initial matrix $A$, to -1 or -2 when testing the regularity of the conditioning matrix, to 1 when testing the regularity of AK and to 2 for KA.

    This procedure may also be used with the following syntaxes:

     
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
        int (* A_Cond)(int dimA,INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),INTERVAL_VECTOR & v_IS,INTERVAL_MATRIX &A),
        INTERVAL_MATRIX (* A_Cond_Left)(INTERVAL_VECTOR & v_IS_Left),
        INTERVAL_MATRIX (* A_Cond_Right)(INTERVAL_VECTOR & v_IS_Right),
        INTERVAL Determinant_Matrix(INTERVAL_MATRIX &A,INTERVAL_VECTOR &X),
        INTERVAL Determinant_A_Cond_Left(INTERVAL_MATRIX &A,INTERVAL_VECTOR &X),
        INTERVAL Determinant_A_Cond_Right(INTERVAL_MATRIX &A,INTERVAL_VECTOR &X),
        int Type_Cond,
            INTEGER_VECTOR &Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain)
    
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
        int (* A_Cond)(int dimA,INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),INTERVAL_VECTOR & v_IS,INTERVAL_MATRIX &A),
        INTERVAL_MATRIX (* A_Cond_Left)(INTERVAL_VECTOR & v_IS_Left),
        INTERVAL_MATRIX (* A_Cond_Right)(INTERVAL_VECTOR & v_IS_Right),
        int Type_Cond,
            INTEGER_VECTOR &Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain,
        int (* Simp)(int dimA,INTERVAL_MATRIX & Acond,INTERVAL_VECTOR
    & v_IS))
    
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
            int Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain)
    
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
            int Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain,
        int (* Simp)(int dimA,INTERVAL_MATRIX & Acond,INTERVAL_VECTOR
    &v_IS))
    
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
        int (* A_Cond)(int dimA,INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),INTERVAL_VECTOR & v_IS,INTERVAL_MATRIX &A),
        INTERVAL_MATRIX (* A_Cond_Left)(INTERVAL_VECTOR & v_IS_Left),
        INTERVAL_MATRIX (* A_Cond_Right)(INTERVAL_VECTOR & v_IS_Right),
        int Type_Cond,
            INTEGER_VECTOR &Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain)
    
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
        INTERVAL_MATRIX (* A_Cond_Left)(INTERVAL_VECTOR & v_IS_Left),
        INTERVAL_MATRIX (* A_Cond_Right)(INTERVAL_VECTOR & v_IS_Right),
        int Type_Cond,
            INTEGER_VECTOR &Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain)
    
    int Matrix_Is_Regular(int dimA,
        INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
        INTERVAL_MATRIX (* A_Cond_Left)(INTERVAL_VECTOR & v_IS_Left),
        INTERVAL_MATRIX (* A_Cond_Right)(INTERVAL_VECTOR & v_IS_Right),
        int Type_Cond,
            INTEGER_VECTOR &Type_Determinant,
        int Iteration,
        INTERVAL_VECTOR &Domain,
        int (* Simp)(int dimA,INTERVAL_MATRIX & Acond,INTERVAL_VECTOR
    &v_IS))
    
    Note that the 3B method is implemented as a built-in procedure called ALIAS_3B_Regular_Matrix, but is usually not very efficient.


    Rohn simplification procedure

    Mathematical background

    Let $H$ be the set of vectors whose components are either -1 or 1, and let ${\bf A^{uv}}$, ${\bf u},{\bf v}\in H$, be the set of scalar matrices whose elements are

    \begin{displaymath}
A^{{\bf u}{\bf v}}_{ij}=
\left\{
\begin{array}{ll}
\overline{a_{ij}} & {\rm if}\ u_iv_j= -1\\
\underline{a_{ij}} & {\rm if}\ u_iv_j= 1
\end{array}
\right.
\end{displaymath}

    The set of matrices ${\bf A^{uv}}$ has $2^{2n-1}$ elements. If all the matrices in this set have a determinant of the same sign, then all the matrices in the interval matrix ${\bf A}$ are regular [22].
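
    The test may be sketched as follows in plain C++ (with a naive recursive determinant, hence only usable for small dimensions): the code enumerates the $2^{2n-1}$ sign patterns and checks that all the corresponding vertex determinants have the same non-zero sign.

     
    // Rohn-type regularity test sketch for an interval matrix given by its
    // lower bounds lo[i][j] and upper bounds hi[i][j] (0-based, n x n, n small).
    #include <vector>
    
    typedef std::vector<std::vector<double> > Mat;
    
    static double det(const Mat &M)                    // naive Laplace expansion
    {
      int n = (int)M.size();
      if (n == 1) return M[0][0];
      double d = 0.0;
      for (int c = 0; c < n; ++c) {
        Mat S(n-1, std::vector<double>(n-1));
        for (int i = 1; i < n; ++i)
          for (int j = 0, k = 0; j < n; ++j)
            if (j != c) S[i-1][k++] = M[i][j];
        d += ((c % 2) ? -1.0 : 1.0) * M[0][c] * det(S);
      }
      return d;
    }
    
    // true if all 2^(2n-1) vertex matrices A^{uv} have determinants of the same sign
    bool rohn_regular(const Mat &lo, const Mat &hi)
    {
      int n = (int)lo.size();
      int first_sign = 0;
      for (long u = 0; u < (1L << (n-1)); ++u)         // u_1 is fixed to +1
        for (long v = 0; v < (1L << n); ++v) {
          Mat A(n, std::vector<double>(n));
          for (int i = 0; i < n; ++i) {
            int ui = (i == 0) ? 1 : (((u >> (i-1)) & 1) ? 1 : -1);
            for (int j = 0; j < n; ++j) {
              int vj = ((v >> j) & 1) ? 1 : -1;
              A[i][j] = (ui*vj == -1) ? hi[i][j] : lo[i][j];
            }
          }
          double d = det(A);
          int s = (d > 0) - (d < 0);
          if (s == 0) return false;                    // a vertex determinant is 0
          if (first_sign == 0) first_sign = s;
          else if (s != first_sign) return false;      // sign change: test fails
        }
      return true;
    }
    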

    Note that another simplification procedure may be built by using the spectral radius (see section 7.6).

    Implementation

    The Rohn test is implemented as:
     
    int Rohn_Consistency(int dimA,INTERVAL_MATRIX &A)
    
    the procedure returning -1 if all the matrices in A are regular, 0 otherwise.

    To speed up the procedure a remembering mechanism has been implemented. If the flag ALIAS_Rohn_Remembering is set to 1, the procedure will store in the array ALIAS_Rohn_Matrix a set of at most ALIAS_Rohn_Remembering_Step matrices for which the Rohn test has already been performed, together with the step at which the test failed (the determinants of the scalar matrices are always calculated in the same order). This may avoid a large number of determinant calculations.

    Regularity of matrix with linear elements

    Mathematical background

    Let us assume that a parametric matrix has rows (or columns) in which some elements depend linearly on some of the unknowns, and let us consider the set $H$ of interval matrices obtained as follows: It can then be shown that if all the matrices in $H$ have determinants of the same sign, then the set of parametric matrices does not include any singular matrix. This test is more powerful than the test proposed by Rohn (see section 7.4.2) as it better takes the structure of the parametric matrix into account.

    Implementation

    A regularity test based on this approach is implemented as:

     
    int ALIAS_Check_Regularity_Linear_Matrix(int DimA,
       INTERVAL_VECTOR (* Func)(int l1, int l2,INTERVAL_VECTOR & v_IS),
       int (* A_Cond)(int dimA,
          INTERVAL_VECTOR (* Func1)(int l1, int l2,INTERVAL_VECTOR &v_IS),
          INTERVAL_VECTOR & v_IS,INTERVAL_MATRIX &A),
       int Row_Or_Column,
       int Context,
       INTEGER_MATRIX &Implication_Var,
       int Use_Rohn,
       INTERVAL_VECTOR &Domain)
    
    This procedure returns -1 if all the matrices in $A$ are regular, 0 otherwise.

    Characteristic polynomial

    It is possible to calculate the coefficients of the characteristic polynomial of a real or interval matrix using the procedures:

     
    INTERVAL_VECTOR Poly_Characteristic(INTERVAL_MATRIX &A)
    INTERVAL_VECTOR Poly_Characteristic(MATRIX &A)
    

    In both cases the coefficients are returned as an interval vector which contains the coefficients of the polynomial in increasing order of degree.
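
    A short usage sketch (the matrix values are arbitrary, and the returned vector is assumed to have $n+1$ components ordered by increasing degree):

     
    // Sketch: coefficients of the characteristic polynomial of a 2x2 scalar matrix.
    INTERVAL_VECTOR characteristic_poly()
    {
      MATRIX A(2,2);
      A(1,1)=1; A(1,2)=2;
      A(2,1)=3; A(2,2)=4;
      // for this matrix the characteristic polynomial is x^2 - 5x - 2, returned
      // as (degenerate) interval coefficients in increasing order of degree
      return Poly_Characteristic(A);
    }
    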


    Spectral radius

    Safe calculation of the spectral radius of a square real or interval matrix may be obtained with the procedures

     
    int Spectral_Radius(INTERVAL_MATRIX &AA,double eps,double *ro,int iter)
    int Spectral_Radius(MATRIX &AA,double eps,double *ro,int iter)
    
    where eps is the requested accuracy, ro will contain the computed value and iter is the maximal number of iterations. These procedures return 1 on success, 0 on failure (in which case increasing eps may be a good option) and -1 if the matrix is not square. The procedures
     
    int Spectral_Radius(INTERVAL_MATRIX &AA,double eps,double *ro)
    int Spectral_Radius(MATRIX &AA,double eps,double *ro)
    
    may also be used, with the maximal number of iterations fixed to 100.

    If it is intended only to show that the spectral radius is larger than a given value seuil, then you may use

     
    int Spectral_Radius(INTERVAL_MATRIX &A,double eps,double *ro,double seuil);
    int Spectral_Radius(MATRIX &A,double eps,double *ro,double seuil);
    int Spectral_Radius(INTERVAL_MATRIX &A,double eps,double *ro,int iter,double seuil);
    int Spectral_Radius(MATRIX &A,double eps,double *ro,int iter,double seuil);
    
    Note that the calculation of the spectral radius may be used to check the regularity of an interval matrix. Indeed, let ${\bf A}$ be an interval matrix of dimension $n$ and $I_n$ the identity matrix of dimension $n$. If ${\bf A_c}$ is the mid-matrix of ${\bf A}$, we may write

    \begin{displaymath}
{\bf A}=[{\bf A_c}-\Delta,{\bf A_c}+\Delta]
\end{displaymath}

    Let ${\bf R}$ be an arbitrary matrix. It can be shown that if

    \begin{displaymath}
\rho(\vert I_n-{\bf R}{\bf A_c}\vert+\vert{\bf R}\vert\Delta)<1
\end{displaymath}

    where $\rho$ denotes the spectral radius, then ${\bf A}$ is regular [22]: this is known as the Beeck-Ris test.
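
    This test may be sketched as follows, using the Spectral_Radius procedure above; the pre-conditioning matrix ${\bf R}$ is taken as the inverse of the mid-matrix (computed with the Inverse function of BIAS/Profil mentioned above), which is a common choice. The library headers are omitted and this is only an illustration, not a rigorous implementation.

     
    // Beeck-Ris regularity test sketch: A is regular if rho(|I - R*Ac| + |R|*Delta) < 1.
    #include <cmath>
    
    int beeck_ris_regular(int n, INTERVAL_MATRIX &A)
    {
      MATRIX Ac(n,n), Delta(n,n);
      for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) {
          Ac(i,j)    = 0.5*(Inf(A(i,j)) + Sup(A(i,j)));   // mid-matrix A_c
          Delta(i,j) = 0.5*(Sup(A(i,j)) - Inf(A(i,j)));   // radius matrix Delta
        }
    
      MATRIX R = Inverse(Ac);            // pre-conditioning matrix (BIAS/Profil Inverse)
      MATRIX P = R*Ac;                   // should be close to the identity
      MATRIX T(n,n);                     // T = |I - R*Ac| + |R|*Delta, built element-wise
      for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) {
          double iden = (i == j) ? 1.0 : 0.0;
          double s = 0.0;
          for (int k = 1; k <= n; k++) s += fabs(R(i,k))*Delta(k,j);
          T(i,j) = fabs(iden - P(i,j)) + s;
        }
    
      double ro;
      // safe evaluation of the spectral radius of T, as described in this section
      if (Spectral_Radius(T, 1e-8, &ro, 100) != 1) return 0;
      // a fully rigorous test should also account for the accuracy eps
      return (ro < 1.0) ? 1 : 0;         // 1: A has been proved regular
    }
    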


    jean-pierre merlet
    2018-07-25