Applications of the Quillen-Suslin theorem to multidimensional systems theory

French title: Applications du théorème de Quillen-Suslin à la théorie des systèmes multidimensionnels

Authors: Anna Fabiańska and Alban Quadrat

Location: Sophia Antipolis

Inria Research Theme: THnum

Inria Research Report Number: 6126

Team: APICS

Date: February 2007

Keywords: Constructive versions of the Quillen-Suslin theorem, Lin-Bose's conjecture, multidimensional linear systems, flat systems, (weakly) doubly coprime factorizations of rational transfer matrices, factorization and decomposition of linear functional systems, symbolic computation.

French keywords: Versions constructives du théorème de Quillen-Suslin, conjecture de Lin-Bose, systèmes linéaires multidimensionnels, systèmes plats, factorisations doublement (faiblement) copremières de matrices de transfert rationnelles, factorisation et décomposition des systèmes linéaires fonctionnels, calcul formel..

Abstract

The purpose of this paper is to give four new applications of the Quillen-Suslin theorem to mathematical systems theory. Using a constructive version of the Quillen-Suslin theorem, also known as Serre's conjecture, we show how to effectively compute flat outputs and injective parametrizations of flat multidimensional linear systems. We prove that a flat multidimensional linear system is algebraically equivalent to the controllable 1-D linear systems obtained by setting all but one functional operator to zero in the polynomial matrix defining the system. In particular, we show that a flat ordinary differential time-delay linear system is algebraically equivalent to the corresponding ordinary differential system without delay, i.e., the controllable ordinary differential linear system obtained by setting all the delay amplitudes to zero. We also give a constructive proof of a generalization of Serre's conjecture known as Lin-Bose's conjecture. Moreover, we show how to constructively compute (weakly) left-/right-/doubly coprime factorizations of rational transfer matrices over a commutative polynomial ring. The Quillen-Suslin theorem also plays a central part in the so-called decomposition problem of linear functional systems studied in the literature of symbolic computation. In particular, we show how the basis computation of certain free modules, coming from projectors of the endomorphism ring of the module associated with the system, allows us to obtain unimodular matrices which transform the system matrix into an equivalent block-triangular or block-diagonal form. Finally, we demonstrate the package QuillenSuslin which, to our knowledge, contains the first implementation of the Quillen-Suslin theorem in a computer algebra system, as well as the different algorithms developed in the paper.

French Abstract

Le but de ce papier est de donner quatre nouvelles applications du théorème de Quillen-Suslin à la théorie mathématique des systèmes. A l´aide d´une version constructive du théorème de Quillen-Suslin, aussi connu sous le nom de conjecture de Serre, nous montrons comment calculer de manière effective les sorties plates et les paramétrisations injectives des systèmes linéaires multidimensionnels plats. Nous prouvons que tout système linéaire multidimensionnel plat est algébriquement équivalent aux systèmes linéaires 1-D contrôlables obtenus par annulation de tous les opérateurs fonctionnels sauf un dans la matrice polynômiale définissant le système. En particulier, nous montrons que tout système linéaire différentiel à retard plat est algébriquement équivalent au système différentiel sans retard, c´est-à-dire, au système linéaire contrôlable d´équations différentielles obtenu en annulant les amplitudes des retards. Nous donnons aussi une preuve constructive d´une généralisation de la conjecture de Serre appelée conjecture de Lin-Bose. De plus, nous montrons comment calculer de manière effective des factorisations (faiblement) copremières à gauche et à droite de matrices de transfert rationnelles sur une algèbre commutative de polynômes. Le théorème de Quillen-Suslin joue aussi un rôle important dans l´étude du problème de décomposition des systèmes linéaires fonctionnels étudié dans la littérature du calcul formel. En particulier, nous montrons comment le calcul de bases de certains modules libres, provenant de projecteurs de l´anneau des endomorphismes du module associé au système, nous permet de calculer des matrices unimodulaires qui transforment la matrice du système en une matrice équivalente ayant une forme bloc-triangulaire ou bloc-diagonale. Finalement, nous décrivons le logiciel QuillenSuslin qui, à notre connaissance, contient la première implémentation du théorème de Quillen-Suslin dans un système de calcul formel, ainsi que les différents algorithmes obtenus dans le papier.

Note: The first author is very grateful to W. Plesken for his support. She would also like to express thanks to F. Castro-Jimenez and J. Gago-Vargas for the opportunity to discuss with them on the Quillen-Suslin theorem and to A. van den Essen for pointing out to her an interesting example that has been included in the paper..

Note: The second author would like to thank Y. Lam, T. Coquand, H. Lombardi and I. Yengui for interesting discussions on the Quillen-Suslin theorem and Z. Lin and H. Park for discussions on applications of the Quillen-Suslin theorem to mathematical systems theory, control theory and signal processing..


Short Table of Contents


1. Introduction
2. A module-theoretic approach to systems theory
3. The Quillen-Suslin theorem
4. Flat multidimensional linear systems
5. Pommaret's theorem on Lin-Bose's conjecture
6. Computation of (weakly) doubly coprime factorizations of rational transfer matrices
7. Decomposition of multidimensional linear systems
8. Conclusion
9. Appendix: QuillenSuslin, a package for computing bases of free modules over commutative polynomial rings

1. Introduction

In 1784, Monge studied the integration of certain underdetermined non-linear systems of ordinary differential equations, namely, systems containing more unknown functions than differential independent equations ([31]). He showed how the solutions of these systems could be parametrized by means of a certain number of arbitrary functions of the independent variable. This problem was called the Monge problem and it was studied by famous mathematicians such as Hadamard, Hilbert, Cartan and Goursat. In particular, motivated by problems coming from linear elasticity theory, Hadamard considered the case of linear ordinary differential equations and Goursat investigated underdetermined systems of partial differential equations. We refer the reader to [31] for a historical account on the Monge problem and for the main references.

Within the algebraic analysis approach ([2], [21], [30], [35]), the Monge problem was recently studied for underdetermined systems of linear partial differential equations in [21], [35], [44], [45], [46] and for linear functional systems in [5], [6] (e.g., differential time-delay systems, discrete systems). Depending on the algebraic properties of a certain module M defined over a ring D of functional operators and intrinsically associated with the linear functional system, we can prove or disprove the existence of different kinds of parametrizations of the system (i.e., minimal or injective parametrizations, non-minimal parametrizations, chains of successive parametrizations). Constructive algorithms for checking these algebraic properties (i.e., torsion, existence of torsion elements, torsion-free, reflexive, projective, stably free, free) and computing the different parametrizations were recently developed in [5], [44], [45], [46], implemented in the package OreModules ([5], [6]) and illustrated on numerous examples coming from mathematical physics and control theory ([5], [6]). Finally, we proved in [5], [44], [45], [46] how the Monge problem gave answers for the search of potentials in mathematical physics and image representations in control theory ([41], [42], [65], [66]).

These last results show that the Monge problem is constructively solved for certain classes of linear functional systems up to a last but important point: we can check whether or not a linear functional system admits injective parametrizations, but we are generally not able to compute one, even though some heuristic methods were presented in [5], [44], [45]. Indeed, the existence of injective parametrizations for a linear functional system was proved to be equivalent to the freeness of the corresponding module M. In the case of a linear functional system with constant coefficients, the corresponding ring D of functional operators is a commutative polynomial ring over a field k of constants. Using the famous Quillen-Suslin theorem ([56], [58]), also known as Serre's conjecture ([24], [25]), we then know that projective D-modules are free. Using Gröbner or Janet bases ([5], [11], [44]), we can check whether or not a module over a commutative polynomial ring is projective. See [3], [11], [20] and the references therein for introductions to Janet and Gröbner bases. Hence, we can constructively prove the existence of an injective parametrization for a linear functional system. However, we need to use a constructive version of the Quillen-Suslin theorem ([15], [19], [23], [27], [29], [37], [61], [62]) to get injective parametrizations of the corresponding system.

The main purpose of this paper is to recall a general algorithm for computing bases of a free module over a commutative polynomial ring, give four new applications of the Quillen-Suslin theorem to mathematical systems theory and demonstrate the implementation of the QuillenSuslin package ([13]) developed in the computer algebra system MAPLE. To our knowledge, the QuillenSuslin package is the first package available which performs basis computation of free modules over a commutative polynomial ring with rational and integer coefficients and is dedicated to different applications coming from mathematical systems theory.

More precisely, the plan of the paper is the following. In the second section, we recall how the structural properties of linear functional systems can be constructively studied within the algebraic analysis approach, as well as different results on the Monge problem. A constructive version of the Quillen-Suslin theorem, which is the main tool we use in the paper, is presented in the third section and its implementation is illustrated on many examples in the Appendix of the paper. We also describe some heuristic methods that highly simplify the computation of a basis of a free module over a polynomial ring in certain special cases. The constructive version of the Quillen-Suslin theorem and, in particular, the patching procedure give us the opportunity to make a new observation concerning linear functional systems which admit injective parametrizations, also called flat multidimensional systems in mathematical systems theory. In the fourth section, we prove that a flat multidimensional system is algebraically equivalent to a 1-D flat linear system obtained by setting all but one functional operator to zero in the system matrix. This result gives an answer to a natural question on flat multidimensional systems. In particular, we prove that every flat differential time-delay system is algebraically equivalent to the differential system without delays, namely, the system obtained by setting all the time-delay amplitudes to zero. In the fifth section, we consider a generalization of Serre's conjecture. We recall that Serre's conjecture, also known as the Quillen-Suslin theorem, can be expressed in the language of matrices as follows: every matrix $R$ over a commutative polynomial ring $D = k[x_1, \ldots, x_n]$ whose maximal minors generate $D$ (a so-called unimodular matrix) can be completed to a square matrix which is invertible over $D$ (i.e., whose determinant is a non-zero element of the field $k$). The generalization, stated by Lin and Bose in [26] and first proved by Pommaret in [43] by means of algebraic analysis, can be formulated as the possibility of completing a matrix $R$, whose maximal minors divided by their greatest common divisor $d$ generate $D$, to a square polynomial matrix whose determinant equals $d$. Serre's conjecture is then the special case $d = 1$. Using the Quillen-Suslin theorem, we give a constructive algorithm for computing such a completion. Using the basis computations available in QuillenSuslin, this algorithm has been implemented in the package. In the sixth section, we study the existence of (weakly) left-/right-coprime factorizations of rational transfer matrices using recent results developed in [50]. We give algorithms for computing such factorizations using the constructive version of the Quillen-Suslin theorem. These results constructively solve open questions in the literature of multidimensional linear systems (see [63], [64] and the references therein). Finally, we show that the constructive Quillen-Suslin theorem also plays an important role in the decomposition problem of linear functional systems studied in the literature of symbolic computation. See [9] and the references therein for more details. The main idea is to transform the system matrix into an equivalent block-triangular or block-diagonal form ([9], [10]).

The different algorithms presented in the paper have been implemented in the package QuillenSuslin based on the library Involutive ([3]) (an OreModules ([6]) version will soon be available). The Appendix illustrates the main procedures of the QuillenSuslin package on different examples taken from the literature ([19], [23], [38], [61]). The package QuillenSuslin also contains a completion algorithm for unimodular matrices over Laurent polynomial rings described in [36], [38]. See also [1] for a recent algorithm. In [38], Park explains the importance and the meaning of the completion problem of unimodular matrices over Laurent polynomial rings for signal processing and gives an algorithm for translating this problem into a polynomial one. Park's results can also be used for computing flat outputs of δ-flat multidimensional linear systems ([32], [33]). See [5] for another constructive algorithm and [6] for illustrations on different explicit examples.

1.1. Notation.

In what follows, we shall denote by $k$ a field, by $D = k[x_1, \ldots, x_n]$ a commutative polynomial ring with coefficients in $k$, by $D^{1 \times p}$ the $D$-module formed by the row vectors of length $p$ with entries in $D$ and by $D^{q \times p}$ the set of $q \times p$ matrices with entries in $D$. $F$ will always denote a $D$-module. We denote by $R^T$ the transpose of a matrix $R$ and by $I_p$ the $p \times p$ identity matrix. Finally, the symbol $\triangleq$ means ``by definition''.

2. A module-theoretic approach to systems theory

Let $D = k[x_1, \ldots, x_n]$ be a commutative polynomial ring over a field $k$ and $R \in D^{q \times p}$. We recall that the matrix $R$ is said to have full row rank if the first syzygy module of the $D$-module $D^{1 \times q} R$ formed by the $D$-linear combinations of the rows of $R$, namely,

$$\ker_D(.R) \triangleq \{\lambda \in D^{1 \times q} \mid \lambda\, R = 0\},$$

is reduced to $0$. In other words, $\lambda\, R = 0$ implies $\lambda = 0$, i.e., the rows of $R$ are $D$-linearly independent.

The following definitions of primeness are classical in systems theory.

Definition 1 ([34], [62], [66])

Let $D = \mathbb{R}[x_1, \ldots, x_n]$ be a commutative polynomial ring, $R \in D^{q \times p}$ a full row rank matrix, $J$ the ideal generated by the $q \times q$ minors of $R$ and $V(J)$ the algebraic variety defined by:

$$V(J) = \{\xi \in \mathbb{C}^n \mid P(\xi) = 0, \ \forall\, P \in J\}.$$

  1. $R$ is called minor left-prime if $\dim_{\mathbb{C}} V(J) \leq n - 2$, i.e., the greatest common divisor of the $q \times q$ minors of $R$ is $1$.

  2. $R$ is called weakly zero left-prime if $\dim_{\mathbb{C}} V(J) \leq 0$, i.e., the $q \times q$ minors of $R$ may only vanish simultaneously at a finite number of points of $\mathbb{C}^n$.

  3. $R$ is called zero left-prime if $\dim_{\mathbb{C}} V(J) = -1$, i.e., the $q \times q$ minors of $R$ do not vanish simultaneously on $\mathbb{C}^n$.

The previous classification plays an important role in multidimensional systems theory. See [34], [62], [66] and the references therein for more details.

The purpose of this section is twofold. We first recall how we can generalize the previous classification for general multidimensional linear systems, i.e., systems which are not necessarily defined by full row rank matrices. We also explain the duality existing between the behavioural approach to multidimensional systems ([34], [41], [65], [66]) and the module-theoretic one ([44], [45], [46]). See also [65] for a nice introduction.

In what follows, $D$ will denote a commutative polynomial ring with coefficients in a field $k$. In particular, we shall be interested in commutative polynomial rings of functional operators such as partial differential operators, differential time-delay operators or shift operators. Let us consider a matrix $R \in D^{q \times p}$ and a $D$-module $F$, namely:

$$\forall\, f_1, f_2 \in F, \ \forall\, a_1, a_2 \in D: \quad a_1 f_1 + a_2 f_2 \in F.$$

If we define the following $D$-morphism, namely, $D$-linear map,

$$.R: D^{1 \times q} \longrightarrow D^{1 \times p}, \quad \lambda = (\lambda_1 \ \ldots \ \lambda_q) \longmapsto (.R)(\lambda) = \lambda\, R,$$

where $D^{1 \times p}$ denotes the $D$-module of row vectors of length $p$ with entries in $D$, then the cokernel of the $D$-morphism $.R$ is defined by:

$$M = D^{1 \times p}/(D^{1 \times q} R).$$

The $D$-module $M$ is said to be presented by $R$ or simply finitely presented ([5], [57]). Moreover, we can also define the system or behaviour as follows:

$$\ker_F(R.) \triangleq \{\eta \in F^p \mid R\,\eta = 0\}.$$

As noticed by Malgrange in [30], the $D$-module $M$ and the system $\ker_F(R.)$ are closely related. As this relation will play an important role in what follows, we shall explain it in detail. In order to do that, let us first introduce a few classical definitions of homological algebra. We refer the reader to [57] for more details.

Definition 2

  1. A sequence $(M_i, d_i: M_i \longrightarrow M_{i-1})_{i \in \mathbb{Z}}$ of $D$-modules $M_i$ and $D$-morphisms $d_i: M_i \longrightarrow M_{i-1}$ is a complex if we have:

    $$\forall\, i \in \mathbb{Z}, \quad \operatorname{im} d_i \subseteq \ker d_{i-1}.$$

    We denote the previous complex by:

    $$\cdots \xrightarrow{d_{i+2}} M_{i+1} \xrightarrow{d_{i+1}} M_i \xrightarrow{d_i} M_{i-1} \xrightarrow{d_{i-1}} \cdots \qquad (1)$$

  2. The defect of exactness of the complex (1) at $M_i$ is defined by:

    $$H(M_i) = \ker d_i / \operatorname{im} d_{i+1}.$$

  3. The complex (1) is said to be exact at $M_i$ if we have:

    $$H(M_i) = 0 \Longleftrightarrow \ker d_i = \operatorname{im} d_{i+1}.$$

  4. The complex (1) is exact if:

    $$\forall\, i \in \mathbb{Z}, \quad \ker d_i = \operatorname{im} d_{i+1}.$$

  5. The complex (1) is said to be a split exact sequence if (1) is exact and if there exist $D$-morphisms $s_i: M_{i-1} \longrightarrow M_i$ satisfying the following conditions:

    $$\forall\, i \in \mathbb{Z}, \quad s_{i+1} \circ s_i = 0, \quad s_i \circ d_i + d_{i+1} \circ s_{i+1} = \operatorname{id}_{M_i}.$$

  6. A finite free resolution of a $D$-module $M$ is an exact sequence of the form

    $$0 \longrightarrow D^{1 \times p_m} \xrightarrow{.R_m} \cdots \xrightarrow{.R_2} D^{1 \times p_1} \xrightarrow{.R_1} D^{1 \times p_0} \xrightarrow{\pi} M \longrightarrow 0, \qquad (2)$$

    where $p_i \in \mathbb{Z}_+ = \{0, 1, 2, \ldots\}$, $R_i \in D^{p_i \times p_{i-1}}$, and the $D$-morphism $.R_i$ is defined by:

    $$.R_i: D^{1 \times p_i} \longrightarrow D^{1 \times p_{i-1}}, \quad \lambda \longmapsto (.R_i)(\lambda) = \lambda\, R_i.$$

The next classical result of homological algebra will play a crucial role in what follows.

Theorem 1 ([57])

Let $F$ be a $D$-module, $M$ a $D$-module and (2) a finite free resolution of $M$. Then, the defects of exactness of the following complex

$$\cdots \xleftarrow{R_3.} F^{p_2} \xleftarrow{R_2.} F^{p_1} \xleftarrow{R_1.} F^{p_0} \longleftarrow 0, \qquad (3)$$

where the $D$-morphism $R_i.: F^{p_{i-1}} \longrightarrow F^{p_i}$ is defined by

$$\forall\, \eta \in F^{p_{i-1}}, \quad (R_i.)(\eta) = R_i\, \eta,$$

only depend on $M$ and $F$. Up to an isomorphism, the defects of exactness are denoted by:

$$\operatorname{ext}_D^0(M, F) \cong \ker_F(R_1.), \qquad \operatorname{ext}_D^i(M, F) \cong \ker_F(R_{i+1}.)/(R_i\, F^{p_{i-1}}), \quad i \geq 1.$$

Finally, we have $\operatorname{ext}_D^0(M, F) = \hom_D(M, F)$, where $\hom_D(M, F)$ denotes the $D$-module of $D$-morphisms from $M$ to $F$.

We refer the reader to Example 13 for explicit computations of $\operatorname{ext}_D^i(N, D)$, $i \geq 0$.

Coming back to the $D$-module $M$, we have the following beginning of a finite free resolution of $M$:

$$D^{1 \times q} \xrightarrow{.R} D^{1 \times p} \xrightarrow{\pi} M \longrightarrow 0, \quad \lambda \longmapsto \lambda\, R, \qquad (4)$$

where $\pi$ denotes the $D$-morphism which sends an element of $D^{1 \times p}$ to its residue class in $M$. If we ``apply the left-exact contravariant functor'' $\hom_D(\cdot, F)$ to (4) (see [57] for more details), by Theorem 1, we obtain the following exact sequence:

$$F^q \xleftarrow{R.} F^p \longleftarrow \hom_D(M, F) \longleftarrow 0, \quad \eta \longmapsto R\,\eta.$$

This implies the following important isomorphism ([30]):

$$\ker_F(R.) = \{\eta \in F^p \mid R\,\eta = 0\} \cong \hom_D(M, F). \qquad (5)$$

For more details, see [5], [30], [34], [46], [65] and the references therein. In particular, (5) gives an intrinsic characterization of the $F$-solutions of the system $\ker_F(R.)$: it only depends on two mathematical objects:

  1. The finitely presented D-module M which algebraically represents the linear functional system.

  2. The D-module F which represents the ``functional space´´ where we seek the solutions of the system.

If D is now a ring of functional operators (e.g., differential operators, time-delay operators, difference operators), then the issue of understanding which F is suitable for a particular linear system has long been studied in functional analysis and is still nowadays a very active subject of research. It does not seem that constructive algebra and symbolic computation can propose new methods to handle this functional analysis problem. However, they are very useful for classifying hom D (M,F) by means of the algebraic properties of the D-module M. Indeed, a large classification of the properties of modules is developed in module theory and homological algebra. See [57] for more information. Let us recall a few of them.

Definition 3 ([57])

Let $D$ be a commutative polynomial ring with coefficients in a field $k$ and $M$ a finitely presented $D$-module. Then, we have:

  1. $M$ is said to be free if it is isomorphic to $D^{1 \times r}$ for a non-negative integer $r$, i.e.:

    $$M \cong D^{1 \times r}, \quad r \in \mathbb{Z}_+ = \{0, 1, 2, \ldots\}.$$

  2. $M$ is said to be stably free if there exist two non-negative integers $r$ and $s$ such that:

    $$M \oplus D^{1 \times s} \cong D^{1 \times r}.$$

  3. $M$ is said to be projective if there exist a $D$-module $P$ and a non-negative integer $r$ such that:

    $$M \oplus P \cong D^{1 \times r}.$$

  4. $M$ is said to be reflexive if the canonical map

    $$\varepsilon_M: M \longrightarrow \hom_D(\hom_D(M, D), D),$$

    defined by

    $$\forall\, m \in M, \ \forall\, f \in \hom_D(M, D): \quad \varepsilon_M(m)(f) = f(m),$$

    is an isomorphism, where $\hom_D(M, D)$ denotes the $D$-module of $D$-morphisms from $M$ to $D$.

  5. $M$ is said to be torsion-free if the submodule of $M$ defined by

    $$t(M) = \{m \in M \mid \exists\, 0 \neq P \in D: P\, m = 0\}$$

    is reduced to the zero module. $t(M)$ is called the torsion submodule of $M$ and the elements of $t(M)$ are the torsion elements of $M$.

  6. $M$ is said to be torsion if $t(M) = M$, i.e., every element of $M$ is a torsion element.

Let $K = Q(D) = k(x_1, \ldots, x_n)$ be the quotient field of $D$ ([57]) and $M$ a finitely presented $D$-module. We call the rank of $M$ over $D$, denoted by $\operatorname{rank}_D(M)$, the dimension of the $K$-vector space $K \otimes_D M$ obtained by extending the scalars of $M$ from $D$ to $K$, i.e.:

$$\operatorname{rank}_D(M) = \dim_K(K \otimes_D M).$$

We can check that if $M$ is a torsion $D$-module, then $K \otimes_D M = 0$, a fact which implies that $\operatorname{rank}_D(M) = 0$. See [57] for more details.

Let us recall a few results about the notions previously introduced in Definition 3.

Theorem 2 ([57])

Let D=k[x 1 ,...,x n ] be a commutative polynomial ring with coefficients in a field k. We have the following results:

  1. We have the following implications among the previous concepts:

    $$\text{free} \;\Longrightarrow\; \text{stably free} \;\Longrightarrow\; \text{projective} \;\Longrightarrow\; \text{reflexive} \;\Longrightarrow\; \text{torsion-free}.$$

  2. If $D = k[x_1]$, then $D$ is a principal ideal domain – namely, every ideal of $D$ is principal, i.e., it can be generated by a single element of $D$ – and every finitely generated torsion-free $D$-module is free.

  3. (Serre theorem [11]) Every projective module over D is stably free.

  4. (Quillen-Suslin theorem [56], [58]) Every projective module over D is free.

The famous Quillen-Suslin theorem will play an important role in what follows. We refer to [24], [25] for the best introductions nowadays available on this subject.

The next theorem gives some characterizations of the definitions given in Definition 3.

Theorem 3 ([5], [35], [46])

Let $D = k[x_1, \ldots, x_n]$ be a commutative polynomial ring over a field $k$, $R \in D^{q \times p}$ and consider the finitely presented $D$-modules:

$$M = D^{1 \times p}/(D^{1 \times q} R), \qquad N = D^{1 \times q}/(D^{1 \times p} R^T).$$

We then have the equivalences between the first two columns of Figure 1.

Combining the results of Theorem 3 and the Quillen-Suslin theorem (see 4 of Theorem 2), we then obtain a way to check whether or not a finitely presented D-module M has some torsion elements or is torsion-free, reflexive, projective, stably free or free. We point out that the explicit computation of ext D i (N,D) can always be done using Gröbner or Janet bases. See [5], [44], [45] for more details and for the description of the corresponding algorithms. We also refer the reader to [4], [6] for the library OreModules in which the different algorithms were implemented as well as to the large library of examples of OreModules which illustrates them. Finally, see also [3], [11], [20] and the references therein for an introduction to Gröbner and Janet bases.

Remark 1

The $D$-module

$$N = D^{1 \times q}/(D^{1 \times p} R^T)$$

is called the transposed module of $M = D^{1 \times p}/(D^{1 \times q} R)$, even if $N$ depends on $M$ only up to projective equivalence ([47]): if $M = D^{1 \times r}/(D^{1 \times s} R')$ and $N' = D^{1 \times s}/(D^{1 \times r} R'^T)$, then there exist two projective $D$-modules $P$ and $P'$ such that $N \oplus P \cong N' \oplus P'$ ([57]). However, for every $D$-module $F$, we have $\operatorname{ext}_D^i(N \oplus P, F) \cong \operatorname{ext}_D^i(N, F) \oplus \operatorname{ext}_D^i(P, F)$ and, for $i \geq 1$, $\operatorname{ext}_D^i(P, F) = 0$ as $P$ is a projective $D$-module ([57]). We then get $\operatorname{ext}_D^i(N, F) \cong \operatorname{ext}_D^i(N', F)$ for $i \geq 1$. Hence, the results of Theorem 3 do not depend on the choice of a presentation of $M$, i.e., on $R$. In what follows, we shall sometimes denote $N$ by $T(M)$.

In order to explain why the definitions given in Definition 3 extend the concepts of primeness defined in Definition 1, we first need to introduce some more definitions.

Definition 4 ([2])

  1. If $M$ is a non-zero finitely presented $D$-module, then the grade $j_D(M)$ of $M$ is defined by:

    $$j_D(M) = \min\{i \geq 0 \mid \operatorname{ext}_D^i(M, D) \neq 0\}.$$

  2. If $M$ is a non-zero finitely presented $D$-module, the dimension $\dim_D(M)$ of $M$ is defined by

    $$\dim_D(M) = \operatorname{Kdim}(D/\operatorname{ann}_D(M)),$$

    where $\operatorname{Kdim}$ denotes the Krull dimension ([57]) and:

    $$\operatorname{ann}_D(M) = \{a \in D \mid a\, M = 0\}, \qquad \sqrt{\operatorname{ann}_D(M)} = \{a \in D \mid \exists\, l \in \mathbb{Z}_+: a^l M = 0\}.$$

We are now in position to state an important result.

Theorem 4 ([2], [35])

If $M$ is a non-zero finitely presented $D = k[x_1, \ldots, x_n]$-module, where $k$ is a field containing $\mathbb{Q}$, then we have:

$$j_D(M) + \dim_D(M) = n.$$

Let us suppose that $R$ has full row rank and let us consider the finitely presented $D$-module $M = D^{1 \times p}/(D^{1 \times q} R)$. Using the notations of Definition 1 and the fact that

$$\dim_D(N) = \dim_{\mathbb{C}} V(J),$$

where $N = T(M) = D^{1 \times q}/(D^{1 \times p} R^T)$ is then a torsion $D$-module, i.e., it satisfies $\operatorname{ext}_D^0(N, D) = \hom_D(N, D) = 0$, by Theorem 4, we then obtain:

$$j_D(N) = n - \dim_{\mathbb{C}} V(J) \geq 1.$$

Hence, by Theorems 3 and 4, we obtain that $R$ is minor left-prime (resp., zero left-prime) iff the $D$-module $M$ is torsion-free (resp., projective, i.e., free by the Quillen-Suslin theorem stated in 4 of Theorem 2). See [46] for more details and for the extension of these results to the case of non-commutative rings of differential operators.

We finally obtain the table given in Figure 1 which sums up the different results previously obtained. We note that the last two columns of this table only hold when the matrix R has full row rank.

Figure 1:

| Module M      | ext_D^i(N, D)                          | dim_D(N) | Primeness              |
|---------------|----------------------------------------|----------|------------------------|
| With torsion  | t(M) ≅ ext_D^1(N, D)                   | n − 1    |                        |
| Torsion-free  | ext_D^1(N, D) = 0                      | n − 2    | Minor left-prime       |
| Reflexive     | ext_D^i(N, D) = 0, i = 1, 2            | n − 3    |                        |
| ...           | ...                                    | ...      | ...                    |
| ...           | ext_D^i(N, D) = 0, 1 ≤ i ≤ n − 1       | 0        | Weakly zero left-prime |
| Projective    | ext_D^i(N, D) = 0, 1 ≤ i ≤ n           | −1       | Zero left-prime        |

To finish, we explain what the system interpretations of the definitions given in Definition 3 are. In particular, these interpretations solve the Monge problem stated in the introduction of the paper. In order to do that, we also need to introduce a few more definitions.

Definition 5 ([57])

  1. A $D$-module $F$ is called injective if, for every $D$-module $M$ and for all $i \geq 1$, we have $\operatorname{ext}_D^i(M, F) = 0$.

  2. A $D$-module $F$ is called a cogenerator if, for every $D$-module $M$, we have:

    $$\hom_D(M, F) = 0 \Longrightarrow M = 0.$$

Roughly speaking, an injective cogenerator is a space rich enough for seeking solutions of linear systems of the form $R\,y = 0$, where $R \in D^{q \times p}$ is any matrix and $y \in F^p$. In particular, using (5), if $F$ is a cogenerator $D$-module and $M \neq 0$, then $\hom_D(M, F) \neq 0$, meaning that the corresponding system $\ker_F(R.)$ is non-trivial. Finally, if $F$ is an injective cogenerator $D$-module, then we can prove that any complex of the form (3) is exact at $F^{p_i}$, $i \geq 1$, if and only if the corresponding complex (2) is exact. See [34], [41], [65] and the references therein for more details.

The following result proves that there always exists an injective cogenerator.

Theorem 5 ([57])

An injective cogenerator D-module F exists for every ring D.

Let us give important examples of injective cogenerator modules.

Example 1

If $\Omega$ is an open convex subset of $\mathbb{R}^n$, then the space $C^{\infty}(\Omega)$ (resp., $\mathcal{D}'(\Omega)$) of smooth real functions (resp., real distributions) on $\Omega$ is an injective cogenerator module over the ring $\mathbb{R}[\partial_1, \ldots, \partial_n]$ of differential operators with coefficients in $\mathbb{R}$, where we have denoted $\partial_i = \partial/\partial x_i$ ([30], [34], [41]).

Example 2

Let $k$ be a field, $F = k^{\mathbb{Z}_+^n}$ the set of sequences with values in $k$ and $D = k[x_1, \ldots, x_n]$ the ring of shift operators, namely,

$$\forall\, f \in F, \ i = 1, \ldots, n, \quad (x_i\, f)(\mu) = f(\mu + 1_i),$$

where $\mu = (\mu_1, \ldots, \mu_n) \in \mathbb{Z}_+^n$ and $\mu + 1_i = (\mu_1, \ldots, \mu_{i-1}, \mu_i + 1, \mu_{i+1}, \ldots, \mu_n)$. Then, $F$ is an injective $D$-module ([34], [65]).
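To make the shift action concrete, here is a minimal Python sketch (an illustration only, not part of the QuillenSuslin or OreModules packages) of the operators $x_i$ acting on sequences indexed by $\mathbb{Z}_+^n$; the sequence $f$ used below is a hypothetical example and indices are 0-based.

```python
# A minimal sketch of the shift operators x_i of Example 2 acting on
# sequences f: Z_+^n -> k (represented as Python functions on index tuples).

def shift(f, i):
    """Return the sequence x_i f defined by (x_i f)(mu) = f(mu + 1_i)."""
    def shifted(mu):
        nu = list(mu)
        nu[i] += 1          # mu + 1_i: add 1 to the i-th index (0-based here)
        return f(tuple(nu))
    return shifted

# Hypothetical example: f(mu_1, mu_2) = 2**mu_1 * 3**mu_2 satisfies
# (x_1 f)(mu) = 2 f(mu) and (x_2 f)(mu) = 3 f(mu).
f = lambda mu: 2**mu[0] * 3**mu[1]
assert shift(f, 0)((4, 5)) == 2 * f((4, 5))
assert shift(f, 1)((4, 5)) == 3 * f((4, 5))
```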

We have the following important corollary of Theorem 3 which solves the Monge problem in the case of linear functional systems with constant coefficients. See [67] and the references therein and the introduction of the paper.

Corollary 1 ([5], [44])

Let $F$ be an injective cogenerator $D = k[x_1, \ldots, x_n]$-module, $R \in D^{q \times p}$ and $M = D^{1 \times p}/(D^{1 \times q} R)$. Then, we have the following results:

  1. There exists $Q_1 \in D^{q_1 \times q_2}$, where $p = q_1$, such that we have the exact sequence

    $$F^q \xleftarrow{R.} F^{q_1} \xleftarrow{Q_1.} F^{q_2},$$

    i.e., $\ker_F(R.) = Q_1\, F^{q_2}$, iff the $D$-module $M$ is torsion-free.

  2. There exist $Q_1 \in D^{q_1 \times q_2}$ and $Q_2 \in D^{q_2 \times q_3}$ such that we have the exact sequence

    $$F^q \xleftarrow{R.} F^{q_1} \xleftarrow{Q_1.} F^{q_2} \xleftarrow{Q_2.} F^{q_3},$$

    i.e., $\ker_F(R.) = Q_1\, F^{q_2}$ and $\ker_F(Q_1.) = Q_2\, F^{q_3}$, iff the $D$-module $M$ is reflexive.

  3. There exists a chain of $n$ successive parametrizations, namely, for $i = 1, \ldots, n$, there exist $Q_i \in D^{q_i \times q_{i+1}}$ such that we have the following exact sequence

    $$F^q \xleftarrow{R.} F^{q_1} \xleftarrow{Q_1.} \cdots \xleftarrow{Q_{n-1}.} F^{q_n} \xleftarrow{Q_n.} F^{q_{n+1}},$$

    i.e., $\ker_F(R.) = Q_1\, F^{q_2}$ and $\ker_F(Q_i.) = Q_{i+1}\, F^{q_{i+2}}$, $i = 1, \ldots, n-1$, iff the $D$-module $M$ is projective.

  4. There exist $Q \in D^{p \times m}$ and $T \in D^{m \times p}$ such that $T\,Q = I_m$ and the sequence

    $$F^q \xleftarrow{R.} F^p \xleftarrow{Q.} F^m \longleftarrow 0 \qquad (6)$$

    is exact, i.e., $\ker_F(R.) = Q\, F^m$, iff the $D$-module $M$ is free.

We refer the reader to [5], [44], [45], [46], [53], [54] for the solutions of the Monge problem for different classes of linear functional systems with variables coefficients such as partial differential, differential time-delay or difference equations.

The matrices $Q_i$ defined in Corollary 1 are called parametrizations ([5], [44], [45], [46]). Indeed, by 1 of Corollary 1, if $M$ is torsion-free, then there exists a matrix of operators $Q_1 \in D^{q_1 \times q_2}$ which satisfies $\ker_F(R.) = Q_1\, F^{q_2}$. This means that any solution $\eta \in F^p$ satisfying $R\,\eta = 0$ is of the form $\eta = Q_1\,\xi$ for a certain $\xi \in F^{q_2}$. In the behavioural approach ([42]), the parametrization is called an image representation of $\ker_F(R.)$ ([41], [65], [66]). We point out that the parametrizations $Q_i$ are obtained by computing $\operatorname{ext}_D^i(N, D)$ (see Theorem 3). Hence, checking whether or not a $D$-module is torsion-free, reflexive or projective gives the corresponding successive parametrizations. We refer to [5], [44], [45], [46] for more details, for the extension of the previous results to non-commutative algebras of functional operators and for the implementation of the corresponding algorithms in the library OreModules. Finally, the matrix $Q$ defined in 4 of Corollary 1 is called an injective parametrization of $\ker_F(R.)$ as every $F$-solution of $\ker_F(R.)$ has the form $\eta = Q\,\xi$ for a certain $\xi \in F^m$ and we have

$$\xi = (T\,Q)\,\xi = T\,\eta,$$

i.e., $\xi$ is uniquely determined by $\eta \in \ker_F(R.)$. At this stage, it is important to point out that no general algorithm has been developed to get injective parametrizations when the $D$-module $M$ is free. It is the main purpose of this paper to constructively study this question and to apply the computation of injective parametrizations to some open questions appearing in mathematical systems theory.

Finally, we point out that, if $M$ is a free $D$-module, then there always exist $Q \in D^{p \times m}$ and $T \in D^{m \times p}$ such that, for every $D$-module $F$, we have the exact sequence (6). Indeed, let us recall two standard arguments of homological algebra.

Proposition 1 ([57])

  1. Let us consider the following short exact sequence:

    $$0 \longrightarrow M' \xrightarrow{f} M \xrightarrow{g} M'' \longrightarrow 0.$$

    If $M''$ is a projective $D$-module, then the previous exact sequence splits (see 5 of Definition 2).

  2. Let $F$ be a $D$-module. The functor $\hom_D(\cdot, F)$ transforms split exact sequences of $D$-modules into split exact sequences of $D$-modules.

By 1 of Proposition 1, we obtain that $D^{1 \times q} \xrightarrow{.R} D^{1 \times p} \xrightarrow{.Q} D^{1 \times m} \longrightarrow 0$ is a split exact sequence and, applying the functor $\hom_D(\cdot, F)$ to it, by 2 of Proposition 1, we obtain the split exact sequence (6). Hence, the assumption that $F$ is an injective cogenerator $D$-module is only needed for the converse implications of Corollary 1.

Explicit examples of computations of parametrizations can be found in [5], [6], [44], [45], [46] as well as in the large library of examples of OreModules ([4]). We refer the reader to these references and to Section 4 for the computation of injective parametrizations. However, let us give a simple example in order to illustrate the previous results.

Example 3

Let us consider the ring $D = \mathbb{Q}[\partial_1, \partial_2, \partial_3]$ of differential operators with rational coefficients ($\partial_i = \partial/\partial x_i$), the matrix $R = (\partial_1 \ \partial_2 \ \partial_3)$ defining the so-called divergence operator in $\mathbb{R}^3$ and the finitely presented $D$-module $M = D^{1 \times 3}/(D\,R)$. Let us check whether or not the $D$-module $M$ has some torsion elements or is torsion-free, reflexive or projective, i.e., free by the Quillen-Suslin theorem. In order to do that, we define the $D$-module $N = D/(D^{1 \times 3} R^T)$. A finite free resolution of $N$ can easily be computed by means of Gröbner or Janet bases. We obtain the following exact sequence

$$0 \longrightarrow D \xrightarrow{.P_3} D^{1 \times 3} \xrightarrow{.P_2} D^{1 \times 3} \xrightarrow{.R^T} D \xrightarrow{\sigma} N \longrightarrow 0,$$

where $\sigma$ denotes the canonical projection onto $N$ and:

$$P_2 = \begin{pmatrix} 0 & -\partial_3 & \partial_2 \\ \partial_3 & 0 & -\partial_1 \\ -\partial_2 & \partial_1 & 0 \end{pmatrix}, \qquad P_3 = R.$$

We note that $P_2$ corresponds to the so-called curl operator whereas $R^T$ is the gradient operator. Then, the defects of exactness of the following complex

$$0 \longleftarrow D \xleftarrow{.P_3^T} D^{1 \times 3} \xleftarrow{.P_2^T} D^{1 \times 3} \xleftarrow{.R} D \longleftarrow 0 \qquad (7)$$

are defined by:

$$\operatorname{ext}_D^0(N, D) \cong \ker_D(.R), \quad \operatorname{ext}_D^1(N, D) \cong \ker_D(.P_2^T)/(D\,R), \quad \operatorname{ext}_D^2(N, D) \cong \ker_D(.P_3^T)/(D^{1 \times 3} P_2^T), \quad \operatorname{ext}_D^3(N, D) \cong D/(D^{1 \times 3} P_3^T).$$

Using the fact that $R$ has full row rank, we obtain that $\operatorname{ext}_D^0(N, D) \cong \ker_D(.R) = 0$, which is equivalent to saying that $N$ is a torsion $D$-module. Now, computing the syzygy modules $\ker_D(.P_2^T)$ and $\ker_D(.P_3^T)$ by means of Gröbner or Janet bases, we obtain that

$$\ker_D(.P_2^T) = D\,R, \qquad \ker_D(.P_3^T) = D^{1 \times 3} P_2^T,$$

which shows that $\operatorname{ext}_D^1(N, D) = \operatorname{ext}_D^2(N, D) = 0$. Finally, we can easily check that $1$ does not belong to the ideal $I = D\,\partial_1 + D\,\partial_2 + D\,\partial_3$ of $D$, and thus, we have:

$$\operatorname{ext}_D^3(N, D) \cong D/I \neq 0.$$

Using Theorem 3, we obtain that $M$ is a reflexive but not a projective, i.e., not a free, $D$-module. This last fact can also be checked as follows: $R$ has full row rank and the dimension $\dim_D(N)$ is $0$ as the corresponding system is defined by the gradient operator, namely,

$$\partial_1\, y = 0, \quad \partial_2\, y = 0, \quad \partial_3\, y = 0,$$

whose solution is a constant, i.e., the solution of the system only depends on ``a function of zero independent variables''. Hence, by Theorem 4, we obtain that $j_D(N) = 3$, meaning that the first non-zero $\operatorname{ext}_D^i(N, D)$ has index $3$. By Theorem 3, we then get that $M$ is a reflexive $D$-module but not a projective one.

Finally, if we consider the $D$-module $F = C^{\infty}(\Omega)$, where $\Omega$ is an open convex subset of $\mathbb{R}^3$, using Example 1, we obtain that $F$ is an injective cogenerator $D$-module. Hence, if we apply the functor $\hom_D(\cdot, F)$ to the complex (7), we then obtain the following exact sequence:

$$F \xrightarrow{P_3^T.} F^3 \xrightarrow{P_2^T.} F^3 \xrightarrow{R.} F \longrightarrow 0.$$

We find again the classical result in mathematical physics that the smooth solutions on an open convex subset of $\mathbb{R}^3$ of the divergence operator are parametrized by the curl operator and that the solutions of the curl operator are parametrized by the gradient operator.
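The composites of consecutive operators in this sequence can be checked symbolically. The following SymPy sketch (an independent verification, not the Gröbner/Janet computation used above) confirms that the divergence of a curl and the curl of a gradient vanish identically for arbitrary smooth functions, i.e., that the sequence above is indeed a complex ($P_2^T$ is the curl matrix up to sign, so the check is unaffected).

```python
# A small symbolic check of the parametrization chain of Example 3:
# div(curl(xi)) = 0 and curl(grad(phi)) = 0 for arbitrary smooth xi, phi.
from sympy import symbols, diff, Function, simplify, Matrix

x1, x2, x3 = symbols('x1 x2 x3')
X = (x1, x2, x3)
xi = Matrix([Function('xi1')(*X), Function('xi2')(*X), Function('xi3')(*X)])
phi = Function('phi')(*X)

def grad(f):
    return Matrix([diff(f, v) for v in X])

def curl(F):
    return Matrix([diff(F[2], x2) - diff(F[1], x3),
                   diff(F[0], x3) - diff(F[2], x1),
                   diff(F[1], x1) - diff(F[0], x2)])

def div(F):
    return sum(diff(F[i], X[i]) for i in range(3))

assert simplify(div(curl(xi))) == 0          # divergence of a curl vanishes
assert curl(grad(phi)) == Matrix([0, 0, 0])  # curl of a gradient vanishes
```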

The only point left open is to constructively compute injective parametrizations of linear functional systems defining free modules over a commutative polynomial ring $D$. Indeed, checking the vanishing of the $\operatorname{ext}_D^i(N, D)$, we generally obtain a chain of $n$ successive parametrizations but not an injective one. In the case of linear systems of partial differential equations with polynomial or rational coefficients, we recently solved this problem in [53], [54], [55] using a constructive proof of a famous result in non-commutative algebra due to Stafford. However, the same technique cannot be used if we want an injective parametrization $Q$ of $\ker_F(R.)$ to have only constant coefficients. The main purpose of this paper is to solve this problem using a constructive proof of the Quillen-Suslin theorem and to show some applications of this result to mathematical systems theory.

3. The Quillen-Suslin theorem

Since Quillen and Suslin independently proved Serre´s conjecture stating that projective modules over commutative polynomial rings with coefficients in a field are free, some algorithmic versions of the proof have been proposed in the literature in order to constructively compute bases of free modules ([15], [19], [27], [29], [37], [59], [60], [61], [62]). We refer the interested reader to Lam´s nice books [24], [25] concerning Serre´s conjecture.

3.1. Projective and stably free modules

In module theory, it is well known that a finitely presented $D = k[x_1, \ldots, x_n]$-module ($k$ a field) $M = D^{1 \times p}/(D^{1 \times q} R)$, where $R \in D^{q \times p}$, admits a finite free resolution. This result is due to Hilbert ([11]). Moreover, if $k$ is a computable field, we can even construct a finite free resolution of $M$ using Gröbner or Janet bases ([3], [11], [20]).

A classical result due to Serre proves that every projective D-module is stably free (a stably free module always being a projective D-module). See [11], [24], [25] for more details. In [53], [55], a constructive proof of this result was given and the corresponding algorithm was implemented in OreModules. Let us recall these useful results.

Proposition 2 ( [53], [55])

Let $M$ be a $D$-module defined by the finite free resolution:

$$0 \longrightarrow D^{1 \times p_m} \xrightarrow{.R_m} \cdots \xrightarrow{.R_2} D^{1 \times p_1} \xrightarrow{.R_1} D^{1 \times p_0} \xrightarrow{\pi} M \longrightarrow 0. \qquad (8)$$

  1. If $m \geq 3$ and there exists $S_m \in D^{p_{m-1} \times p_m}$ such that $R_m S_m = I_{p_m}$, then we have the following finite free resolution of $M$

    $$0 \longrightarrow D^{1 \times p_{m-1}} \xrightarrow{.T_{m-1}} D^{1 \times (p_{m-2} + p_m)} \xrightarrow{.T_{m-2}} D^{1 \times p_{m-3}} \xrightarrow{.R_{m-3}} \cdots \xrightarrow{\pi} M \longrightarrow 0, \qquad (9)$$

    with the following notations:

    $$T_{m-1} = (R_{m-1} \quad S_m) \in D^{p_{m-1} \times (p_{m-2} + p_m)}, \qquad T_{m-2} = \begin{pmatrix} R_{m-2} \\ 0 \end{pmatrix} \in D^{(p_{m-2} + p_m) \times p_{m-3}}.$$

  2. If $m = 2$ and there exists $S_2 \in D^{p_1 \times p_2}$ such that $R_2 S_2 = I_{p_2}$, then we have the following finite free resolution of $M$

    $$0 \longrightarrow D^{1 \times p_1} \xrightarrow{.T_1} D^{1 \times (p_0 + p_2)} \xrightarrow{\tau} M \longrightarrow 0, \qquad (10)$$

    with the notations $T_1 = (R_1 \quad S_2) \in D^{p_1 \times (p_0 + p_2)}$ and:

    $$\tau = (\pi \quad 0): D^{1 \times (p_0 + p_2)} \longrightarrow M, \quad \lambda = (\lambda_1 \quad \lambda_2) \longmapsto \tau(\lambda) = \pi(\lambda_1).$$

Remark 2

We note that Proposition 2 holds for every (commutative) ring D.

Let $R \in D^{q \times p}$ and let us suppose that the $D$-module $M = D^{1 \times p}/(D^{1 \times q} R)$ is projective (using the results summed up in Figure 1, we can constructively check this fact). Using 1 of Proposition 1, we obtain that the exact sequence (8) splits (see 5 of Definition 2), and thus, there exists $S_m \in D^{p_{m-1} \times p_m}$ such that $R_m S_m = I_{p_m}$. Repeating inductively the same method with the new finite free resolution of $M$, we can assume that we have the finite free resolution of $M$:

$$0 \longrightarrow D^{1 \times p_3} \xrightarrow{.R_3} D^{1 \times p_2} \xrightarrow{.R_2} D^{1 \times p_1} \xrightarrow{.R_1} D^{1 \times p_0} \xrightarrow{\pi} M \longrightarrow 0.$$

As $M$ is a projective $D$-module, by 1 of Proposition 1, the previous exact sequence splits and thus there exists a matrix $S_3 \in D^{p_2 \times p_3}$ satisfying $R_3 S_3 = I_{p_3}$. By 1 of Proposition 2, we then get the finite free resolution of $M$:

$$0 \longrightarrow D^{1 \times p_2} \xrightarrow{.(R_2 \ S_3)} D^{1 \times (p_1 + p_3)} \xrightarrow{.(R_1^T \ 0^T)^T} D^{1 \times p_0} \xrightarrow{\pi} M \longrightarrow 0.$$

Let us denote $T_1 = (R_1^T \ 0^T)^T$. Again, as $M$ is a projective $D$-module, by 1 of Proposition 1, the previous exact sequence splits and there exists $S_2 \in D^{(p_1 + p_3) \times p_2}$ such that $(R_2 \ S_3)\, S_2 = I_{p_2}$. Using 2 of Proposition 2, we obtain the following finite free presentation of the $D$-module $M' = D^{1 \times (p_0 + p_2)}/(D^{1 \times (p_1 + p_3)}\,(T_1 \ S_2))$

$$0 \longrightarrow D^{1 \times (p_1 + p_3)} \xrightarrow{.(T_1 \ S_2)} D^{1 \times (p_0 + p_2)} \xrightarrow{\pi'} M' \longrightarrow 0,$$

where $\pi'$ denotes the canonical projection onto $M'$ and $\tau: M' \longrightarrow M$ is defined by $\tau(m') = \pi(\lambda_1)$, for all $\lambda = (\lambda_1 \ \lambda_2) \in D^{1 \times (p_0 + p_2)}$ satisfying $m' = \pi'(\lambda)$. Moreover, 2 of Proposition 1 shows that $\tau$ is an isomorphism, i.e., $M' \cong M$, a fact that can also be checked directly. We then obtain the following result.

Corollary 2

Let $D = k[x_1, \ldots, x_n]$ be a commutative polynomial ring over a field $k$ and $R \in D^{q \times p}$. If the $D$-module $M = D^{1 \times p}/(D^{1 \times q} R)$ is projective, then there exists a full row rank matrix $R' \in D^{q' \times p'}$ such that:

$$M \cong D^{1 \times p'}/(D^{1 \times q'} R'). \qquad (11)$$

We refer to Example 14 for an illustration of Corollary 2. See also [53], [54], [55].

We note that $\operatorname{rank}_D(M) = \operatorname{rank}_D(M') = p' - q'$.

Finally, we have the following short exact sequence

$$0 \longrightarrow D^{1 \times q'} \xrightarrow{.R'} D^{1 \times p'} \xrightarrow{\pi'} M' \longrightarrow 0,$$

and, using the fact that $M \cong M'$ and $M$ is a projective $D$-module, by 1 of Proposition 1, we obtain that the previous exact sequence splits and we then get ([5], [57])

$$M' \oplus D^{1 \times q'} \cong D^{1 \times p'},$$

which, by 2 of Definition 3, shows that $M \cong M'$ is a stably free $D$-module.

Corollary 3 (Serre [11], [24], [25])

Every projective D=k[x 1 ,...,x n ]-module M is stably free.

3.2. Stably free and free modules

Let $M$ be a stably free module over $D = k[x_1, \ldots, x_n]$, where $k$ is a field. Using Corollary 2, we can always suppose that $M$ has the form $M = D^{1 \times p}/(D^{1 \times q} R)$, where $R \in D^{q \times p}$ admits a right-inverse $S \in D^{p \times q}$. We note that $R$ then has full row rank ($\lambda\, R = 0 \Rightarrow \lambda = \lambda\, R\, S = 0$). Let us characterize when $M$ is a free $D$-module.

In order to do that, we first need to introduce a definition.

Definition 6

Let D be a ring. The general linear group GL p (D) is then defined by:

$$\operatorname{GL}_p(D) = \{U \in D^{p \times p} \mid \exists\, V \in D^{p \times p}: U\,V = V\,U = I_p\}.$$

An element $U \in \operatorname{GL}_p(D)$ is called a unimodular matrix.

In the case where $D = k[x_1, \ldots, x_n]$, we note that $U \in \operatorname{GL}_p(D)$ iff the determinant $\det U$ of $U$ is invertible in $D$, i.e., is a non-zero element of the field $k$. The following result holds for every (commutative) ring $D$.

Lemma 1

Let $R \in D^{q \times p}$ be a matrix which admits a right-inverse over $D$. Then, the $D$-module $M = D^{1 \times p}/(D^{1 \times q} R)$ is free if and only if there exists $U \in \operatorname{GL}_p(D)$ such that $R\,U = (I_q \quad 0)$.

Indeed, let us suppose that there exists $U \in \operatorname{GL}_p(D)$ such that $R\,U = (I_q \quad 0)$ and let us denote $J = (I_q \quad 0) \in D^{q \times p}$. We easily check that $D^{1 \times p}/(D^{1 \times q} J) \cong D^{1 \times (p-q)}$. Moreover, using the facts that $R\,U = J$ and $U \in \operatorname{GL}_p(D)$, we obtain the following commutative exact diagram

$$\begin{array}{ccccccccc}
0 & \longrightarrow & D^{1 \times q} & \xrightarrow{.R} & D^{1 \times p} & \xrightarrow{\pi} & M & \longrightarrow & 0 \\
 & & \| & & \downarrow .U & & \downarrow & & \\
0 & \longrightarrow & D^{1 \times q} & \xrightarrow{.J} & D^{1 \times p} & \xrightarrow{\kappa} & D^{1 \times (p-q)} & \longrightarrow & 0,
\end{array}$$

which proves that $M \cong D^{1 \times (p-q)}$, i.e., $M$ is a free $D$-module of rank $p - q$.

Conversely, let us suppose that $M \cong D^{1 \times (p-q)}$. Combining the isomorphism $\psi: M \longrightarrow D^{1 \times (p-q)}$ with the short exact sequence

$$0 \longrightarrow D^{1 \times q} \xrightarrow{.R} D^{1 \times p} \xrightarrow{\pi} M \longrightarrow 0,$$

we then obtain the following exact sequence:

$$0 \longrightarrow D^{1 \times q} \xrightarrow{.R} D^{1 \times p} \xrightarrow{\psi \circ \pi} D^{1 \times (p-q)} \longrightarrow 0.$$

If we consider the matrix which corresponds to the $D$-morphism $\psi \circ \pi$ in the canonical bases of $D^{1 \times p}$ and $D^{1 \times (p-q)}$, we then obtain a matrix $Q \in D^{p \times (p-q)}$ such that:

$$\forall\, \lambda \in D^{1 \times p}: \quad (\psi \circ \pi)(\lambda) = \lambda\, Q.$$

By 1 of Proposition 1, the previous exact sequence splits, i.e., there exist $D$-morphisms $.S: D^{1 \times p} \longrightarrow D^{1 \times q}$ and $.T: D^{1 \times (p-q)} \longrightarrow D^{1 \times p}$ giving the split exact sequence

$$0 \longrightarrow D^{1 \times q} \underset{.S}{\overset{.R}{\rightleftarrows}} D^{1 \times p} \underset{.T}{\overset{.Q}{\rightleftarrows}} D^{1 \times (p-q)} \longrightarrow 0,$$

or, equivalently, there exist matrices $S \in D^{p \times q}$ and $T \in D^{(p-q) \times p}$ such that the following Bézout identities hold (see [5], [44], [50], [52], [57] for more details):

$$\begin{pmatrix} R \\ T \end{pmatrix} (S \quad Q) = \begin{pmatrix} I_q & 0 \\ 0 & I_{p-q} \end{pmatrix}, \qquad (S \quad Q) \begin{pmatrix} R \\ T \end{pmatrix} = I_p.$$

In particular, we obtain that the matrix $U = (S \quad Q) \in \operatorname{GL}_p(D)$ satisfies:

$$R\,U = (I_q \quad 0).$$

Finally, the family $\{\pi(T_i)\}_{1 \leq i \leq p-q}$ forms a basis of the free $D$-module $M$, where $T_i$ denotes the $i$-th row of $T \in D^{(p-q) \times p}$.
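The following SymPy sketch illustrates Lemma 1 on a small hypothetical example (the matrices $R$ and $U$ below are chosen for illustration and do not come from the paper): once a unimodular $U$ with $R\,U = (I_q \ 0)$ is known, the matrices $S$, $Q$ and $T$ of the Bézout identities can simply be read off from $U$ and $U^{-1}$.

```python
# A minimal SymPy sketch of Lemma 1 on a hypothetical example: RU = (I_2  0),
# so the last p-q columns of U give the injective parametrization Q and the
# last p-q rows of U^{-1} give the left-inverse T with TQ = I.
from sympy import Matrix, symbols, eye, zeros

x, y = symbols('x y')
R = Matrix([[1, 0, x], [0, 1, y]])              # q = 2, p = 3
U = Matrix([[1, 0, -x], [0, 1, -y], [0, 0, 1]]) # det(U) = 1, so U is unimodular

assert R * U == Matrix([[1, 0, 0], [0, 1, 0]])  # RU = (I_2  0)

S = U[:, :2]      # right-inverse of R:  R S = I_2
Q = U[:, 2:]      # injective parametrization: R Q = 0
T = U.inv()[2:, :]  # left-inverse of Q:  T Q = I_1

assert R * S == eye(2)
assert R * Q == zeros(2, 1)
assert T * Q == eye(1)
```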

We are now in position to state the famous Quillen-Suslin theorem ([24], [25], [57]).

Theorem 6 ([56], [58])

(Quillen-Suslin theorem) Let $A$ be a principal ideal domain (e.g., a field $k$) and $D = A[x_1, \ldots, x_n]$ a polynomial ring with coefficients in $A$. Moreover, let $R \in D^{q \times p}$ be a matrix which admits a right-inverse $S \in D^{p \times q}$, i.e., $R\,S = I_q$. Then, there exists a unimodular matrix $U \in \operatorname{GL}_p(D)$ satisfying:

$$R\,U = (I_q \quad 0). \qquad (12)$$

Using Lemma 1 and Theorem 6, we obtain the following important corollary.

Corollary 4 ([56], [58])

(Quillen-Suslin) Let A be a principal ideal domain (e.g., a field k) and D=A[x 1 ,...,x n ]. Then, every stably free D-module is free.

Moreover, the problem of finding a basis of a free finitely generated D-module M can be reformulated in terms of matrices as follows:

Problem 1

Let $R \in D^{q \times p}$ be a matrix which admits a right-inverse over $D = k[x_1, \ldots, x_n]$. Find a matrix $U \in \operatorname{GL}_p(D)$ such that $R\,U = (I_q \quad 0)$.

The previous problem is equivalent to completing $R$ to a square matrix which is invertible over $D$:

$$U^{-1} = \begin{pmatrix} R \\ T \end{pmatrix} \in D^{p \times p}.$$

The Quillen-Suslin theorem states that Problem 1 always has a solution over a polynomial ring $D = A[x_1, \ldots, x_n]$ with coefficients in a principal ideal domain $A$ and, in particular, in a field $k$. In what follows, an algorithm which computes such a matrix $U$ will be called a QS-algorithm.

Let us consider a matrix $R \in D^{q \times p}$ which admits a right-inverse over $D$ and let us denote by $R_i$ the $i$-th row of $R$. We note that the row $R_1 \in D^{1 \times p}$ admits a right-inverse over $D$. Let us suppose that we can find a matrix $U_1 \in \operatorname{GL}_p(D)$ satisfying $R_1\, U_1 = (1 \quad 0 \quad \ldots \quad 0)$. Then, we have

$$R\,U_1 = \begin{pmatrix} 1 & 0 \\ \star & R_2 \end{pmatrix},$$

where $R_2 \in D^{(q-1) \times (p-1)}$ and $\star$ denotes a certain element of $D^{(q-1) \times 1}$. Hence, we can restrict our considerations to the new matrix $R_2$, which can easily be shown to admit a right-inverse over $D$, and reduce Problem 1 to the following one:

Problem 2

Let $R \in D^{1 \times p}$ be a row vector which admits a right-inverse over $D$. Find a matrix $U \in \operatorname{GL}_p(D)$ such that $R\,U = (1 \quad 0 \quad \ldots \quad 0)$.

The purpose of the next paragraphs is to recall a QS-algorithm solving Problem 2 over a commutative polynomial ring D=k[x 1 ,...,x n ] over a computable field k (for instance, k=Q). This algorithm was implemented in the package QuillenSuslin ([13]). See also the Appendix. We also point out that a QS-algorithm has also been implemented in QuillenSuslin for the case D=Z[x 1 ,...,x n ]. Even though there are some differences in the constructive proofs of the Quillen-Suslin theorem developed in [15], [19], [24], [27], [37], [59], [60], [62], we note that our algorithm is based on the same main idea, i.e., it proceeds by induction on the number of variables x i in D=k[x 1 ,...,x n ]. Each inductive step of the general QS-algorithm reduces the problem to the case with one variable less. A more global and interesting approach has recently been developed in [29], [61] which needs to be studied with care in the future.

3.3. Solution of Problem 2 in some special cases

Although the tedious inductive method, which will be explained in the next section, cannot generally be avoided, there are cases where simpler and faster heuristic methods can be used. We shall first consider such cases.

3.3.1. Matrices over a principal ideal domain D

We first consider the special case of matrices over a principal ideal domain $D$ (e.g., $D = k[x_1]$ with $k$ a field, or $D = \mathbb{Z}$). Let $R \in D^{q \times p}$ be a matrix which admits a right-inverse over $D$. Then, computing the Smith normal form of $R$ ([42]), we obtain two matrices $F \in \operatorname{GL}_q(D)$ and $G \in \operatorname{GL}_p(D)$ satisfying:

$$R = F\, (I_q \quad 0)\, G.$$

If we denote by $r = p - q$, $G = (G_1^T \quad G_2^T)^T$, where $G_1 \in D^{q \times p}$, $G_2 \in D^{r \times p}$, and $G^{-1} = (H_1 \quad H_2)$, where $H_1 \in D^{p \times q}$, $H_2 \in D^{p \times r}$, then we get $R = F\,G_1$, i.e., $G_1 = F^{-1} R$, and thus

$$\begin{pmatrix} F^{-1} R \\ G_2 \end{pmatrix} (H_1 \quad H_2) = I_p \;\Longrightarrow\; \begin{pmatrix} F^{-1} & 0 \\ 0 & I_r \end{pmatrix} \begin{pmatrix} R \\ G_2 \end{pmatrix} (H_1 \quad H_2) = I_p \;\Longrightarrow\; \begin{pmatrix} R \\ G_2 \end{pmatrix} (H_1 \quad H_2) \begin{pmatrix} F^{-1} & 0 \\ 0 & I_r \end{pmatrix} = I_p \;\Longrightarrow\; \begin{pmatrix} R \\ G_2 \end{pmatrix} (H_1 F^{-1} \quad H_2) = I_p,$$

which solves Problem 1 as we can take $U = (H_1 F^{-1} \quad H_2) \in \operatorname{GL}_p(D)$ and $T = G_2$.

3.3.2. (p1)×p matrices over an arbitrary commutative ring D

Let us consider the case of a matrix $R \in D^{(p-1) \times p}$ which admits a right-inverse over a commutative ring $D$. If we denote by $m_i$ the $(p-1) \times (p-1)$ minor of $R$ obtained by removing the $i$-th column of $R$, then, using the fact that $R$ admits a right-inverse, we get that the family $\{m_i\}_{1 \leq i \leq p}$ satisfies a Bézout identity $\sum_{i=1}^p n_i\, m_i = 1$ for certain $n_i \in D$, $i = 1, \ldots, p$. Let us define:

$$V = \begin{pmatrix} R \\ (-1)^{p+1} n_1 \ \ldots \ (-1)^{2p} n_p \end{pmatrix} \in D^{p \times p}.$$

Expanding the determinant of $V$ along the last row using Laplace's formula, we then get $\det V = 1$. Hence, if we denote by $U \in D^{p \times p}$ the inverse of the matrix $V$, we then obtain $R\,U = (I_{p-1} \quad 0)$, which solves Problem 1.
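Here is a minimal SymPy sketch of this construction on a hypothetical $2 \times 3$ matrix over $\mathbb{Q}[x, y]$ for which the Bézout coefficients of the maximal minors can be chosen by inspection; it only illustrates the completion described above, not the package's implementation.

```python
# SymPy sketch of the (p-1) x p case: the maximal minors of R are
# (-x*y, x + y, 1), so n = (0, 0, 1) gives sum n_i m_i = 1 by inspection.
# Appending the signed coefficients as a last row gives V with det(V) = 1,
# and U = V^(-1) then satisfies RU = (I  0).
from sympy import Matrix, symbols

x, y = symbols('x y')
R = Matrix([[1, 0, x*y], [0, 1, x + y]])        # p = 3, q = p - 1 = 2

minors = [R[:, [j for j in range(3) if j != i]].det() for i in range(3)]
n = [0, 0, 1]                                    # Bezout coefficients (by inspection)
last_row = [(-1)**(3 + 1 + i) * n[i] for i in range(3)]  # signs as in the text

V = Matrix.vstack(R, Matrix([last_row]))
assert V.det() == 1

U = V.inv()
assert (R * U).expand() == Matrix([[1, 0, 0], [0, 1, 0]])  # RU = (I_2  0)
```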

3.3.3. 1×p rows over an arbitrary commutative ring D

We now consider Problem 2, i.e., the case of a row vector $f = (f_1 \ \ldots \ f_p) \in D^{1 \times p}$ which admits a right-inverse over an arbitrary commutative ring $D$.

Remark 3 (Special form of the row)

  1. We note that if one of the components of $f$ is an invertible element of $D$, then we can transform the row $f$ into $(1 \quad 0 \quad \ldots \quad 0)$ by means of trivial elementary operations. For instance, if $f_1^{-1} \in D$, then the matrix defined by

    $$W = \begin{pmatrix} f_1^{-1} & 0 \\ 0 & I_{p-1} \end{pmatrix}$$

    satisfies $\det W = f_1^{-1} \in D$ and $f\,W = (1 \quad f_2 \quad \ldots \quad f_p)$. Then, simple elementary operations transform $f\,W$ into the vector $(1 \quad 0 \quad \ldots \quad 0)$.

  2. Another simple case is when two components of $f$ generate $D$. Let us suppose that there exist $h_1$ and $h_2 \in D$ such that we have the Bézout identity $f_1 h_1 + f_2 h_2 = 1$ and let us define the following matrix:

    $$W = \begin{pmatrix} h_1 & -f_2 & 0 \\ h_2 & f_1 & 0 \\ 0 & 0 & I_{p-2} \end{pmatrix}.$$

    We easily check that $\det W = 1$ and $f\,W = (1 \quad 0 \quad f_3 \quad \ldots \quad f_p)$. Then, we can reduce $f\,W$ to $(1 \quad 0 \quad \ldots \quad 0)$ by means of elementary operations (a small illustration of this case is given in the sketch after this remark).

  3. If the $i$-th component of $f$ is $0$ or the ideal generated by the elements $f_1, \ldots, f_{i-1}, f_{i+1}, \ldots, f_p$ is already $D$, then we can follow an idea analogous to the one developed in [53], [55]. Let us suppose that $i = 1$, i.e., $f_1$ is a redundant component in the sense that $(f_2, \ldots, f_p) = D$. Then, there exist $h_2, \ldots, h_p \in D$ satisfying the Bézout equation $\sum_{i=2}^p f_i\, h_i = 1$, and the matrix

    $$W = \begin{pmatrix} 1 & & & \\ (1 - f_1)\, h_2 & 1 & & \\ \vdots & & \ddots & \\ (1 - f_1)\, h_p & & & 1 \end{pmatrix}$$

    satisfies $f\,W = (1 \quad f_2 \quad \ldots \quad f_p)$ and $\det W = 1$. We can now reduce $f\,W$ to $(1 \quad 0 \quad \ldots \quad 0)$ by means of elementary operations.

    In particular, this strategy is always successful when the length $p$ of the row $f$ exceeds the stable range of the ring $D$. We note that the stable range of $D = \mathbb{R}[x_1, \ldots, x_n]$ is equal to $n + 1$. We refer the reader to [53], [55] for more details.

We note that all the conditions given in Remark 3 can be checked using Gröbner or Janet bases.
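As announced in 2 of Remark 3, the following SymPy sketch applies this heuristic to a hypothetical row over $\mathbb{Q}[x, y]$ whose first two components generate the ring; gcdex is used to compute the Bézout identity over $\mathbb{Q}[x]$.

```python
# SymPy sketch of 2 of Remark 3 on the hypothetical row f = (x**2 + 1, x, y):
# the first two components generate D = Q[x, y] since 1*(x**2+1) + (-x)*x = 1.
from sympy import Matrix, symbols, gcdex

x, y = symbols('x y')
f = Matrix([[x**2 + 1, x, y]])

h1, h2, g = gcdex(x**2 + 1, x, x)      # h1*(x**2+1) + h2*x = g = 1
assert g == 1

W = Matrix([[h1, -x,         0],
            [h2, x**2 + 1,   0],
            [0,  0,          1]])
assert W.det().expand() == 1
assert (f * W).expand() == Matrix([[1, 0, y]])

# Elementary column operation: subtract y times the first column from the last.
E = Matrix([[1, 0, -y], [0, 1, 0], [0, 0, 1]])
U = W * E
assert (f * U).expand() == Matrix([[1, 0, 0]])
```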

The matrix U can also be easily computed in cases where a right-inverse g of the row f has a special form.

Remark 4 (Special form of the right-inverse)

Let $g \in D^{p \times 1}$ be a right-inverse of the unimodular row $f \in D^{1 \times p}$, i.e., $f\,g = 1$.

  1. Let us suppose that one of the entries of the right-inverse $g$ of $f$, say $g_1$, is invertible in $D$. Then, the following matrix

    $$W = \begin{pmatrix} g_1 & & & \\ g_2 & 1 & & \\ \vdots & & \ddots & \\ g_p & & & 1 \end{pmatrix}$$

    satisfies $\det W = g_1$ and $f\,W = (1 \quad f_2 \quad \ldots \quad f_p)$. As $g_1$ is an invertible element of $D$, $W$ is a unimodular matrix and $f\,W$ can easily be transformed into $(1 \quad 0 \quad \ldots \quad 0)$ by means of elementary operations.

  2. If two components $g_1, g_2$ of $g$ generate the whole ring $D$, then there exist elements $h_1, h_2 \in D$ such that $g_1 h_1 + g_2 h_2 = 1$. Then, the matrix defined by

    $$W = \begin{pmatrix} g_1 & -h_2 & & & \\ g_2 & h_1 & & & \\ g_3 & & 1 & & \\ \vdots & & & \ddots & \\ g_p & & & & 1 \end{pmatrix}$$

    satisfies $\det W = 1$ and $f\,W = (1 \quad \star \quad f_3 \quad \ldots \quad f_p)$, where $\star$ denotes a certain element of $D$. We can then reduce $f\,W$ to $(1 \quad 0 \quad \ldots \quad 0)$ by means of elementary operations.

Finally, we also note that if $f \in D^{1 \times p}$ admits a right-inverse $g$ over $D$ for which any of the heuristic methods explained in Remark 3 can be applied to $g^T$, then a unimodular matrix $V$ having $g^T$ as its first row can easily be computed. Then, the product $f\,V^T = (1 \quad \star \quad \ldots \quad \star)$ can be reduced to the first standard basis vector by elementary column operations.

For instance, let us illustrate 1 of Remark 4. In some of the illustrating examples, we shall also use the notation D=k[z 1 ,...,z n ] as these examples come from the control theory and signal processing literatures where z i is commonly used. The independent variables z i , i=1,...,n, usually denote the variables appearing in the discrete Laplace transform.

Example 4

Let us consider $D = \mathbb{Q}[z_1, z_2, z_3]$ and the following row vector:

$$R = (z_1^2 z_2^2 + 1 \quad z_1^2 z_3 + 1 \quad z_1 z_2^2 z_3).$$

We can easily check that $R$ admits the right-inverse $S = (-z_1^2 z_3 \quad 1 \quad z_1^3)^T$. As the second component of $S$ is invertible over $D$, we can apply 1 of Remark 4 in order to find a unimodular matrix $U$ over $D$ which satisfies $R\,U = (1 \quad 0 \quad 0)$. Let us define the following elementary matrices:

$$U_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad U_2 = \begin{pmatrix} 1 & 0 & 0 \\ -z_1^2 z_3 & 1 & 0 \\ z_1^3 & 0 & 1 \end{pmatrix}.$$

We then have $R\,(U_1 U_2) = (1 \quad z_1^2 z_2^2 + 1 \quad z_1 z_2^2 z_3)$. Finally, if we denote by

$$U_3 = \begin{pmatrix} 1 & -z_1^2 z_2^2 - 1 & -z_1 z_2^2 z_3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

we then have $R\,U = (1 \quad 0 \quad 0)$, where the unimodular matrix $U = U_1 U_2 U_3$ is defined by:

$$U = \begin{pmatrix} -z_1^2 z_3 & z_1^4 z_2^2 z_3 + z_1^2 z_3 + 1 & z_1^3 z_2^2 z_3^2 \\ 1 & -z_1^2 z_2^2 - 1 & -z_1 z_2^2 z_3 \\ z_1^3 & -z_1^3 (z_1^2 z_2^2 + 1) & -z_1^4 z_2^2 z_3 + 1 \end{pmatrix}. \qquad (13)$$
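The computation above can be double-checked with a computer algebra system. The following SymPy sketch (an independent verification, not the QuillenSuslin package itself, and assuming the matrices $U_1$, $U_2$, $U_3$ as given above) recomputes $U = U_1 U_2 U_3$ and confirms that $R\,U = (1 \ 0 \ 0)$ and $\det U = -1$.

```python
# SymPy check of Example 4: RU = (1 0 0) and det(U) = -1, so U is unimodular
# over Q[z1, z2, z3].
from sympy import Matrix, symbols

z1, z2, z3 = symbols('z1 z2 z3')
R = Matrix([[z1**2*z2**2 + 1, z1**2*z3 + 1, z1*z2**2*z3]])

U1 = Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
U2 = Matrix([[1, 0, 0], [-z1**2*z3, 1, 0], [z1**3, 0, 1]])
U3 = Matrix([[1, -z1**2*z2**2 - 1, -z1*z2**2*z3], [0, 1, 0], [0, 0, 1]])
U = U1 * U2 * U3

assert (R * U).expand() == Matrix([[1, 0, 0]])
assert U.det().expand() == -1
```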

3.4. A QS-algorithm for commutative polynomial rings

Over an arbitrary commutative ring $A$, not every row admitting a right-inverse can be completed to a unimodular matrix over $A$. The module-theoretic interpretation of this fact is that, over certain rings, there exist stably free modules which are not free. For instance, using a classical topological theorem on vector fields on the sphere $S^2(\mathbb{R})$, we can prove that the row vector $R = (x_1 \quad x_2 \quad x_3)$ with entries in the commutative ring $D = \mathbb{R}[x_1, x_2, x_3]/(x_1^2 + x_2^2 + x_3^2 - 1)$, which admits the right-inverse $R^T$, cannot be completed to a unimodular matrix over $D$. For more details, see [25].

However, such a completion is always possible over a polynomial ring with coefficients in a field $k$ or, more generally, a principal ideal domain $A$ (see the Quillen-Suslin theorem, Theorem 6). We shortly describe a QS-algorithm which has recently been implemented in a package called QuillenSuslin ([13]). See the Appendix for more details. In what follows, we shall only consider a commutative polynomial ring $D = k[x_1, \ldots, x_n]$ over a field $k$, even if the algorithms extend to the case where $k$ is replaced by a principal ideal domain $A$. For instance, the case $A = \mathbb{Z}$ has also been implemented in QuillenSuslin. Let $f \in D^{1 \times p}$ be a row vector which admits a right-inverse $g$ over $D$. When none of the methods explained in Section 3.3 can be applied to $f$, we need to consider a general algorithm. However, we point out that most of the examples we know do not require the general algorithm, as the previous heuristic methods are generally enough to get the result.

The QS-algorithm proceeds by induction on the number n of independent variables x i of the ring D=k[x 1 ,...,x n ]. Each inductive step, which simplifies the problem to the case of a polynomial ring containing one variable less, consists of three main parts:

  1. Finding a normalized component in the last variable of the polynomial ring.

  2. Computing finitely many local solutions of Problem 2 over certain local rings (local loop).

  3. Patching/glueing the local solutions together in order to obtain a global one.

3.4.1. Normalization Step

The next lemma is essential for Horrocks´ theorem which is used in the local loop.

Lemma 2 ([57], [60])

Let us consider $a \in k[y_1, \ldots, y_n]$ and let us denote by $m = \deg(a) + 1$, where $\deg(a)$ denotes the total degree of $a$. Using the invertible transformation

$$x_n = y_n, \quad x_i = y_i - y_n^{\,m^{\,n-i}}, \quad 1 \leq i \leq n-1, \qquad y_n = x_n, \quad y_i = x_i + x_n^{\,m^{\,n-i}}, \quad 1 \leq i \leq n-1,$$

we obtain $a(y_1, \ldots, y_n) = r\, b(x_1, \ldots, x_n)$, where $0 \neq r \in k$ and $b$ is a monic polynomial in $x_n$ with coefficients in the ring $E = k[x_1, \ldots, x_{n-1}]$, namely, the leading coefficient of $b \in E[x_n]$ is $1$.

In the case where $k$ is an infinite field, we can achieve the same result by using only a linear transformation whose coefficients are appropriately chosen ([60], [62]). The normalization step can also be generalized to the case $D = A[x_1, \ldots, x_n]$, where $A$ is a principal ideal domain. See [59] for more details.
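As an illustration of Lemma 2, the following SymPy sketch applies the change of variables to the hypothetical polynomial $a = y_1 y_2 + 1$ (here $n = 2$ and $m = \deg(a) + 1 = 3$) and checks that the result is monic in the last variable.

```python
# SymPy illustration of the normalization of Lemma 2 on a = y1*y2 + 1:
# the substitution y1 = x1 + x2**(m**(n-1)), y2 = x2 makes the polynomial
# monic in x2 over E = Q[x1].
from sympy import symbols, Poly

y1, y2, x1, x2 = symbols('y1 y2 x1 x2')
a = y1*y2 + 1
m = Poly(a, y1, y2).total_degree() + 1          # m = 3

b = a.subs({y1: x1 + x2**m**(2 - 1), y2: x2}).expand()
# b = x2**4 + x1*x2 + 1 is monic of degree 4 in x2.
assert Poly(b, x2).LC() == 1
```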

3.4.2. Local Loop

In the second step, we need to compute a finite number of local solutions of Problem 2 over a local ring $A$, namely, a ring $A$ which has only one maximal ideal, i.e., a unique proper ideal $\mathcal{M}$ of $A$ which is not properly contained in any ideal of $A$ other than $A$ itself. In order to do that, we use the so-called Horrocks theorem. Let us recall it.

Theorem 7 ([57], [60])

Let $B$ be a local ring and $f$ a row vector which admits a right-inverse over $B[y]$. If one of the components $f_i$ of $f$ is monic in $y$, then there exists a unimodular matrix $U$ over $B[y]$ such that $f$ is the first row of $U$ or, equivalently, such that $f\,U^{-1} = (1 \quad 0 \quad \ldots \quad 0)$.

Horrocks' theorem can easily be implemented using, for instance, the approaches developed in [27], [57], [62]. In particular, the implementation of this theorem in QuillenSuslin follows [57]. If $\mathcal{M}$ is a maximal ideal of $D$, we denote by $D_{\mathcal{M}}$ the local ring obtained as the standard localization of $D$ with respect to the multiplicatively closed subset $S = D \setminus \mathcal{M}$ of $D$, namely, $D_{\mathcal{M}} = \{a/b \mid a \in D, \ b \notin \mathcal{M}\}$ ([57]).

We can now give the first main part of the general algorithm ([27], [62]).

Algorithm 1

  1. Let $\mathcal{M}_1$ be an arbitrary maximal ideal of the ring $E$. Using Horrocks' theorem, compute a unimodular matrix $H_1$ over $E_{\mathcal{M}_1}[x_n]$ which satisfies $f\,H_1 = (1 \quad 0 \quad \ldots \quad 0)$.

  2. Let $d_1 \in E$ be the denominator of $H_1$ and $J$ the ideal of $E$ generated by $d_1$. Set $i = 1$.

  3. While $J \neq E$, do:

    1. For $i := i + 1$, compute a maximal ideal $\mathcal{M}_i$ of $E$ such that $J \subseteq \mathcal{M}_i$.

    2. Using Horrocks' theorem, compute a matrix $H_i$ over the ring $E_{\mathcal{M}_i}[x_n]$ such that $\det H_i$ is invertible in $E_{\mathcal{M}_i}[x_n]$ and $f\,H_i = (1 \quad 0 \quad \ldots \quad 0)$.

    3. Let $d_i$ be the denominator of the matrix $H_i$ and consider the ideal $J = (d_1, \ldots, d_i)$.

  4. Return $\{\mathcal{M}_i\}_{i \in I}$, $\{H_i\}_{i \in I}$ and $\{d_i\}_{i \in I}$.

The local loop stops when the denominators d_1, ..., d_i generate the whole ring E. As the ring E is noetherian ([57]), the number of local solutions, i.e., the cardinality of the set I, is finite.
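The structure of the local loop can be summarized by the following Python-style sketch. The helpers horrocks_local_solution, maximal_ideal_containing and ideal_equals_ring are hypothetical placeholders for an effective version of Horrocks' theorem, for the computation of a maximal ideal containing a given proper ideal and for the ideal membership test 1 ∈ (d_1, ..., d_i); they are not part of any existing library.

    def local_loop(f, horrocks_local_solution, maximal_ideal_containing, ideal_equals_ring):
        """Sketch of Algorithm 1: collect finitely many local solutions of Problem 2.

        f                        -- the row vector over D = E[x_n]
        horrocks_local_solution  -- hypothetical: given a maximal ideal M of E, returns (H, d)
                                    with H unimodular over E_M[x_n], f*H = (1,0,...,0) and
                                    d in E the denominator of H
        maximal_ideal_containing -- hypothetical: returns a maximal ideal of E containing J
        ideal_equals_ring        -- hypothetical: decides whether (d_1,...,d_i) = E
        """
        M = maximal_ideal_containing([0])            # step 1: any maximal ideal of E
        H, d = horrocks_local_solution(f, M)
        maximal_ideals, local_matrices, denominators = [M], [H], [d]

        while not ideal_equals_ring(denominators):   # step 3: while J is not E
            M = maximal_ideal_containing(denominators)
            H, d = horrocks_local_solution(f, M)     # Horrocks over E_M[x_n]
            maximal_ideals.append(M)
            local_matrices.append(H)
            denominators.append(d)

        # E noetherian: the loop terminates after finitely many steps
        return maximal_ideals, local_matrices, denominators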

3.4.3. Patching

To obtain a polynomial solution of Problem 1, we use the following lemma.

Lemma 3 ([27])

Let f ∈ D^{1×p} be a row vector which admits a right-inverse over D = k[x_1, ..., x_n] and U a unimodular matrix over E_M[x_n], where M is a certain maximal ideal of the ring E = k[x_1, ..., x_{n−1}], which satisfies f U = (1  0  ...  0). Let us denote by d ∈ E the denominator of U. Then, the matrix defined by

Δ(x_n, z) = U(x_1, ..., x_n) U^{−1}(x_1, ..., x_{n−1}, x_n + z) ∈ (E_M[x_n, z])^{p×p}

is such that

∀ z ∈ D:  f(x_1, ..., x_n) Δ(x_n, z) = f(x_1, ..., x_{n−1}, x_n + z),   (14)

and its denominator is d^α with 0 ≤ α ≤ p.

(14) is clear as the identity f(x_1, ..., x_n) U(x_1, ..., x_n) = (1  0  ...  0) implies that we have f(x_1, ..., x_n + z) U(x_1, ..., x_n + z) = (1  0  ...  0) and then:

f(x_1, ..., x_n + z) = f(x_1, ..., x_n) U(x_1, ..., x_n) U(x_1, ..., x_n + z)^{−1}.

Moreover, using the standard formula U^{−1} = (det U)^{−1} adj(U), where adj(U) denotes the adjugate of U, we can also prove that the common denominator of Δ(x_n, z) is d^α, where 0 ≤ α ≤ p.

Let {M_i}_{i ∈ I}, {H_i}_{i ∈ I} and {d_i}_{i ∈ I} be the output of Algorithm 1, where I is a finite set. Let us set I = {1, ..., l}. The ideal of E = k[x_1, ..., x_{n−1}] generated by {d_i}_{i ∈ I} is E itself. Hence, there exist c_i ∈ E, i ∈ I, such that the following Bézout identity holds:

∑_{i=1}^{l} c_i d_i^p = 1.

Let us define the following matrices

Δ_i(x_n, z) = H_i(x_1, ..., x_n) H_i^{−1}(x_1, ..., x_{n−1}, x_n + z),  i = 1, ..., l,

and, in order to simplify the notations, we fix a_n ∈ k and denote by f̃(x_n) the row vector f(x_1, ..., x_n). Then, we have:

f̃(x_n) Δ_1(x_n, (a_n − x_n) c_1 d_1^p) = f̃(x_n + (a_n − x_n) c_1 d_1^p),
f̃(x_n + (a_n − x_n) c_1 d_1^p) Δ_2(x_n + (a_n − x_n) c_1 d_1^p, (a_n − x_n) c_2 d_2^p) = f̃(x_n + (a_n − x_n) ∑_{i=1}^{2} c_i d_i^p),
...
f̃(x_n + (a_n − x_n) ∑_{i=1}^{l−1} c_i d_i^p) Δ_l(x_n + (a_n − x_n) ∑_{i=1}^{l−1} c_i d_i^p, (a_n − x_n) c_l d_l^p) = f̃(a_n).

Finally, we can prove that we have Δ_i(x_n, d_i^p z) ∈ GL_p(D), i = 1, ..., l ([27]), and:

U_1 = Δ_1(x_n, (a_n − x_n) c_1 d_1^p) Δ_2(x_n + (a_n − x_n) c_1 d_1^p, (a_n − x_n) c_2 d_2^p) ⋯ Δ_l(x_n + (a_n − x_n) ∑_{i=1}^{l−1} c_i d_i^p, (a_n − x_n) c_l d_l^p) ∈ GL_p(D).

The previous computations then show that f(x_1, ..., x_n) U_1 = f(x_1, ..., x_{n−1}, a_n).
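The patching step can be sketched in sympy as follows: given the local solutions H_i, their denominators d_i and Bézout cofactors c_i with ∑ c_i d_i^p = 1, the matrices Δ_i are chained exactly as in the product formula above. This is only an illustrative sketch (the inputs are assumed to be sympy Matrices over Q[x_1, ..., x_n] with the Bézout cofactors already computed); it is not the interface of the QuillenSuslin package.

    import sympy as sp

    def patch_local_solutions(H_list, d_list, c_list, xn, an=0):
        """Chain Delta_i(x_n, z) = H_i(x_n) * H_i(x_n + z)^(-1) to build U_1 such that
        f(x_1,...,x_n) U_1 = f(x_1,...,x_{n-1}, a_n).  Assumes sum_i c_i * d_i**p = 1."""
        p = H_list[0].shape[0]
        U1 = sp.eye(p)
        t = xn                                    # current shifted last variable
        for H, d, c in zip(H_list, d_list, c_list):
            z = (an - xn) * c * d**p              # shift used at this step
            Delta = (H.subs(xn, t) * H.subs(xn, t + z).inv()).applyfunc(sp.cancel)
            U1 = U1 * Delta
            t = t + z                             # after the last step, t = a_n
        return U1.applyfunc(sp.cancel)

With a_n = 0, the returned matrix U_1 maps f(x_1, ..., x_n) to f(x_1, ..., x_{n−1}, 0), as in the discussion above.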

We can now state the main result.

Theorem 8 ([27], [57], [60], [62])

Let f ∈ D^{1×p} be a row vector which admits a right-inverse over the ring D = k[x_1, ..., x_n]. Then, for every a ∈ k, there exists a matrix U ∈ GL_p(D) such that:

f(x_1, ..., x_n) U(x_1, ..., x_n) = f(x_1, ..., x_{n−1}, a).

We consider a row vector f(x_1, ..., x_n) ∈ D^{1×p} admitting a right-inverse g(x_1, ..., x_n) ∈ D^{p×1}. Applying Theorem 8 inductively to f(x_1, ..., x_n) for the values a_2, ..., a_n ∈ k, we then obtain U_1, ..., U_{n−1} ∈ GL_p(D) such that

f(x_1, ..., x_n) U_1 = f(x_1, ..., x_{n−1}, a_n),
f(x_1, ..., x_{n−i}, a_{n−i+1}, ..., a_n) U_{i+1} = f(x_1, ..., x_{n−i−1}, a_{n−i}, ..., a_n),  1 ≤ i ≤ n−2.

Hence, we get f(x_1, ..., x_n)(U_1 ⋯ U_{n−1}) = f(x_1, a_2, ..., a_n) and we have simplified Problem 2 to the case of a row vector f(x_1, a_2, ..., a_n) over the principal ideal domain k[x_1] which admits the right-inverse g(x_1, a_2, ..., a_n) over k[x_1]. Using the first result of Section 3.3, we can find a matrix U_n ∈ GL_p(D) such that:

f(x_1, a_2, ..., a_n) U_n(x_1) = (1  0  ...  0),  i.e.,  (1  0  ...  0) U_n^{−1}(x_1) = f(x_1, a_2, ..., a_n).

Hence, Problem 2 is solved if we take U = U_1 ⋯ U_n ∈ GL_p(D). We also note that it is generally simpler to take the particular values a_2 = ... = a_n = 0.

Now, let us find a matrix U′ satisfying f(x_1, ..., x_n) U′ = f(a_1, ..., a_n), where a_1 ∈ k. Let us define U_n′(x_1) = U_n(x_1) U_n^{−1}(a_1) ∈ GL_p(D). Then, we have:

f(x_1, a_2, ..., a_n) U_n′(x_1) = (1  0  ...  0) U_n^{−1}(a_1) = f(a_1, a_2, ..., a_n).

Hence, the matrix U′ = U_1 ⋯ U_{n−1} U_n′ ∈ GL_p(D) satisfies:

f(x_1, ..., x_n) U′ = f(a_1, ..., a_n).

Let us illustrate the QS-algorithm on a simple example.

Example 5

Let us consider the commutative polynomial ring D=Q[x 1 ,x 2 ] and the row vector R=(x 1 x 2 2 +13x 2 /2+x 1 12x 1 x 2 )D 1×3 . We can check that S=(10x 1 /2) T is a right-inverse of R, a fact implying that the D-module M=D 1×3 /(DR) is projective, and thus, free by the Quillen-Suslin theorem. Let us compute a matrix U GL 3 (D) such that RU=(100). As the first component of S is 1, we can easily find such a matrix U using the heuristic methods explained in Section 3.3. However, let us illustrate the main algorithm previously described.

We first note that R contains the normalized component 3x 2 /2+x 1 1 over D=E[x 2 ], where E=Q[x 1 ]. The second step consists in computing certain local solutions. Let us consider the maximal ideal M 1 =(x 1 ) of E. Using an effective version of Horrocks´ theorem, we obtain that

H 1 =1 d 1 42(3x 1 +2x 2 2)4x 1 (3x 1 2)2x 1 (3x 1 2x 2 2)4(x 1 x 2 2 +1)4x 1 (3x 1 2 x 2 2x 1 x 2 +2)009x 1 3 12x 1 2 +4x 1 +4,

where d_1 = 9x_1^3 − 12x_1^2 + 4x_1 + 4 ∉ M_1. We can check that det H_1 = 4/d_1, i.e., H_1 ∈ GL_3(E_{M_1}[x_2]), and R H_1 = (1  0  0), showing that H_1 is a local solution.

The ideal J = (d_1) is strictly contained in E. Therefore, we consider another maximal ideal M_2 such that J ⊆ M_2. For instance, we can take M_2 = (9x_1^3 − 12x_1^2 + 4x_1 + 4). Using an effective version of Horrocks' theorem, we obtain the matrix

H 2 =1 d 2 004x 1 (3x 1 2)8x 1 8x 1 x 2 4x 1 (3x 1 2 x 2 2x 1 x 2 +2)42(3x 1 +2x 2 2)9x 1 3 12x 1 2 +4x 1 +4,

where d_2 = 4x_1(3x_1 − 2) ∉ M_2. We then have det H_2 = 1/(x_1(3x_1 − 2)), i.e., H_2 ∈ GL_3(E_{M_2}[x_2]) and R H_2 = (1  0  0). We can check that the ideal (d_1, d_2) = E as we have the Bézout identity c_1 d_1 + c_2 d_2 = 1, where c_1 = 1/4 and c_2 = −(3x_1 − 2)/16.
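The Bézout identity between the two denominators can also be recovered mechanically, e.g., with sympy's extended Euclidean algorithm over Q[x_1] (a small independent check; the cofactors returned by gcdex may of course differ from c_1 and c_2 above):

    import sympy as sp

    x1 = sp.symbols('x1')
    d1 = 9*x1**3 - 12*x1**2 + 4*x1 + 4      # denominator of H_1
    d2 = 4*x1*(3*x1 - 2)                    # denominator of H_2

    s, t, g = sp.gcdex(d1, d2, x1)          # s*d1 + t*d2 = g = gcd(d1, d2)
    print(g)                                # 1, so (d1, d2) = E = Q[x1]
    assert sp.simplify(s*d1 + t*d2 - 1) == 0

    # the cofactors quoted in the text (with c2 = -(3*x1 - 2)/16) also satisfy the identity:
    assert sp.expand(d1/sp.Integer(4) - (3*x1 - 2)/sp.Integer(16)*d2 - 1) == 0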

The matrix Δ i (x 2 ,c 1 d 1 x 2 ) is defined by:

(9x 1 4 /43x 1 3 +x 1 2 )x 2 2 +(3x 1 2 /2x 1 )x 2 +1(18x 1 4 24x 1 3 +8x 1 2 )x 1 x 2 3 /8+(27x 1 5 54x 1 4 +36x 1 3 20x 1 2 +8x 1 )x 1 x 2 2 /8x 1 x 2 0x 2 2x 1 x 2 x 1 x 2 2 +(3x 1 2 /2+x 1 )x 2 +12x 1 2 x 2 2 x 1 2 (3x 1 2)x 2 01.

We can easily check that we have R(x 1 ,x 2 )Δ i (x 2 ,c 1 d 1 x 2 )=R(x 1 ,x 2 c 1 d 1 x 2 ) as well as Δ 1 (x 2 ,c 1 d 1 x 2 ) GL 3 (D). Moreover, the matrix Δ 2 (x 2 c 1 d 1 x 2 ,c 2 d 2 x 2 ) is defined by:

1000(3x 1 2 /2x 1 )x 2 +1x 1 2 (3x 1 2)x 2 (9x 1 2 12x 1 +4)x 1 x 2 /8(3x 1 +2)x 2 /4(3x 1 2 /2+x 1 )x 2 +1.

We can easily check that we have R(x 1 ,x 2 c 1 d 1 x 2 )Δ 2 (x 2 c 1 d 1 x 2 ,c 2 d 2 x 2 )=R(x 1 ,0) and Δ 2 (x 2 c 1 d 1 x 2 ,c 2 d 2 x 2 ) GL 3 (D). Defining the matrix

U 1 =Δ 1 (x 2 ,c 1 d 1 x 2 )Δ 2 (x 2 c 1 d 1 x 2 ,c 2 d 2 x 2 ) GL 3 (D),

we then get R(x 1 ,x 2 )U 1 (x 1 ,x 2 )=R(x 1 ,0)=(13x 1 /210).

Finally, if we denote by

U 2 =13x 1 /2+10010001 GL 3 (D),

then, the matrix R(x 1 ,0) is then equivalent to (100), i.e., R(x 1 ,0)U 2 =(100). Hence, if we define the matrix U=U 1 U 2 GL 3 (D), i.e.,

(3x 1 2 /2x 1 )x 2 +1(9x 1 3 /4+3x 1 2 x 1 1)x 2 3x 1 /2+12x 1 x 2 (3x 1 3 /2+x 1 2 )x 2 2 x 1 x 2 (9x 1 4 /43x 1 3 +x 1 2 +x 1 )x 2 2 +(3x 1 2 /2x 1 )x 2 +12x 1 2 x 2 2 (9x 1 2 12x 1 +4)x 1 x 2 /8(27x 1 4 /16+27x 1 3 /89x 1 2 /4x 1 /4+1/2)x 2 (3x 1 2 /2+x 1 )x 2 +1,

we finally obtain RU=(100).

In the third point of Section 3.3, we saw that the case of a matrix R ∈ D^{q×p} admitting a right-inverse over D can be solved by applying Theorem 8 q times to certain row vectors, of smaller and smaller length, obtained during the process. Hence, we obtain the following corollary.

Corollary 5 ([27], [57], [60], [62])

Let R ∈ D^{q×p} be a matrix which admits a right-inverse over D. Then, for all a_1, ..., a_n ∈ k, there exists U ∈ GL_p(D) such that:

R(x_1, ..., x_n) U(x_1, ..., x_n) = R(a_1, ..., a_n).

We note that, as the matrix R(a_1, ..., a_n) has full row rank over the field k, there always exists a right-inverse V ∈ k^{p×q} such that R(a_1, ..., a_n) V = I_q. Completing V to a matrix (V  V_2) ∈ GL_p(k), where the columns of V_2 ∈ k^{p×(p−q)} form a basis of ker_k(R(a_1, ..., a_n).), we obtain R (U (V  V_2)) = (I_q  0), which also solves Problem 1. Another possibility is to first obtain a matrix W ∈ GL_p(D) such that R(x_1, ..., x_n) W = R(x_1, a_2, ..., a_n) and then compute a Smith form of R(x_1, a_2, ..., a_n) as we did for the row vector case.
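This last constant step is plain linear algebra over k. A minimal sympy sketch, with a hypothetical full row rank matrix standing in for R(a_1, ..., a_n):

    import sympy as sp

    # Hypothetical constant matrix playing the role of R(a_1,...,a_n), q = 2, p = 3.
    Ra = sp.Matrix([[1, 2, 0],
                    [0, 1, 1]])
    q, p = Ra.shape

    V  = Ra.T * (Ra * Ra.T).inv()            # right inverse: Ra*V = I_q
    V2 = sp.Matrix.hstack(*Ra.nullspace())   # columns form a basis of ker(Ra .)
    Vp = sp.Matrix.hstack(V, V2)             # (V V2) is invertible over Q

    assert Ra * V == sp.eye(q)
    assert Ra * Vp == sp.Matrix.hstack(sp.eye(q), sp.zeros(q, p - q))
    assert Vp.det() != 0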

Remark 5

In [38], it was shown how a certain transformation maps a matrix R with entries in a Laurent polynomial ring D=k[x 1 ,...,x n ,x 1 1 ,...,x n 1 ], where k is a field, and which admits a right-inverse over D to a matrix R ¯ with entries in D ¯=k[x 1 ,...,x n ] and which admits a right-inverse over D ¯. Hence, we can use a QS-algorithm to solve Problems 2 and 1 over D. See [38] for more details. See also Section 9.3 for explicit examples. Finally, a new algorithm has recently been developed in [1].

3.4.4. Computation of bases of free modules

If R ∈ D^{q×p} is a matrix which admits a right-inverse over D, then, in Section 3.2, we showed that a basis of the free D-module M = D^{1×p}/(D^{1×q} R) is defined by {π(T_i)}_{1 ≤ i ≤ (p−q)}, where π : D^{1×p} → M denotes the canonical projection onto M and T_i is the i-th row of the matrix T ∈ D^{(p−q)×p} defined by:

U^{−1} = (R^T  T^T)^T ∈ GL_p(D).

Example 6

Let us consider again Example 5. If we consider d i =/x i instead of x i , namely, D=Q[d 1 ,d 2 ], R=(d 1 d 2 2 +13d 2 /2+d 1 12d 1 d 2 )D 1×3 , denote by x=(x 1 ,x 2 ,x 3 ) and choose F=C (R 3 ), we then obtain that the linear system of PDEs

ker F (R.)={y=(y 1 y 2 y 3 ) T F 3 |d 1 d 2 2 y 1 (x)+y 1 (x)+3 2d 2 y 2 (x)+d 1 y 2 (x)y 2 (x)+2d 1 d 2 y 3 (x)=0}

admits the parametrization (y 1 (x)y 2 (x)y 3 (x)) T =Q(z 1 (x)z 2 (x)) T , where Q is the matrix of differential operators formed by the last two columns of the matrix U defined in Example 5 and z=(z 1 z 2 ) T is any arbitrary element of F 2 , i.e.:

y 1 =(9 4d 1 3 +3d 1 2 d 1 1)d 2 z 1 3 2d 1 z 1 +z 1 2d 1 d 2 z 2 ,y 2 =9 4d 1 4 3d 1 3 +d 1 2 +d 1 d 2 2 z 1 +3 2d 1 2 d 1 d 2 z 1 +z 1 +2d 1 2 d 2 2 z 2 ,y 3 =27 16d 1 4 +27 8d 1 3 9 4d 1 2 1 4d 1 +1 2d 2 z 1 +3 2d 1 2 +d 1 d 2 z 2 +z 2 .

Finally, if we denote by TD 2×3 the matrix formed by the last two rows of the matrix U 1 , namely,

d 1 d 2 101 4(3d 1 2 2d 1 )d 2 2 +1 8(9d 1 3 +12d 1 2 4d 1 )d 2 1 4(3d 1 2)d 2 1 2(3d 1 2 2d 1 )d 2 ,

we then have TQ=I 2 , i.e., the parametrization Q of ker F (R.) is injective.

Now, if M = D^{1×p}/(D^{1×q} R) is a projective D-module which is defined by a matrix R ∈ D^{q×p} that does not have full row rank, then, using Proposition 2, we first compute a full row rank matrix R′ ∈ D^{q′×p′} satisfying

MM =D 1×p /(D 1×p R ),

and we then apply the previous QS-algorithm to R D q ×p to obtain U GL p (D) such that R U=(I q 0). Let S D p ×q , Q D p ×(p q ) , T D (p q )×p be the matrices defined by:

U=(S Q ),U 1 =R T .

Then, we have the following split exact sequence:

0D 1×q .R D 1×p .Q D 1×(p q ) 0. .S .T (15)

We now need to precisely describe the isomorphism between M and M in order to get a basis of M from one of M . In order to do that, we take the same notations as the ones used at the end of Section 3.1, namely, R 1 =R, T 1 =(R 1 T 0 T ) T , R =(T 1 S 2 ), p 0 =p, p 1 =q, q =p 1 +p 3 , p =p 0 +p 2 . We first easily check that we have the following commutative exact diagram

D 1×p 1 .R 1 D 1×p 0 πM0.X.I p 0 id M D 1×(p 1 +p 3 ) .T 1 D 1×p 0 πM0,

where X=(I q T 0 T ) T . Moreover, we also have the commutative exact diagram

D 1×(p 1 +p 3 ) .T 1 D 1×p 0 πM0.Z.YσD 1×(p 1 +p 3 ) .R D 1×(p 0 +p 2 ) π M 0,

where Y=(I p 0 T 0 T ) T , Z=(I p 1 T 0 T ) T and the isomorphism σ is defined by:

m =π (λ),λ=(λ 1 λ 2 )D 1×(p 0 +p 2 ) ,σ(m )=π(λ 1 ).

Combining the two commutative exact diagrams, we then obtain the following one:

D 1×p 1 .R 1 D 1×p 0 πM0.(ZX).YσD 1×(p 1 +p 3 ) .R D 1×(p 0 +p 2 ) π M 0.

Hence, if we denote by {f i } 1i(p q ) the standard basis of D 1×(p q ) , using (15), we then obtain that {σ(π (f i T ))=π(f i (T Y))} 1i(p q ) is a basis of M, i.e., a basis of M is defined by taking the residue classes of the rows of (T Y)D (p q )×p 0 .

We can check that the D-morphism σ 1 :MM is defined by:

m=π(λ),λD 1×p 0 ,σ 1 (m)=π (λY T ).

Then, using (15), we then obtain the following split exact sequence

D 1×q .RD 1×p .(Y T Q )D 1×(p q ) 0, .S .(T Y)

where SD p×q is a generalized inverse of R, i.e., S satisfies RSR=R ([44]). If we denote by T =(T 1 T 2 ), where T 1 D (p q )×p and T 2 D (p q )×p 2 and Q =((Q 1 ) T (Q 2 ) T ) T , where Q 1 D p×(p q ) and Q 2 D p 2 ×(p q ) , we then get

Y T Q =Q 1 ,T Y=T 1 ,

i.e., we need to select the first p columns of T and the first p rows of Q .

Remark 6

If the free D-module M=D 1×p /(D 1×q R) is defined by the finite free resolution (8), where R 1 =R, p 0 =p and p 1 =q, we point out that we only apply once the QS-algorithm to the matrix R in order to obtain a basis of M contrary to the algorithm developed in [27] where the QS-algorithm is applied m times. Hence, our algorithm is generally more efficient than the one developed in [27].

If F is a D-module, then applying the functor hom D (·,F) to the previous split exact sequence, by 2 of Proposition 1, we then obtain the following split exact sequence:

F q R.F p Q 1 .F (p q ) 0. S. T 1 .

The system ker F (R.) admits the injective parametrization Q 1 , namely:

ker F (R.)=Q 1 F (p q ) ,T 1 Q 1 =I p q .

Remark 7

Let us consider RD q×p and let us suppose that the D-modules im D (.R), ker D (.R) and coim D (.R)D 1×q /ker D (.R) are free. We now show how to use the previous results to compute a basis of these free D-modules:

  1. A basis of im D (.R)=D 1×q R can be obtained as follows: we first compute the first syzygy D-module of im D (.R) and we obtain a matrix R 2 D r×q satisfying ker D (.R)=D 1×r R 2 . Let us denote by M 2 =D 1×q /(D 1×r R 2 )D 1×q R. Using the method previously described, we can compute a basis of the free D-module M 2 . We get Q 2 D q×l and T 2 D l×q such that we have the exact split sequence

    D 1×r .R 2 D 1×q .Q 2 D 1×l 0, .S 2 .T 2

    where S 2 D q×r denotes a generalized inverse of R 2 . A basis of D 1×q R is then given by the D-linearly independent rows of the matrix T 2 RD l×p and we have D 1×q R=D 1×l (T 2 R).

  2. Using the same notations as before, we have ker D (.R)=D 1×r R 2 and a basis of the free D-module ker D (.R) can then be obtained by computing a basis of D 1×r R 2 as it was shown in the previous point.

  3. Using again the same notations as in the first point, we get

    coim D (.R)=D 1×q /ker D (.R)=D 1×q /(D 1×r R 2 ),

    and a basis of coim D (.R) can be computed using the general method previously described in this section.

To finish, all the algorithms presented in this section were implemented in the package QuillenSuslin ([13]). See the Appendix for more details and examples.

4. Flat multidimensional linear systems

4.1. Computation of flat outputs of flat multidimensional systems

Our first motivation to study and implement constructive versions of the Quillen-Suslin theorem was the computation of flat outputs and injective parametrizations of flat multidimensional linear systems and, particularly, differential time-delay systems. Let us first recall the main ideas of flat systems and their applications in control theory.

A non-linear ordinary differential control system defined by ẋ = f(x, u) is said to be flat if there exist outputs y of the form y = h(x, u, u̇, ..., u^{(r)}) such that we have:

x = ϕ(y, ẏ, ..., y^{(s)}),  u = φ(y, ẏ, ..., y^{(s)}).

The output y is then called a flat output of the control system ẋ = f(x, u). See [16], [17] and the references therein for more details and references. We can prove that the trajectories of a flat system are in a one-to-one correspondence with those of a controllable linear ordinary differential system having an arbitrary state dimension but the same number of inputs, i.e., with those of a Brunovský canonical system ([17]). We say that a flat non-linear system is Lie-Bäcklund equivalent to a controllable linear ordinary differential system ([17]). Controllable linear systems form the simplest class of systems studied in control theory and a large literature has been developed for the analysis and the synthesis of this class of control systems. This result, as well as the fact that many classes of non-linear control systems commonly used in the literature were proved to be flat, has popularized this class of systems in the control theory community. The motion planning problem was shown to be easily tractable for flat systems and it was illustrated on several examples in the literature ([16], [17]). Finally, the fact that the trajectories of a flat non-linear system are in a one-to-one correspondence with the ones of a linear controllable system can be used to construct feedback laws which stabilize a flat non-linear system around a given trajectory (tracking problem) ([16], [17]). See also [48] for applications to optimal control.

Unfortunately, no general algorithm is known for checking whether or not a non-linear control system is flat and for computing flat outputs, despite many efforts of the mathematical and control theory communities. We refer the reader to [67] for a historical account of the main developments of the underlying mathematical problem, the Monge problem, which was studied by Hadamard, Hilbert, Cartan and Goursat.

We illustrate these definitions on the model of a vertical take-off and landing aircraft considered in [17], namely,

ẍ(t) = −u_1(t) sin θ(t) + ε u_2(t) cos θ(t),  z̈(t) = u_1(t) cos θ(t) + ε u_2(t) sin θ(t) − 1,  θ̈(t) = u_2(t),   (16)

where ε is a small parameter. It is proved in [17] that the smooth solutions of (16) can be parametrized by means of the following non-linear differential operator

y 1 y 2 x=y 1 εy ¨ 1 (y ¨ 1 ) 2 +(y ¨ 2 +1) 2 ,z=y 2 εy ¨ 2 +1 (y ¨ 1 ) 2 +(y ¨ 2 +1) 2 ,θ= arctan y ¨ 1 y ¨ 2 +1,(17)

where y 1 and y 2 are two arbitrary smooth functions satisfying the following condition:

tR + ,(y ¨ 1 (t)) 2 +(y ¨ 2 (t)+1) 2 0.

Moreover, the arbitrary functions y 1 and y 2 can be expressed in terms of the system variables as follows:

y 1 =x+εsinθ,y 2 =z+εcosθ.(18)

Hence, (y 1 ,y 2 ) is a flat output of the non-linear system (16) and its knowledge gives a way to generate the trajectories of (16). Finally, the flat ordinary differential system (16) is Lie-Bäcklund equivalent to the Brunovský linear system defined by

y 1 (4) =v 1 ,y 2 (4) =v 2 ,(19)

under the invertible transformation (η 1 =u 1 εθ ˙ 2 and η 2 =η ˙ 1 ):

y 1 =x+εsinθ,y 2 =z+εcosθ,v 1 =η ˙ 2 sinθ+2η 2 θ ˙cosθ+η 1 u 2 cosθη 1 θ ˙ 2 sinθ,v 2 =η ˙ 2 cosθ2η 2 θ ˙sinθη 1 u 2 sinθη 1 θ ˙ 2 cosθ.

The study of flat linear ordinary differential time-delay systems has recently been initiated in [18], [32]. As for non-linear ordinary differential systems, this class of systems shares some interesting mathematical properties which can be used for motion planning and tracking, as shown in [32] and the references therein on explicit examples. However, the theory of flat linear ordinary differential time-delay systems is still in its infancy and some concepts developed for non-linear ordinary differential systems seem to have no counterparts for this second class of systems. In particular, for flat linear differential time-delay systems, we can wonder which kind of linear systems could play a role similar to the one played by the linear controllable systems (or Brunovský systems) for flat non-linear systems. To answer this question, we first need to understand which kind of equivalence plays, for differential time-delay linear systems, a role similar to the one played by the Lie-Bäcklund equivalence for non-linear differential systems. To our knowledge, these important questions have not been tackled in the literature till now. This section aims at constructively answering these two questions.

As differential time-delay systems form a particular class of multidimensional systems, we can define the concept of a flat multidimensional linear system in terms of the existence of an injective parametrization of the trajectories of the system ([5], [44], [65]).

Definition 7

Let D = k[x_1, ..., x_n], R ∈ D^{q×p} and F a D-module. Then, the system ker_F(R.) is called flat if there exist Q ∈ D^{p×m} and T ∈ D^{m×p} satisfying:

ker_F(R.) = Q F^m,  T Q = I_m.

In terms of the module-theoretic/behaviour approach recently developed for multidimensional linear systems ([5], [41], [34], [65], [66]), it means that the module M intrinsically associated with the multidimensional linear system is free over the commutative polynomial ring D of functional operators ([5], [16], [17], [32], [44]).

Proposition 3 ([5])

Let D = k[x_1, ..., x_n], R ∈ D^{q×p}, M = D^{1×p}/(D^{1×q} R) and F an injective cogenerator D-module. Then, ker_F(R.) is a flat system iff the D-module M is free. Moreover, the bases of the free D-module M are then in a one-to-one correspondence with the flat outputs of ker_F(R.).

Remark 8

Using the results given at the end of Section 2, we obtain that the freeness of the D-module M is a sufficient condition for ker_F(R.) to be a flat system, even when F is not an injective cogenerator D-module.

Using Proposition 3 and the Quillen-Suslin theorem (see 4 of Theorem 2), we then get the following important corollary.

Corollary 6

Let D=k[x 1 ,...,x n ], RD q×p , M=D 1×p /(D 1×q R) and F be an injective cogenerator D-module. Then, ker F (R.) is a flat system iff the D-module M is projective.

When R has full row rank, then, using Theorem 3, a constructive test for flatness of multidimensional linear systems with constant coefficients consists in checking whether the q×q minors of R have no common complex zeros ([24], [62]). This can be checked algorithmically by computing a Gröbner or Janet basis of the ideal I of D generated by the q×q minors of R and checking whether or not 1 ∈ I. We can also check whether or not R admits a right-inverse over D ([4], [5], [44]).

In the general case, using Theorem 3, the projectiveness of M can constructively be obtained by verifying the vanishing of ext D i (N,D), for i=1,...,n, where N is the transposed D-module N=D 1×q /(D 1×p R T ). Other possibilities are to compute the so-called global dimension of M ([57]) by means of Proposition 2 and Corollary 2 as it was shown in [53], check whether or not R admits a generalized inverse S over D, i.e., check for the existence of a matrix SD p×q satisfying RSR=R ([44]) or check some straightforward conditions on the so-called Fitting ideals of M as it is explained in [11].

However, we point out that, till now, there has been no easy way for obtaining the flat outputs of the system, i.e., the bases of the free D-module M. Hence, we are led to use constructive versions of the Quillen-Suslin theorem developed in the symbolic algebra community ([19], [27], [29], [37]) for computing a basis of the free D-module M. It was our first main purpose for developing the package QuillenSuslin ([13]). See the Appendix for more details and examples.

Example 7

Let us consider the following differential time-delay linear system ([32]):

ẏ_1(t) − y_1(t−h) + 2 y_1(t) + 2 y_2(t) − 2 u(t−h) = 0,  ẏ_1(t) + ẏ_2(t) − u̇(t−h) − u(t) = 0.   (20)

Let us denote by D = Q[d/dt, δ] the commutative ring of differential time-delay operators with rational coefficients, where (d/dt) y(t) = ẏ(t) and (δ y)(t) = y(t−h), h ∈ R_+. Let us also denote the matrix of functional operators defining (20) by:

R = [ d/dt − δ + 2    2       −2 δ          ]
    [ d/dt            d/dt    −(d/dt) δ − 1 ]  ∈ D^{2×3}.

Using the algorithms developed in [5], [44] and implemented in the package OreModules ([4]), we obtain that R admits a right-inverse over D defined by

S=1 200d dtδ+22δd dt2,

a fact proving that M=D 1×3 /(D 1×2 R) is a projective, and thus, a free D-module by the Quillen-Suslin theorem (see 4 of Theorem 2).
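As a cross-check of the minor/Gröbner basis test recalled above, the following sympy sketch treats d/dt and δ as commuting indeterminates and verifies that the 2×2 minors of R generate the whole ring; the matrix is re-entered here from (20), so the signs reflect our reading of the system.

    import sympy as sp
    from itertools import combinations

    d, delta = sp.symbols('d delta')     # d stands for d/dt, delta for the time-delay operator

    # System matrix of (20), with d/dt and delta viewed as polynomial indeterminates.
    R = sp.Matrix([[d - delta + 2, 2, -2*delta],
                   [d,             d, -d*delta - 1]])

    minors = [R[:, list(cols)].det() for cols in combinations(range(3), 2)]
    G = sp.groebner(minors, d, delta, order='lex')
    print(G)    # GroebnerBasis([1], ...): the ideal of 2x2 minors is the whole ring,
                # so M = D^{1x3}/(D^{1x2} R) is projective, hence free (Quillen-Suslin)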

Using a constructive version of the Quillen-Suslin theorem (see also the heuristic methods developed in [5], [44]), we obtain the following split exact sequence of D-modules

0 ⟶ D^{1×2} ⟶(.R) D^{1×3} ⟶(.Q) D ⟶ 0,   (21)

where the splitting maps are .S : D^{1×3} → D^{1×2} and .T : D → D^{1×3}, with T = (1  0  0) and Q = (1/2) ( 2,  −(d^2/dt^2) δ + (d/dt) δ^2 − d/dt + δ − 2,  (d/dt) δ − d^2/dt^2 )^T.

Using the split exact sequence (21), we can check that we have

M=D 1×3 /(D 1×2 R)(D 1×3 Q)=D,

i.e., we find again that M is a free D-module of rank 1.

Now, if F is a D-module (e.g., F=C (R)), by applying the functor hom D (·,F) to the split exact sequence (21), we then obtain the following split exact sequence of D-modules (see 2 of Proposition 1):

0F 2 R.F 3 Q.F0. S. T.

Hence, for any D-module F, we get that the system ker F (R.) defined by (20) is parametrized by the following injective parametrization:

∀ x_1 ∈ F:  y_1(t) = x_1(t),  y_2(t) = (1/2) (−ẍ_1(t−h) + ẋ_1(t−2h) − ẋ_1(t) + x_1(t−h) − 2 x_1(t)),  u(t) = (1/2) (ẋ_1(t−h) − ẍ_1(t)).   (22)

We refer the reader to [53], [55] for a constructive algorithm for the computation of bases, and thus, of flat outputs of a class of linear systems defined by partial differential equations with polynomial or rational coefficients. See [54], [53] for an implementation of this algorithm in the package Stafford of the library OreModules.

Finally, we say that the D=k[x 1 ,...,x n ]-module M=D 1×p /(D 1×q R) is π-free, where πD, if the D π -module D π D M is free, where D π denotes the localization

D π ={a/b|aD,b=π i ,iZ + }

of the ring D with respect to the multiplicatively closed subset S π ={1,π,π 2 ,...} of D ([57]). By extension, we can define the concept of a π-flat system. See [5], [32], [33] for more details. Given a finitely presented D=k[x 1 ,...,x n ]-module M=D 1×p /(D 1×q R), constructive algorithms computing the corresponding polynomials π and basis of the free D π -module D π D M were given in [5] and implemented in the OreModules package ([4], [6]). However, we can also use Remark 5 to compute the corresponding basis in the case where π=x i . We can also follow the simple idea developed in Section 9.4.3.

4.2. Equivalences of flat multidimensional systems

Using a QS-algorithm, the purpose of this section is to prove that a flat multidimensional linear system with constant coefficients is algebraically equivalent to a controllable 1-D linear system obtained by setting all but one functional operator to 0 in the system matrix. In particular, the algebraic equivalence we use is the natural equivalence developed in module theory, namely, two multidimensional linear systems are said to be algebraically equivalent if their canonically associated modules are isomorphic over the underlying commutative polynomial ring D of functional operators. This equivalence is nothing else than the natural substitute for the Lie-Bäcklund equivalence for multidimensional linear systems. In the case of ordinary differential linear systems, we already know that Lie-Bäcklund transformations correspond to isomorphisms of the underlying modules (see, e.g., [17] and the references therein). Finally, we prove that a flat differential time-delay linear system is algebraically equivalent to the controllable ordinary differential system without delays, namely, the system obtained by setting all the delay amplitudes to 0. This last system plays a role similar to the one played by the Brunovský canonical form in the non-linear case.

We have the following corollary of Theorem 8.

Corollary 7

Let D=k[x 1 ,...,x n ], RD q×p be a full row rank and F an injective cogenerator D-module. The flat multidimensional system ker F (R(x 1 ,...,x n ).) is then D-isomorphic to a controllable 1-D linear system obtained by setting any functional operator to 0. For instance, the system ker F (R(x 1 ,...,x n ).) is D-isomorphic to the system ker F (R(x 1 ,0,...,0).) and the F-solutions of ker F (R(x 1 ,...,x n ).) are in a one-to-one correspondence with the ones of ker F (R(x 1 ,0,...,0).).

Proof

Using Proposition 3, we obtain that M=D 1×p /(D 1×q R) is a free D-module. Using the fact that R has full row rank, by Theorem 8, there exists a matrix U GL p (D) such that RU=R ¯, where R ¯=R(x 1 ,0,...,0). Therefore, we have the following commutative exact diagram

0000D 1×q .RD 1×p πM0.Uf0D 1×q .R ¯D 1×p κM 0,000

where κ:D 1×p M denotes the canonical projection onto M and the D-isomorphism f:MM is defined by:

m=π(λ),λD 1×p ,f(m)=κ(λU).

Applying the functor hom D (·,F) to the previous commutative exact diagram and using the fact that horizontal exact sequences split because MM is a free D-module, we then obtain the following commutative exact diagram:

0000F q R.F p π ker F (R.)0U.f 0F q R ¯.F p κ ker F (R ¯.)0.000

The D-isomorphism f⋆ : ker_F(R̄.) ⟶ ker_F(R.) is defined by:

∀ η ∈ ker_F(R̄.),  f⋆(η) = U η.

Hence, f⋆ induces a one-to-one correspondence between the trajectories of ker_F(R̄.) and those of ker_F(R.), and (f⋆)^{−1} is defined by:

∀ ζ ∈ ker_F(R.),  (f⋆)^{−1}(ζ) = U^{−1} ζ.

Using Corollary 2 and the end of the Section 3.4, we can always reduce the case of a non full row rank matrix R to the case of a full row rank matrix R and then apply Corollary 7 to R .

Despite the fact that Corollary 7 is a straightforward consequence of the Quillen-Suslin theorem, its applications to flat multidimensional systems seem to have been ignored. In particular, it shows that the Lie-Bäcklund equivalence of the non-linear case needs to be replaced by the isomorphism equivalence in the multidimensional case. Moreover, the right substitute for the Brunovský linear system of the non-linear case becomes the controllable 1-D linear system with constant coefficients obtained by setting all but one functional operator to 0.

Let us illustrate Corollary 7 on two examples.

Example 8

Let us consider again the differential time-delay linear system defined by (20). In Example 7, we proved that the corresponding D-module M is free. It is well-known that F=C (R) is not an injective D-module but, by Remark 8, the system ker F (R.) is flat as the D-module M is free. Hence, according to Corollary 7, the flat system (20) is algebraically equivalent to the following controllable ordinary differential linear system

ż_1(t) + 2 z_1(t) + 2 z_2(t) = 0,  ż_1(t) + ż_2(t) − v(t) = 0,   (23)

i.e., the system obtained by setting δ to 0 in the matrix R. Applying the constructive QS-algorithm to R, after a few computations, we obtain that the invertible transformation which bijectively maps the trajectories of (20) to the ones of (23) is defined by:

y_1(t) = z_1(t),  y_2(t) = (1/2)(ż_1(t−2h) + z_1(t−h)) + z_2(t) + v(t−h),  u(t) = (1/2) ż_1(t−h) + v(t);
z_1(t) = y_1(t),  z_2(t) = −(1/2) y_1(t−h) + y_2(t) − u(t−h),  v(t) = −(1/2) ẏ_1(t−h) + u(t).   (24)

Applying again Corollary 7 to (23), we get that the ordinary differential system (23) is equivalent to the purely algebraic system

2x 1 (t)+2x 2 (t)=0,w(t)=0,(25)

i.e., the system obtained by setting δ and d/dt to 0 in R. Applying a QS-algorithm to R, we obtain that a transformation which bijectively maps the trajectories of (23) to the ones of (25) is defined by:

z 1 (t)=x 1 (t),z 2 (t)=x 2 (t)1 2x ˙ 1 (t),v(t)=w(t)1 2x ¨ 1 (t)+x ˙ 1 (t)+x ˙ 2 (t),x 1 (t)=z 1 (t),x 2 (t)=z 2 (t)+1 2z ˙ 1 (t),w(t)=v(t)+z ˙ 1 (t)+z ˙ 2 (t).(26)

Combining (24) and (26), we finally obtain a one-to-one correspondence between the solutions of (20) and (25).

We note that the solutions of (20) (resp., (23)) are parametrized by means of (24) (resp., (26)), where z_1, z_2 and v (resp., x_1, x_2 and w) are not arbitrary functions as they must satisfy (23) (resp., (25)). However, solving the algebraic system (25), we obtain that x_2 = −x_1 and w = 0. Substituting these values into (26) and the result into (24), we find that an injective parametrization of (20) is defined by (22).

Finally, we can check that an injective parametrization of (23) is obtained by setting δ=0 in the matrix of operators defining (22), i.e.:

ψF,z 1 (t)=ψ(t),z 2 (t)=1 2(ψ ¨(t)+2ψ(t)),v(t)=1 2ψ ¨(t).

Similarly, if we set δ and d/dt to 0 in the matrix of operators defining (22), we obtain the following injective parametrization of (25):

∀ φ ∈ F,  x_1(t) = φ(t),  x_2(t) = −φ(t),  w(t) = 0.

These results can be obtained by applying the functor (D/(Dδ)) D · (resp., (D/Dδ+Dd dt) D ·) to the split exact sequence (21) to get the corresponding split exact sequence of D/(Dδ)-modules (resp., D/Dδ+Dd dt-modules) ([57]).

We consider another time-delay system appearing in the literature of control theory.

Example 9

Let us consider the differential time-delay system of neutral type studied in [28], where a denotes a real constant:

ẋ_1(t) + x_1(t) − u(t) = 0,  ẋ_2(t) − ẋ_2(t−h) − x_1(t) + a x_2(t) = 0.   (27)

We consider the ring D = Q(a)[d/dt, δ], the system matrix of (27) defined by

R = [ d/dt + 1    0                      −1 ]
    [ −1          d/dt − (d/dt) δ + a     0 ]  ∈ D^{2×3},

and the D-module M=D 1×3 /(D 1×2 R). R admits a right-inverse defined by

S=01001d dt1D 2×3 ,

a fact which proves that M is a projective, and thus, a free D-module by the Quillen-Suslin theorem. Even if the D-module F = C^∞(R) is not injective, by Remark 8, the fact that the D-module M is free is a sufficient condition for ker_F(R.) to be a flat system. By Corollary 7, (27) is equivalent to the following ordinary differential system

ż_1(t) + z_1(t) − v(t) = 0,  ż_2(t) + a z_2(t) − z_1(t) = 0,   (28)

i.e., the system obtained by setting δ to 0 in the matrix R, under the corresponding invertible transformations:

x_1(t) = z_1(t) − ż_2(t−h),  x_2(t) = z_2(t),  u(t) = v(t) − z̈_2(t−h) − ż_2(t−h);
z_1(t) = x_1(t) + ẋ_2(t−h),  z_2(t) = x_2(t),  v(t) = u(t) + ẍ_2(t−h) + ẋ_2(t−h).

Hence, the smooth solutions of the differential time-delay system (27) are in one-to-one correspondence with the ones of the ordinary differential system (28).

Using Corollary 5, we can also set the different functional operators appearing in the system matrix of a flat multidimensional linear system to any particular value belonging to k. Applying this result to the class of flat differential time-delay linear systems, we show that a flat differential time-delay linear system is equivalent to the controllable ordinary differential linear system obtained by setting all the time-delay amplitudes to 0, i.e., to the corresponding ordinary differential system without delays.

Corollary 8

Let D = k[d/dt, δ_1, ..., δ_{n−1}] be the ring of differential incommensurable time-delay operators, namely, the amplitudes h_i ∈ R_+ of the time-delay operators

(δ_i y)(t) = y(t − h_i),  i = 1, ..., n−1,

are such that the Q-vector space generated by the positive real numbers h_1, ..., h_{n−1} is n-dimensional. Let us consider R ∈ D^{q×p} which admits a right-inverse over D and F an injective cogenerator D-module. Then, the time-invariant flat differential time-delay linear system ker_F(R(d/dt, δ_1, ..., δ_{n−1}).) is D-isomorphic to the controllable ordinary differential linear system ker_F(R(d/dt, 1, ..., 1).) obtained by setting the amplitudes of all the delays to 0, i.e., it is equivalent to the corresponding linear system without delays. In particular, the F-solutions of the system ker_F(R(d/dt, δ_1, ..., δ_{n−1}).) are in a one-to-one correspondence with those of ker_F(R(d/dt, 1, ..., 1).).

Let us illustrate Corollary 8 on two examples.

Example 10

Let us consider again the flat differential time-delay linear system defined by (20). Applying Corollary 8 on (20), we obtain that (20) is equivalent to the ordinary differential linear system obtained by substituting h=0 into (20), i.e., by setting δ=1 in the matrix R defined in Example 7, namely:

ż_1(t) + z_1(t) + 2 z_2(t) − 2 v(t) = 0,  ż_1(t) + ż_2(t) − v̇(t) − v(t) = 0.   (29)

Using a QS-algorithm, we then obtain that the following transformation

z 1 (t)=y 1 (t),z 2 (t)=1 2(y ˙ 1 (t)y ˙ 1 (th)+y 1 (t)y 1 (th))+y 2 (t)+u(t)u(th),v(t)=1 2(y ˙ 1 (t)y ˙ 1 (th))+u(t),(30)

whose inverse is defined by

y 1 (t)=z 1 (t),y 2 (t)=1 2(z ˙ 1 (th)z ˙ 1 (t2h)+z 1 (th)z 1 (t))+z 2 (t)+v(th)v(t),u(t)=1 2(z ˙ 1 (th)z ˙ 1 (t))+v(t),

bijectively maps the trajectories of (20) to the ones of (29). An injective parametrization of (29) can then be obtained by taking h=0 in (22), i.e.:

ψF,z 1 (t)=ψ(t),z 2 (t)=1 2(ψ ¨(t)+ψ(t)),v(t)=1 2(ψ ¨(t)+ψ ˙(t)).

Example 11

We consider again the differential time-delay system of neutral type defined by (27). As we have already proved that (27) is a flat system, by Corollary 8, we know that (27) is equivalent to the ordinary differential linear system

ż_1(t) + z_1(t) − v(t) = 0,  −z_1(t) + a z_2(t) = 0,   (31)

obtained by setting h=0 in (27) . Using a QS-algorithm, we then obtain that the invertible transformation defined by

x 1 (t)=z 1 (t)+z ˙ 2 (t)z ˙ 2 (th),x 2 (t)=z 2 (t),u(t)=v(t)+z ¨ 2 (t)z ¨ 2 (th)+z ˙ 2 (t)z ˙ 2 (th),

z 1 (t)=x 1 (t)x ˙ 2 (t)+x ˙ 2 (th),z 2 (t)=x 2 (t),v(t)=u(t)x ¨ 2 (t)+x ¨ 2 (th)x ˙ 2 (t)+x ˙ 2 (th),

bijectively maps the trajectories of (27) to the ones of (31).

In the previous examples, we note that the invertible transformations can easily be computed by hand but it is generally not the case for more complicated examples. Hence, we need to use an implementation of constructive versions of the Quillen-Suslin theorem for computing the invertible transformations and the injective parametrizations of flat multidimensional linear systems. Such an implementation has recently been done in the package QuillenSuslin ([13]) which, with the library OreModules ([4]), allows us to effectively handle these difficult computations.

As for the flat non-linear ordinary differential systems, using the fact that there is a one-to-one correspondence between the trajectories of the flat differential time-delay systems with those of the ordinary differential system without delays, we can use stabilizing controllers of the latter in order to stabilize the former. This approach echoes the Smith predictor method. We illustrate this idea on an explicit example. More general ones can be handled in a similar way or will be studied in a future publication.

Example 12

The differential time-delay linear system defined by

x ˙(t)+x(th)=u(t)(32)

is flat as we have the following injective parametrization of (32):

x(t)=y(t),u(t)=y ˙(t)+y(th).

We easily check that (32) is algebraically equivalent to the controllable ordinary differential system obtained by setting h=0 in (32), namely,

z ˙(t)+z(t)=v(t)(33)

under the following invertible transformation:

x(t) = z(t),  u(t) = v(t) − (z(t) − z(t−h));  z(t) = x(t),  v(t) = u(t) + (x(t) − x(t−h)).   (34)

The transfer functions of (32) and (33) are then defined by:

p_1 = 1/(s + e^{−hs}),  p_2 = 1/(s + 1).

Let us show how to use the invertible transformation (34) in order to parametrize all the stabilizing controllers of p 1 by means of the ones of p 2 . Let us consider the algebra A=RH of proper and stable real rational transfer functions and the Hardy algebra B=H (C + ) of bounded analytic functions in the right half-plane C + ([7], [50], [52], [51], [60]). We recall that A is a R-subalgebra of B. As p 2 A, Zames´ parametrization of all stabilizing controllers of p 2 has the form ([51], [60]):

qA,c 2 (q)=q 1+qp 2 .

Now, using the Laplace transform of (34) ([7]), we get

z ^=x ^,v ^=u ^+(1e hs )x ^,

where z ^ denotes the Laplace transform of z and similarly for x, u and v. Using the fact that v ^=c 2 (q)z ^, we obtain the following stabilizing controllers of p 1 :

qA,u ^=(1e hs c 2 (q))x ^.

Let us check that the controller c 1 (q)=(1e hs c 2 (q)) internally stabilizes p 1 :

1 1p 1 c 1 (q)=s+e hs s+1c 2 (q)=(s+e hs ) (s+1)1 1c 2 (q) (s+1),p 1 1p 1 c 1 (q)=1 s+1c 2 (q)=1 (s+1)1 1c 2 (q) (s+1),c 1 (q) 1p 1 c 1 (q)=(s+e hs ) (s+1)1 1c 2 (q) (s+1)(1e hs c 2 (q)),=(s+e hs ) (s+1)1e hs 1c 2 (q) (s+1)c 2 (q) 1c 2 (q) (s+1).

Then, using the fact that for all qA, we have

1 1c 2 (q) (s+1),c 2 (q) 1c 2 (q) (s+1)A,

as c 2 (q) internally stabilizes p 2 and (s+e hs )/(s+1),1e hs B, we obtain

qA,1 (1p 1 c 1 (q)),p 1 (1p 1 c 1 (q)),c 1 (q) (1p 1 c 1 (q))B,

which shows that c 1 (q) internally stabilizes p 1 for all qA. For more details, see [7], [50], [52], [51], [60]. Following [51], we can then find the general Q-parametrization of all stabilizing controllers of p 1 .

Taking q=0, the internal stabilizing controller c 1 (0)=(1e hs ) of p 1 , i.e.,

u(t)=x(t)+x(th),(35)

L 2 (R + )L 2 (R + )-stabilizes (32). See [7] for more details. We note that a similar result holds if we consider the Wiener algebra A ^ ([7], [51], [60]) instead of B=H (C + ). Hence, the controller defined by (35) also L (R + )L (R + )-stabilizes (32).

Finally, using some results of [51] and the fact that c 1 (0)B, we obtain that p admits the following coprime factorization p=n/d

n=p 1 (0) (1p 1 c 1 (0))=1 (s+1)B,d=1 (1p 1 c 1 (0))=(s+e hs ) (s+1)B,

as we can easily check that the following Bézout identity holds:

(s + e^{−hs})/(s + 1) − (e^{−hs} − 1) · 1/(s + 1) = 1.

In particular, the stable controller c 1 (0)=(1e hs ) strongly stabilizes p 1 ([51], [60]).
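The closed-loop computations for q = 0 and the Bézout identity above can be checked symbolically, treating e^{−hs} as an extra symbol E and using the sign conventions adopted here:

    import sympy as sp

    s, E = sp.symbols('s E')       # E stands for exp(-h*s)

    p1 = 1/(s + E)                 # transfer function of (32)
    c1 = -(1 - E)                  # stabilizing controller c_1(0) = -(1 - e^{-hs})

    assert sp.simplify(1/(1 - p1*c1) - (s + E)/(s + 1)) == 0     # 1/(1 - p1 c1(0))
    assert sp.simplify(p1/(1 - p1*c1) - 1/(s + 1)) == 0          # p1/(1 - p1 c1(0))

    # Bezout identity of the coprime factorization p1 = n/d:
    n, dfac = 1/(s + 1), (s + E)/(s + 1)
    assert sp.simplify(dfac - (E - 1)*n - 1) == 0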

5. Pommaret´s theorem of Lin-Bose´s conjecture

The purpose of this section is to show how to use a QS-algorithm in order to give a constructive proof of Lin-Bose's conjecture, which was solved by Pommaret in [43]. Let us first recall this conjecture, recently stated in multidimensional systems theory, which generalizes Serre's conjecture ([26]), and let us state a new problem.

Problem 3

Let D = k[x_1, ..., x_n] be a commutative polynomial ring with coefficients in a field k, R ∈ D^{q×p} a full row rank matrix and M = D^{1×p}/(D^{1×q} R) the D-module finitely presented by R. We suppose that M/t(M) is a free D-module.

Does there exist a full row rank matrix R′ ∈ D^{q×p} satisfying M/t(M) = D^{1×p}/(D^{1×q} R′)? If so, compute such a matrix R′.

If we can solve Problem 3, we then have

t(M)=(D 1×q R )/(D 1×q R),

and using the fact that D 1×q RD 1×q R , there exists R D q×q such that:

R=R R .(36)

Let us denote by r=p!/((pq)!q!). The fact that M/t(M) is a projective D-module implies that there is no common zero in the q×q minors {m i } 1ir of R , i.e., there exists a family {p i } 1ir of elements of D satisfying the following Bézout identity:

i=1 r p i m i =1.(37)

Now, using the fact that we have m i =(detR )m i , for i=1,...,r, where the m i denote the q×q-minors of R, we obtain that the following inclusion of ideals of D:

i=1 r Dm i (D(detR )) i=1 r Dm i =D(detR ).

Multiplying (37) by detR , we obtain

detR = i=1 r p i (detR )m i = i=1 r p i m i ,

which shows that D(detR ) i=1 r Dm i and i=1 r Dm i =D(detR ). Hence, the greatest common divisor of the q×q minors {m i } 1ir is then equal to detR .

Hence, solving Problem 3 gives us a way to factorize R in the form R = R″ R′, where R′ ∈ D^{q×p} admits a right-inverse over D and det R″ is the greatest common divisor of the q×q minors of R. The question whether such a factorization can always be achieved was first asked by Lin and Bose in [26] and solved by Pommaret in [43]. See also [63]. It was proved in [43] that this factorization problem is equivalent to Problem 3. The purpose of this paragraph is to give a general constructive algorithm which solves Problem 3, and thus, performs the corresponding factorization. The algorithm has recently been implemented in the package QuillenSuslin. See the Appendix.

Based on the Quillen-Suslin theorem, we first prove that a matrix R satisfying Problem 3 always exists. We then show how to effectively compute it.

The fact that R has full row rank implies that we have the following exact sequence:

0D 1×q .RD 1×p πM0.(38)

Let N=D 1×q /(D 1×p R T ) be the transposed D-module of M (see Remark 1), according to Theorem 3, there exists QD q ×p such that M/t(M)=D 1×p /(D 1×q Q). In particular, using the fact that (D 1×q R)(D 1×q Q), there then exists a matrix PD q×q satisfying R=PQ. We refer the reader to [4] for the implementation of the corresponding algorithms in the library OreModules as well as the large library of examples which demonstrates these results.

Then, we have the following commutative exact diagram:

00t(M)i0D 1×q .RD 1×p πM0.PρD 1×q .QD 1×p π M/t(M)0.00

As, by hypothesis, the D-module M/t(M) is projective, using 1 of Proposition 1, we obtain that the following exact sequence

0D 1×q QD 1×p π M/t(M)0(39)

splits and we obtain

D 1×p M/t(M)(D 1×q Q),

which shows that D 1×q Q is a projective D-module. By the Quillen-Suslin theorem, we obtain that D 1×q Q is then a free D-module.

Let us compute the rank of the free D-module D 1×q Q. Applying the exact functor K D · to the short exact sequence (39), where K=Q(D) denotes the quotient field of D ([57]), we obtain that:

rank D (D 1×q Q)=p rank D ((M/t(M)).

See [57] for more details (Euler characteristic). Similarly with the two short exact sequences (38) and

0t(M) iM ρM/t(M)0,

and, using the fact that K D t(M)=0 because t(M) is a torsion D-module ([57]), we then get:

rank D (M/t(M))= rank D (M)=pq.

Therefore, we obtain rank D (D 1×q Q)=p(pq)=q, which shows that D 1×q Q is a free D-module of rank q, i.e., D 1×q QD 1×q . Computing a basis of this free D-module, we obtain a full row rank matrix R D q×p satisfying

D 1×q Q=D 1×q R ,(40)

which implies that M/t(M)=D 1×p /(D 1×q R ) and we have the following finite free short resolution of M/t(M):

0D 1×q .R D 1×p π M/t(M)0.(41)

We note that if Q has full row rank, we then can take R =Q and q =q.

In order to compute the matrix R D q×p which satisfies (40), we need to compute a basis of the free D-module D 1×q Q. Hence, we can use the first point of Remark 7 to compute a basis of the D-module D 1×q Q.

Algorithm 2

  1. Transpose the matrix R and define the finitely presented D-module:

    N=D 1×q /(D 1×p R T ).

  2. Compute the D-module ext D 1 (N,D). We obtain a matrix QD q ×p such that:

    M/t(M)=D 1×p /(D 1×q Q).

  3. Compute the first syzygy module ker D (.Q) of D 1×q Q.

  4. If ker D (.Q)=0, then Q has full row rank and exit the algorithm with R =Q. Else, denote by Q 2 D q 2 ×q a matrix satisfying ker D (.Q)=D 1×q 2 Q 2 .

  5. Compute a basis of the free D-module:

    L=D 1×q /(D 1×q 2 Q 2 ).

    In particular, we obtain a full row rank matrix BD q×q such that L=π 2 (D 1×q B), where π 2 :D 1×q L denotes the canonical projection on L.

  6. Return the full row rank matrix R =BQD q×p .

Remark 9

The computation of a basis of L gives two matrices P 2 D q ×q and BD q×q such that we have the following split exact sequence

0D 1×q 2 .Q 2 D 1×q π 2 L0ϕD 1×q .P 2 D 1×q 0, .B0

where ϕ:D 1×q L denotes the corresponding isomorphism. We can now check that the matrix R =BQ has full row rank. Let λD 1×q be such that λR =0. Then, we get (λB)Q=0, i.e., λBker D (.Q)=D 1×q 2 Q 2 , and thus, there exists μD 1×q 2 such that λB=μQ 2 . Using the identity BP 2 =I q , we then obtain:

λ=(λB)P 2 =μ(Q 2 P 2 )=0.

We illustrate Algorithm 2 on a simple example.

Example 13

Let us consider the differential time-delay model of a flexible rod with a force applied on one end developed in [32]:

ẏ_1(t) − ẏ_2(t−1) − u(t) = 0,  2 ẏ_1(t−1) − ẏ_2(t) − ẏ_2(t−2) = 0.   (42)

Let us define the ring D=Qd dt,δ of differential time-delay operators with rational coefficients. The system matrix of (42) is defined by:

R = [ d/dt          −(d/dt) δ              −1 ]
    [ 2 (d/dt) δ    −d/dt − (d/dt) δ^2      0 ]  ∈ D^{2×3}.

Let M=D 1×3 /(D 1×2 R) be the D-module associated with (42) and D-module N=D 1×2 /(D 1×3 R T ). Then, N admits the following finite free resolution

0N σD 1×2 .R T D 1×3 .R 2 T D0,

where R 2 T =δ 2 12δd dtδ 2 d dt. The defects of exactness of the complex

0D 1×2 .RD 1×3 .R 2 D0

are then defined by:

ext D 0 (N,D)=ker D (.R)=0, ext D 1 (N,D)=ker D (.R 2 )/(D 1×2 R), ext D 2 (N,D)=D/(D 1×3 R 2 ).

Computing the first syzygy module ker D (.R 2 ) of D 1×2 R, we obtain ker D (.R 2 )=D 1×3 Q, where the matrix Q is defined by:

Q=2δδ 2 +10d dtd dtδ1d dtδd dtδD 3×3 .(43)

We get t(M)(D 1×3 Q)/(D 1×2 R) and reducing the rows of Q with respect to D 1×2 R, we obtain that the only non-trivial torsion element of M is defined by

m=2δy 1 +(δ 2 +1)y 2 ,d dtm=0,

where y 1 , y 2 and y 3 denote the residue classes of the standard basis of D 1×3 in M.

Following Algorithm 2, we compute the first syzygy module ker D (.Q) and obtain ker D (.Q)=DQ 2 , where:

Q 2 =d dtδ1D 1×3 .(44)

We now have to compute a basis of the free D-module L=D 1×3 /(DQ 2 ). Using a constructive version of the Quillen-Suslin theorem, we obtain the split exact sequence

0D .Q 2 D 1×2 .P 2 D0 .S 2 .B

with the following notations:

S 2 =(001) T ,P 2 =1001d dtδ,B=100010.

Computing R =BQ, we obtain that the following full row rank matrix

R =2δδ 2 10d dtd dtδ1D 2×3

satisfies D 1×3 Q=D 1×2 R . Finally, we have the factorization R=R R , where the R is defined by

R =01d dt0,

and satisfies detR =d/dt, where d/dt is the greatest common divisor of the 2×2 minors of R and is the functional operator which annihilates the torsion element m.
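The fact that d/dt is the greatest common divisor of the 2×2 minors of R can be checked directly in sympy, again treating d/dt and δ as commuting indeterminates (R is re-entered from (42)):

    import sympy as sp
    from functools import reduce
    from itertools import combinations

    d, delta = sp.symbols('d delta')   # d = d/dt, delta = time-delay operator of amplitude 1

    # System matrix of the flexible rod model (42).
    R = sp.Matrix([[d,         -d*delta,          -1],
                   [2*d*delta, -d - d*delta**2,    0]])

    minors = [sp.factor(R[:, list(cols)].det()) for cols in combinations(range(3), 2)]
    print(reduce(sp.gcd, minors))      # d : the gcd of the 2x2 minors, matching det(R'') = d/dt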

Using the fact that M/t(M) is a free D-module of rank pq, i.e., there exists an isomorphism

ψ:M/t(M)D 1×(pq) ,

and the exact sequence (41), we then obtain the following exact sequence

0D 1×q .R D 1×p .PD 1×(pq) 0,(45)

where PD p×(pq) is the matrix defining the morphism π ψ in the standard bases of D 1×p and D 1×(pq) . As the exact sequence (45) ends with a free D-module, by 1 of Proposition 1, it splits, i.e., there exist SD p×q and TD (pq)×p such that we have the following Bézout identities:

R T(SP)=I q 00I pq =I p ,(46)
(SP)R T=I p .(47)

Now, we have

RT=R R T=R 00I pq R T

and using (46), we obtain that det((R T T T ) T )=1 and:

detRT=detR 00I pq detR T=detR .

Finally, using the fact that we have proved that detR is the greatest common divisor of the q×q minors of the matrix R, we then have solved the following problem.

Problem 4

Let R ∈ D^{q×p} be a full row rank matrix such that the ideal ∑_{i=1}^{r} D m_i of D generated by the q×q minors {m_i}_{1≤i≤r} of the matrix R satisfies

∑_{i=1}^{r} D m_i = D d,

where d denotes the greatest common divisor of the q×q minors of the matrix R.

Find a matrix T ∈ D^{(p−q)×p} such that we have:

det (R^T  T^T)^T = d.

To our knowledge, such a problem was first stated by Bose and Lin in [26]. Let us give a constructive algorithm solving Problem 4.

Algorithm 3

  1. Transpose the matrix R and define the finitely presented D-module:

    N=D 1×q /(D 1×p R T ).

  2. Compute the D-module ext D 1 (N,D). We obtain a matrix QD q ×p such that:

    M/t(M)=D 1×p /(D 1×q Q).

  3. Compute a basis of the free D-module M/t(M)=D 1×p /(D 1×q Q). We obtain a full row rank matrix TD (pq)×p such that M/t(M)=π (D 1×(pq) T), where π :D 1×p M/t(M) denotes the canonical projection on M/t(M).

  4. Return the matrix U=(R T T T ) T which satisfies detU=d.

We illustrate Algorithm 3 on an example.

Example 14

We consider again the model of a flexible rod defined in (42). In Example 13, we have proved that M/t(M)=D 1×3 /(D 1×3 Q), where the matrix Q is defined by (43). Let us compute a basis of the free D-module M/t(M). The D-module M/t(M) admits the following free resolution

0D .Q 2 D 1×3 .QD 1×3 π M/t(M)0,

where Q 2 is defined by (44). Using the fact that Q 2 admits the right-inverse S 2 defined by (13), we obtain the following minimal free resolution of M/t(M)

0D 1×3 .Q ¯D 1×4 π 0M/t(M)0,

where the full row rank matrix Q ¯ is defined by Q ¯=(Q T S 2 T ) T .

Applying a constructive version of the Quillen-Suslin theorem to Q ¯, we then find that a basis of M/t(M) is given by (π 0)(T ¯), where T ¯ denotes the matrix:

T ¯=11 2δ00.

If we denote by T the matrix defined by the first three entries of T ¯, we then obtain a square matrix U=(R T T T ) T satisfying detU=d/dt.

The explicit computation of the D-module ext D 1 (N,D) gives a matrix R 1 D p×m which satisfies ker D (.R 1 )=D 1×q Q, i.e., such that we have the following exact sequence:

D 1×q .QD 1×p .R 1 D 1×m .

A direct way to solve Problem 4 exists when the matrix R 1 admits a left-inverse S 1 D m×p . Then, we have M/t(M)D 1×p R 1 =D 1×m and using the fact that rank D (M/t(M))=pq, we get m=pq. The fact that D 1×q Q is a free D-module of rank q implies that there exists a full row rank matrix R D q×p satisfying D 1×q Q=D 1×q R . Combining this result with the previous exact sequence, we obtain the split exact sequence

0D 1×q .R D 1×p .R 1 D 1×(pq) 0,

which shows that P=R 1 and T=S 1 solve Problem 4.

Let us illustrate this last remark on an example.

Example 15

Let us consider again the model of a flexible rod defined in (42) and let us compute TD 1×3 such that the determinant of the matrix (R T T T ) T equals d/dt. In Example 13, we proved that we have the exact sequence

D 1×3 .QD 1×3 .R 2 D,

where R 2 =δ 2 12δd dtδ 2 d dt T . R 2 admits a left-inverse T defined by

T=11 2δ0,

which proves that M/t(M) is a free D-module of rank 1 as we have the isomorphisms:

M/t(M)=D 1×3 /(D 1×3 Q)(D 1×3 R 2 )D.

We finally obtain that the matrix defined by

U=RT=d dtd dtδ12d dtδd dtδ 2 d dt011 2δ0

satisfies detU=d/dt, which solves Problem 4.

To finish, let us show how to handle an example given in [64] by means of Algorithms 2 and 3.

Example 16

Let us consider the commutative polynomial ring D=Q[z 1 ,z 2 ,z 3 ] and the following matrix defined in [64]:

R=z 1 z 2 2 z 3 0z 1 2 z 2 2 1z 1 2 z 3 2 +z 3 z 3 z 1 3 z 3 z 1 D 2×3 .

Let us define the D-modules M=D 1×3 /(D 1×2 R) and N=D 1×2 /(D 1×3 R T ). Computing ext D 1 (N,D), we then get

t(M)=(D 1×4 Q)/(D 1×2 R),M/t(M)=D 1×3 /(D 1×4 Q),M/t(M)(D 1×3 P),

with the notations:

Q=z 2 2 z 3 z 2 2 z 3 z 1 z 2 2 z 1 z 3 z 3 z 1 2 z 3 2 z 3 z 1 +z 1 3 z 3 z 1 2 z 3 1z 1 2 z 2 2 +100z 1 z 2 2 z 3 z 1 2 z 3 1,P=z 1 2 z 2 2 +1z 1 2 z 3 +1z 1 z 2 2 z 3 .(48)

Reducing the rows of Q with respect to the rows of R, we obtain that the only torsion element of M is defined by

m=(z 1 2 z 3 +1)y 1 +(z 1 2 z 2 2 +1)y 2 ,z 3 m=0,

where y 1 , y 2 and y 3 denote the residue classes of the standard basis of D 1×3 in M. We refer the reader to [4] for more details concerning the explicit computations.

We can easily check that P admits the left-inverse T=(z 1 2 z 3 1z 1 3 ), a fact showing that M/t(M) is a free D-module of rank 2. Then, the matrix U=(R T T T ) T defined by

U=z 1 z 2 2 z 3 0z 1 2 z 2 2 1z 1 2 z 3 2 +z 3 z 3 z 1 3 z 3 z 1 z 1 2 z 3 1z 1 3

satisfies detU=z 3 , which solves Problem 4.

Let us solve Problem 3. From the previous result, we know that ker D (.P)=D 1×4 Q is a free D-module of rank 2. In order to be able to apply a constructive version of the Quillen-Suslin theorem, we first need to compute the first syzygy module of D 1×4 Q. We obtain that ker D (.Q)=D 1×2 Q 2 , where the matrix Q 2 D 2×4 is defined by:

Q 2 =z 1 2 z 3 +1z 3 z 2 2 z 3 2 001z 3 z 1 .

Hence, we have D 1×4 QL=D 1×4 /(D 1×2 Q 2 ). Applying a constructive version of the Quillen-Suslin theorem to Q 2 , we then obtain L=π 2 (D 1×2 B), where the full row rank matrix B is defined by

B=z 1 4 0z 1 2 z 3 +100z 1 3 z 3 (z 2 2 z 3 )01,

and π 2 :D 1×2 L denotes the canonical projection onto L. Hence, we get that the full row rank matrix defined by

R =BQ=R 11 R 12 R 13 R 21 R 21 R 23 D 2×3 ,

where

R 11 =z 1 4 z 2 2 z 3 +z 1 4 z 3 2 1,R 12 =z 1 2 z 2 2 z 1 2 z 3 +1,R 13 =z 1 5 (z 2 2 z 3 ),R 21 =z 1 3 z 3 2 (z 2 2 z 3 )(z 1 2 z 3 +1),R 21 =z 1 3 z 3 3 +z 1 3 z 2 2 +z 1 z 2 2 z 3 ,R 23 =z 1 4 z 3 2 z 1 6 z 3 2 +z 1 4 z 2 2 z 3 +z 1 6 z 2 2 z 3 2 z 1 2 z 3 1,

satisfies D 1×4 Q=D 1×2 R and the two independent rows of R define a basis of D 1×4 Q. Finally, we obtain that R=R R , where the matrix R is defined by

R =z 1 z 2 2 z 3 z 1 3 z 2 2 z 3 2 +z 1 3 z 3 3 z 1 2 z 2 2 z 1 2 z 3 +1z 1 2 z 3 2 z 3 z 1

and detR =z 3 , which solves Problem 3.

We note that we can use the fact that P has a full column rank in order to also solve Problem 3. Indeed, we can use a constructive version of the Quillen-Suslin theorem to compute a basis of ker D (.P). Indeed, if we transpose the column vector P, we then obtain the row vector defined in Example 4. Hence, if we take the last two rows of U T , where U is the unimodular matrix defined in (13), we obtain that the full row rank R 2 defined by

R 2 =1+z 1 4 z 2 2 z 3 +z 1 2 z 3 z 1 2 z 2 2 1z 1 3 (z 1 2 z 2 2 +1)z 1 3 z 3 2 z 2 2 z 1 z 2 2 z 3 z 1 4 z 2 2 z 3 +1,(49)

satisfies D 1×4 Q=D 1×2 R 2 and we obtain the factorization R=R 2 R 2 , where:

R 2 =z 1 z 2 2 z 3 z 1 2 z 2 2 1z 3 z 1 ,detR 2 =z 3 .

6. Computation of (weakly) doubly coprime factorizations of rational transfer matrices

We now turn to another application of the constructive proofs of the Quillen-Suslin theorem in multidimensional systems theory, namely, the problem of finding (weakly) left-/right-/doubly coprime factorizations of rational transfer matrices over the commutative polynomial ring k[x 1 ,...,x n ] with coefficients in a field k. The general problem of the existence of (weakly) left-/right-/doubly coprime factorizations for general linear systems was recently studied and solved in [50], [52].

Let us recall a few definitions.

Definition 8 ([50])

Let D be a commutative integral domain, its quotient field

K={n/d|0d,nD},

and PK q×r a transfer matrix.

  1. A fractional representation of P is a representation of P of the form

    P=D P N P 1 =N ˜ P D ˜ P 1 ,

    where

    R=(D P N P )D q×(q+r) ,R ˜=N ˜ P D ˜ P D (q+r)×r ,(50)

    i.e., the entries of the matrices R and R ˜ belong to the ring D.

  2. A fractional representation P=D P 1 N P of P is called a weakly left-coprime factorization of P if we have:

    λK 1×q :λRD 1×(q+r) λD 1×q .

  3. A fractional representation P=N ˜ P D ˜ P 1 is called a weakly right-coprime factorization of P if we have:

    λK r :R ˜λD (q+r)×1 λD r×1 .

  4. A fractional representation P=D P 1 N P =N ˜ P D ˜ P 1 is called a weakly doubly coprime factorization of P if P=D P 1 N P is a weakly left-coprime factorization of P and P=N ˜ P D ˜ P 1 is a weakly right-coprime factorization of P.

  5. A fractional representation P = D_P^{-1} N_P of P is called a left-coprime factorization of P if the matrix R admits a right-inverse over D, i.e., if there exists S = (X^T   Y^T)^T ∈ D^{(q+r)×q} satisfying:

    R S = D_P X - N_P Y = I_q.

  6. A fractional representation P = \tilde{N}_P \tilde{D}_P^{-1} of P is called a right-coprime factorization of P if the matrix \tilde{R} admits a left-inverse over D, namely, if there exists a matrix \tilde{S} = (-\tilde{Y}   \tilde{X}) ∈ D^{r×(q+r)} satisfying:

    \tilde{S} \tilde{R} = -\tilde{Y} \tilde{N}_P + \tilde{X} \tilde{D}_P = I_r.

  7. A fractional representation P = D_P^{-1} N_P = \tilde{N}_P \tilde{D}_P^{-1} of P is called a doubly coprime factorization of P if P = D_P^{-1} N_P is a left-coprime factorization of P and P = \tilde{N}_P \tilde{D}_P^{-1} is a right-coprime factorization of P.

In the case of a polynomial ring D = k[x_1, ..., x_n], a weakly left-coprime factorization of a rational transfer matrix is also called a minor left-coprime factorization.
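As a small illustration of Definition 8 (a toy example of ours, not one taken from [50]), consider D = Q[z_1, z_2], K = Q(z_1, z_2) and the scalar transfer function P = z_1/z_2 ∈ K, with the fractional representation

D_P = z_2, \quad N_P = z_1, \quad R = (z_2   -z_1) ∈ D^{1×2}.

If λ ∈ K satisfies λ z_2 ∈ D and λ z_1 ∈ D, then λ ∈ D since z_1 and z_2 have no non-trivial common factor, so P = D_P^{-1} N_P is a weakly left-coprime factorization of P. However, D^{1×2}/(D R) ≅ (z_1, z_2) is not a stably free D-module (the ideal (z_1, z_2) is not even projective since it is not principal), so, by 1 and 3 of Theorem 9 below, P admits a weakly left-coprime factorization but no left-coprime factorization.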

The next definition will play an important role in what follows.

Definition 9 ([50])

Let the matrix R ∈ D^{q×p} have full row rank. We call the D-closure \overline{D^{1×q} R} of the D-submodule D^{1×q} R of D^{1×p} the D-module defined by:

\overline{D^{1×q} R} = {λ ∈ D^{1×p} | ∃ 0 ≠ d ∈ D: d λ ∈ D^{1×q} R}.

We have the following characterizations of the closure of a D-submodule of D 1×p .

Proposition 4 ([50])

Let RD q×p be a full row rank matrix and the finitely presented D-module M=D 1×p /(D 1×q R). We then have:

  1. \overline{D^{1×q} R} = (K^{1×q} R) ∩ D^{1×p}, where K denotes the quotient field of D.

  2. The following equalities hold:

    t(M) = ((K^{1×q} R) ∩ D^{1×p})/(D^{1×q} R), \qquad M/t(M) = D^{1×p}/((K^{1×q} R) ∩ D^{1×p}).
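As a small sanity check of Definition 9 and Proposition 4 (our own toy example, not one of [50]), take D = Q[x, y], p = 2, q = 1 and R = (x   x y) ∈ D^{1×2}. An element λ ∈ D^{1×2} belongs to \overline{D R} iff λ = c (x, x y) for some c ∈ K with c x ∈ D, i.e., iff λ = f (1, y) for some f ∈ D. Hence, for M = D^{1×2}/(D R):

\overline{D R} = D (1   y) ⊋ D R, \qquad t(M) = (D (1   y))/(D R) ≠ 0, \qquad M/t(M) = D^{1×2}/(D (1   y)) ≅ D.

In particular, the residue class of (1, y) in M is a torsion element since x (1, y) = R.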

The next theorem gives necessary and sufficient conditions for the existence of a (weakly) left-/right-/doubly coprime factorization of a transfer matrix.

Theorem 9 ([50])

Let P ∈ K^{q×r} and P = D_P^{-1} N_P = \tilde{N}_P \tilde{D}_P^{-1} be a fractional representation of P, where the matrices R and \tilde{R} are defined by (50). Then, we have:

  1. P admits a weakly left-coprime factorization iff the D-module \overline{D^{1×q} R} is free of rank q.

  2. P admits a weakly right-coprime factorization iff the D-module \overline{D^{1×r} \tilde{R}^T} is free of rank r.

  3. P admits a left-coprime factorization iff \overline{D^{1×q} R} is a free D-module of rank q and the D-module D^{1×(q+r)}/(\overline{D^{1×q} R}) is stably free of rank r.

  4. P admits a right-coprime factorization iff \overline{D^{1×r} \tilde{R}^T} is a free D-module of rank r and the D-module D^{1×(q+r)}/(\overline{D^{1×r} \tilde{R}^T}) is stably free of rank q.

  5. P admits a left-coprime factorization iff D^{1×(q+r)}/(\overline{D^{1×r} \tilde{R}^T}) is a free D-module of rank q.

  6. P admits a right-coprime factorization iff D^{1×(q+r)}/(\overline{D^{1×q} R}) is a free D-module of rank r.

Testing the freeness of modules is a very difficult issue in algebra. Hence, using Theorem 9, we deduce that it is generally difficult to check whether or not a transfer matrix P ∈ K^{q×r} admits a (weakly) left-/right-/doubly coprime factorization and, if so, to compute one. See [50], [52] for results over D = H_∞(C_+) or over the ring of structurally stable multidimensional systems.

However, if we consider the commutative polynomial ring D=k[x 1 ,...,x n ] over a field k and K=k(x 1 ,...,x n ) its quotient field, then we can use constructive versions of the Quillen-Suslin theorem in order to effectively compute (weakly) left-/right-/doubly coprime factorizations of a rational transfer matrix. We first note that using Proposition 4 and a computation of an extension module, we can explicitly compute the closure D 1×q R ¯ and then test whether the necessary and sufficient conditions given in Theorem 9 are fulfilled. The next algorithm gives a constructive way to compute the corresponding factorizations.

Algorithm 4

  1. Define the matrix R = (D_P   -N_P) ∈ D^{q×(q+r)} and the following D-module:

    M = D^{1×(q+r)}/(D^{1×q} R).

  2. Transpose the matrix R and define the finitely presented D-module:

    N = D^{1×q}/(D^{1×(q+r)} R^T).

  3. Compute the D-module ext^1_D(N, D). We obtain a matrix Q ∈ D^{q'×(q+r)} such that:

    M/t(M) = D^{1×(q+r)}/(D^{1×q'} Q).

  4. Compute a basis of the free D-module \overline{D^{1×q} R} = D^{1×q'} Q. We obtain a full row rank matrix R' ∈ D^{q×(q+r)} such that D^{1×q'} Q = D^{1×q} R'.

  5. Write R' = (D'_P   -N'_P), where D'_P ∈ D^{q×q} and N'_P ∈ D^{q×r}. If det D'_P ≠ 0, then P admits the weakly left-coprime factorization P = (D'_P)^{-1} N'_P.

Up to a transposition, weakly right-coprime factorizations can similarly be obtained.
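The transposition argument can be made explicit as follows (an elementary remark we add for the reader's convenience): if Algorithm 4, applied to the transposed transfer matrix P^T ∈ K^{r×q}, returns a weakly left-coprime factorization P^T = A^{-1} B, then

P = B^T (A^T)^{-1}, \qquad \tilde{N}_P = B^T, \quad \tilde{D}_P = A^T,

is a weakly right-coprime factorization of P, since the condition of 3 of Definition 8 for this representation is exactly the transpose of the condition of 2 of Definition 8 applied to P^T.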

Let us illustrate Algorithm 4 on an example.

Example 17

Let us consider the commutative polynomial ring D=Q[z 1 ,z 2 ,z 3 ], K=Q(z 1 ,z 2 ,z 3 ) the quotient field of D and the following rational transfer matrix:

P = \begin{pmatrix} \frac{z_1^2 z_2^2 + 1}{z_1 z_2^2 z_3} \\ \frac{z_1^2 z_3 + 1}{z_1 z_2^2 z_3} \end{pmatrix} ∈ K^{2×1}.    (51)

Let us check whether or not P admits a weakly left-coprime factorization and, if so, let us compute one. We consider the fractional representation P = D_P^{-1} N_P of P obtained by clearing the denominators of P, i.e., D_P ∈ D^{2×2} and N_P ∈ D^{2×1} are defined by:

D_P = \begin{pmatrix} z_1 z_2^2 z_3 & 0 \\ 0 & z_1 z_2^2 z_3 \end{pmatrix} ∈ D^{2×2}, \qquad N_P = \begin{pmatrix} z_1^2 z_2^2 + 1 \\ z_1^2 z_3 + 1 \end{pmatrix} ∈ D^{2×1}.

We denote by R = (D_P   -N_P) ∈ D^{2×3} and define the finitely presented D-modules:

M = D^{1×3}/(D^{1×2} R), \qquad N = D^{1×2}/(D^{1×3} R^T).

Computing ext^1_D(N, D), we then obtain

t(M) = (D^{1×4} Q)/(D^{1×2} R), \qquad M/t(M) = D^{1×3}/(D^{1×4} Q),

where the matrix Q is defined by (48) in Example 16. Using the results obtained in Example 16, we get that the full row rank matrix R_2 ∈ D^{2×3} defined by (49) satisfies D^{1×4} Q = D^{1×2} R_2. Therefore, if we denote by

D P =1+z 1 4 z 2 2 z 3 +z 1 2 z 3 z 1 2 z 2 2 1z 1 3 z 3 2 z 2 2 z 1 z 2 2 z 3 ,N P =z 1 3 (z 1 2 z 2 2 +1)z 1 4 z 2 2 z 3 1,(52)

P = (D'_P)^{-1} N'_P is then a weakly left-coprime factorization of P.

Finally, by construction, the D-module

M/t(M) = D^{1×3}/(D^{1×4} Q) = D^{1×3}/(D^{1×2} R_2)

is torsion-free, and thus, by Theorem 3, we have ext^1_D(N', D) = 0, where N' = D^{1×2}/(D^{1×3} R_2^T). Moreover, we can easily check that ext^2_D(N', D) = 0 and ext^3_D(N', D) = 0, which shows that M/t(M) is a projective, and thus free, D-module by the Quillen-Suslin theorem. Hence, by 3 of Theorem 9, we obtain that P = (D'_P)^{-1} N'_P is a left-coprime factorization of P. We find that the matrix R_2 admits the following right-inverse over D:

10z 1 2 z 3 z 1 3 01.

Therefore, we have the Bézout identity D'_P X - N'_P Y = I_2, where:

X=10z 1 2 z 3 z 1 3 ,Y=(01).
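In practice, such factorizations can also be attempted directly with the procedures WLCFactorization and LCFactorization of the QuillenSuslin package listed in the Appendix. The following Maple lines are only a sketch: the calling conventions below (the matrix R = (D_P  -N_P), the list of variables and the boolean flag for rational coefficients) are assumptions made by analogy with QSAlgorithm and may differ from the actual interface of these two procedures.

with(Involutive):
with(QuillenSuslin):
var := [z1, z2, z3]:
# Fractional representation of (51) obtained by clearing the denominators
DP := Matrix([[z1*z2^2*z3, 0], [0, z1*z2^2*z3]]):
NP := Matrix([[z1^2*z2^2+1], [z1^2*z3+1]]):
R := <DP | -NP>:                    # R = (D_P  -N_P) in D^(2 x 3)
WLCFactorization(R, var, true);     # hypothetical call: weakly left-coprime factorization
LCFactorization(R, var, true);      # hypothetical call: left-coprime factorization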

The next algorithm gives us a way to compute left-coprime factorizations of a transfer matrix. Up to a transposition, right-coprime factorizations can similarly be obtained.

Algorithm 5

  1. Define the matrix \tilde{R} = (\tilde{N}_P^T   \tilde{D}_P^T)^T ∈ D^{(q+r)×r} and define the D-module:

    \tilde{M} = D^{1×(q+r)}/(D^{1×r} \tilde{R}^T).

  2. Define the finitely presented D-module:

    \tilde{N} = D^{1×r}/(D^{1×(q+r)} \tilde{R}).

  3. Compute ext^1_D(\tilde{N}, D). We obtain a matrix \tilde{Q}^T ∈ D^{r'×(q+r)} such that:

    \tilde{M}/t(\tilde{M}) = D^{1×(q+r)}/(D^{1×r'} \tilde{Q}^T).

  4. Compute a basis of the free D-module \tilde{M}/t(\tilde{M}). We obtain a full column rank matrix

    \tilde{L}^T = (D'_P   -N'_P)^T ∈ D^{(q+r)×q},

    where D'_P ∈ D^{q×q} and N'_P ∈ D^{q×r}, such that we have the following split exact sequence:

    0 ⟵ D^{1×q} ⟵^{.\tilde{L}^T} D^{1×(q+r)} ⟵^{.\tilde{Q}^T} D^{1×r'}.

  5. Transpose the matrix \tilde{L}^T to obtain \tilde{L} = (D'_P   -N'_P) ∈ D^{q×(q+r)}. If det D'_P ≠ 0, then P = (D'_P)^{-1} N'_P is a left-coprime factorization of P.

Let us illustrate Algorithm 5 on an example.

Example 18

We consider again Example 17 and the rational transfer matrix P defined by (51). We have the fractional representation P=N ˜ P D ˜ P 1 of P, where:

\tilde{N}_P = \begin{pmatrix} z_1^2 z_2^2 + 1 \\ z_1^2 z_3 + 1 \end{pmatrix} ∈ D^{2×1}, \qquad \tilde{D}_P = z_1 z_2^2 z_3 ∈ D.

Let us define the matrix R ˜=(N ˜ P T D ˜ P T ) T and the D-modules:

M ˜=D 1×(q+r) /(D 1×r R ˜ T ),N ˜=D 1×r /(D 1×(q+r) R ˜).

The row vector \tilde{R}^T is exactly the one defined in Example 4. Hence, using the results obtained in Example 4, we obtain that the unimodular matrix U defined by (13) satisfies \tilde{R}^T U = (1   0   0). Selecting the last two columns of U and transposing the corresponding matrix, we then find again the matrix R_2 defined by (49). Therefore, using Example 17, we obtain that P = (D'_P)^{-1} N'_P is a left-coprime factorization of P, where the matrices D'_P and N'_P are defined by (52).

7. Decomposition of multidimensional linear systems

It was recently shown in [9] that the computation of bases of free modules plays a central role in the decomposition problem of multidimensional linear systems. We shall recall this problem as well as the main results obtained in [9]. Let us first recall a few definitions and notations.

We shall denote by end_D(M) the non-commutative ring of D-endomorphisms of the D-module M, i.e., the ring formed by the D-morphisms (namely, the D-linear maps) from M to M. Moreover, we recall that if f is a D-morphism from a D-module M to a D-module N, then coim f is the D-module defined by coim f = M/ker f, where ker f = {m ∈ M | f(m) = 0} is the kernel of f.

Let M be a finitely presented D-module, i.e., M is of the form M = D^{1×p}/(D^{1×q} R), where R ∈ D^{q×p}, and let us denote by π : D^{1×p} → M the canonical projection. We can easily prove that a D-endomorphism f of M is defined by f(m) = π(λ P), where P ∈ D^{p×p} is a matrix such that there exists Q ∈ D^{q×q} satisfying R P = Q R, and λ is any element of D^{1×p} satisfying m = π(λ). See [9] for more details and for constructive algorithms which compute the pairs of matrices (P, Q) satisfying R P = Q R. These algorithms have been implemented in the package Morphisms ([10]) of the library OreModules ([4]).
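For completeness, let us add the short verification that f is well defined by such a pair (P, Q). If π(λ) = π(λ'), then λ - λ' = μ R for some μ ∈ D^{1×q}, and thus

λ P - λ' P = μ R P = μ Q R ∈ D^{1×q} R,

so that π(λ P) = π(λ' P), i.e., f(m) = π(λ P) does not depend on the chosen representative λ of m.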

We have the following results.

Theorem 10

([9]) Let R ∈ D^{q×p}, M = D^{1×p}/(D^{1×q} R) and f ∈ end_D(M) defined by P ∈ D^{p×p} and Q ∈ D^{q×q}, i.e., R P = Q R. If the D-modules

ker_D(.P),  coim_D(.P),  ker_D(.Q),  coim_D(.Q)

are free of rank m, p - m, l, q - l, then there exist matrices U_1 ∈ D^{m×p}, U_2 ∈ D^{(p-m)×p}, V_1 ∈ D^{l×q} and V_2 ∈ D^{(q-l)×q} such that

U = (U_1^T   U_2^T)^T ∈ GL_p(D), \qquad V = (V_1^T   V_2^T)^T ∈ GL_q(D),

and

\bar{R} = V R U^{-1} = \begin{pmatrix} V_1 R W_1 & 0 \\ V_2 R W_1 & V_2 R W_2 \end{pmatrix} ∈ D^{q×p},

where U^{-1} = (W_1   W_2), W_1 ∈ D^{p×m} and W_2 ∈ D^{p×(p-m)}.

In particular, the full row rank matrix U_1 (resp., U_2, V_1, V_2) defines a basis of the free D-module ker_D(.P) (resp., coim_D(.P), ker_D(.Q), coim_D(.Q)), i.e., we have:

ker_D(.P) = D^{1×m} U_1, \quad coim_D(.P) = D^{1×(p-m)} U_2, \quad ker_D(.Q) = D^{1×l} V_1, \quad coim_D(.Q) = D^{1×(q-l)} V_2.

An important point in Theorem 10 is the computation of bases of the free D-modules ker D (.P), coim D (.P), ker D (.Q) and coim D (.Q), which can be solved by means of constructive versions of the Quillen-Suslin theorem and their implementations in computer algebra systems. In order to do that, we use the package QuillenSuslin described in the Appendix.
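Let us also record why stacking these bases yields unimodular matrices (a short argument added for completeness). Since coim_D(.P) = D^{1×p}/ker_D(.P) is free, the exact sequence

0 ⟶ ker_D(.P) ⟶ D^{1×p} ⟶ coim_D(.P) ⟶ 0

splits, so D^{1×p} is the direct sum of ker_D(.P) and of a free submodule whose rows U_2 lift a basis of coim_D(.P); stacking a basis U_1 of ker_D(.P) on U_2 therefore gives U = (U_1^T U_2^T)^T ∈ GL_p(D), and similarly for V.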

Let us illustrate Theorem 10 by means of an explicit example.

Example 19

Let us consider the system of partial differential equations defined by

σ ∂_t \vec{A} + \frac{1}{μ} \vec{∇} ∧ (\vec{∇} ∧ \vec{A}) - σ \vec{∇} V = 0,    (53)

where σ and μ are two constants. The previous system corresponds to the equations satisfied by the electromagnetic quadri-potential (\vec{A}, V) when it is assumed that the term ∂\vec{D}/∂t can be neglected in the Maxwell equations. It seems that Maxwell was led to introduce the term ∂\vec{D}/∂t in his famous equations for purely mathematical reasons. See [8] for more details.

Let us consider the ring D = Q[∂_t, ∂_1, ∂_2, ∂_3] of differential operators in ∂_t = ∂/∂t and ∂_i = ∂/∂x_i with rational coefficients, the system matrix of (53) defined by

R = \begin{pmatrix} σ ∂_t - \frac{1}{μ}(∂_2^2 + ∂_3^2) & \frac{1}{μ} ∂_1 ∂_2 & \frac{1}{μ} ∂_1 ∂_3 & -σ ∂_1 \\ \frac{1}{μ} ∂_1 ∂_2 & σ ∂_t - \frac{1}{μ}(∂_1^2 + ∂_3^2) & \frac{1}{μ} ∂_2 ∂_3 & -σ ∂_2 \\ \frac{1}{μ} ∂_1 ∂_3 & \frac{1}{μ} ∂_2 ∂_3 & σ ∂_t - \frac{1}{μ}(∂_1^2 + ∂_2^2) & -σ ∂_3 \end{pmatrix}

and the finitely presented D-module M=D 1×4 /(D 1×3 R).

The matrices P and Q defined by

P=00000σμ t 0σμ 2 00σμ t σμ 3 0 t 2 t 3 ( 2 2 + 3 2 )D 4×4 ,

Q=000 1 2 σμ t 2 2 2 3 1 3 2 3 σμ t 3 2 D 3×3 ,

satisfy the relation RP=QR, and thus, define a D-endomorphism f of M. Moreover, we can check that ker D (.P), coim D (.P), ker D (.Q) and coim D (.Q) are free D-modules of rank 2, 2, 1 and 2. Hence, computing bases of these free D-modules by means of a constructive version of the Quillen-Suslin theorem, we obtain:

U 1 =10000 2 3 σμ,U 2 =1 σμ01000010,V 1 =100,V 2 =010001.

Defining U = (U_1^T   U_2^T)^T ∈ GL_4(D) and V = (V_1^T   V_2^T)^T ∈ GL_3(D), we get that \bar{R} = V R U^{-1} is the block-triangular matrix defined by:

R ¯=σ t 1 μ( 2 2 + 3 2 )1 μ 1 001 μ 1 2 1 μ 2 σ(σμ t ( 1 2 + 2 2 + 3 2 ))01 μ 1 3 1 μ 3 0σ(σμ t ( 1 2 + 2 2 + 3 2 )).

Now, we recall that a projector f end D (M) is a D-endomorphism f of M satisfying f 2 =f. We can now state another important result of [9] on the decomposition of D-modules for which the Quillen-Suslin theorem plays a central role.
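Before stating it, let us recall the elementary reason why an idempotent matrix P yields a direct sum decomposition (a short argument added for completeness), which explains why a block-diagonal, and not merely block-triangular, form can be expected. For every λ ∈ D^{1×p}, we have

λ = λ (I_p - P) + λ P, \qquad λ (I_p - P) ∈ ker_D(.P), \qquad λ P ∈ im_D(.P),

since (I_p - P) P = P - P^2 = 0, and the sum is direct because μ ∈ ker_D(.P) ∩ im_D(.P) gives μ = ν P = ν P^2 = μ P = 0. Hence D^{1×p} = ker_D(.P) ⊕ im_D(.P); both summands are then projective, and thus free by the Quillen-Suslin theorem when D is a commutative polynomial ring over a field.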

Theorem 11

([9]) Let R ∈ D^{q×p}, M = D^{1×p}/(D^{1×q} R) and f ∈ end_D(M) be a projector defined by two idempotents P ∈ D^{p×p} and Q ∈ D^{q×q}, namely, they satisfy R P = Q R, P^2 = P and Q^2 = Q. Then, there exist four matrices U_1 ∈ D^{m×p}, U_2 ∈ D^{(p-m)×p}, V_1 ∈ D^{l×q} and V_2 ∈ D^{(q-l)×q} such that

U = (U_1^T   U_2^T)^T ∈ GL_p(D), \qquad V = (V_1^T   V_2^T)^T ∈ GL_q(D),

and

\bar{R} = V R U^{-1} = \begin{pmatrix} V_1 R W_1 & 0 \\ 0 & V_2 R W_2 \end{pmatrix} ∈ D^{q×p},

where U^{-1} = (W_1   W_2), W_1 ∈ D^{p×m} and W_2 ∈ D^{p×(p-m)}.

In particular, the full row rank matrix U_1 (resp., U_2, V_1, V_2) defines a basis of the free D-module ker_D(.P) (resp., im_D(.P) = ker_D(.(I_p - P)), ker_D(.Q), im_D(.Q) = ker_D(.(I_q - Q))) of rank respectively m, p - m, l, q - l. In other words, we have:

ker_D(.P) = D^{1×m} U_1, \quad im_D(.P) = D^{1×(p-m)} U_2, \quad ker_D(.Q) = D^{1×l} V_1, \quad im_D(.Q) = D^{1×(q-l)} V_2.

Let us illustrate Theorem 11 by means of an example coming from control theory.

Example 20

Let us consider the differential time-delay system describing the movement of a vibrating string with an interior mass studied in [33], namely,

\phi_1(t) + \psi_1(t) - \phi_2(t) - \psi_2(t) = 0,
\dot{\phi}_1(t) + \dot{\psi}_1(t) + η_1 \phi_1(t) - η_1 \psi_1(t) - η_2 \phi_2(t) + η_2 \psi_2(t) = 0,
\phi_1(t - 2 h_1) + \psi_1(t) - u(t - h_1) = 0,
\phi_2(t) + \psi_2(t - 2 h_2) - v(t - h_2) = 0,    (54)

where h_1 and h_2 ∈ R_+ are such that Q h_1 + Q h_2 is a two-dimensional Q-vector space (i.e., there exists no non-trivial relation of the form m h_1 + n h_2 = 0, where m, n ∈ Z), and η_1 and η_2 are two non-zero constant parameters of the system.

Let us consider the ring of differential time-delay operators D = Q(η_1, η_2)[d/dt, σ_1, σ_2], where (dy/dt)(t) = \dot{y}(t) and (σ_i y)(t) = y(t - h_i), for i = 1, 2. The condition on h_1 and h_2 implies that the two time-delay operators σ_1 and σ_2 are incommensurable, i.e., define two independent variables. Hence, D is a commutative polynomial ring. Let us denote by R the system matrix of (54), namely,

R = \begin{pmatrix} 1 & 1 & -1 & -1 & 0 & 0 \\ \frac{d}{dt} + η_1 & \frac{d}{dt} - η_1 & -η_2 & η_2 & 0 & 0 \\ σ_1^2 & 1 & 0 & 0 & -σ_1 & 0 \\ 0 & 0 & 1 & σ_2^2 & 0 & -σ_2 \end{pmatrix} ∈ D^{4×6},

and the finitely presented D-module M=D 1×6 /(D 1×4 R).

Computing projectors of end D (M), we obtain a projector f defined by the following two idempotent matrices:

P=100000σ 1 2 000σ 1 0000σ 2 2 0σ 2 000100000010000001,Q=101101d dt+η 1 η 2 00000000.

Moreover, we can check that ker_D(.P), im_D(.P), ker_D(.Q) and im_D(.Q) are free D-modules of rank 2, 4, 2 and 2. Computing bases by means of a constructive version of the Quillen-Suslin theorem, we then get:

ker D (.P)=D 1×2 U 1 ,U 1 =σ 1 2 100σ 1 0001σ 2 2 0σ 2 , im D (.P)=D 1×4 U 2 ,U 2 =100000000100000010000001,ker D (.Q)=D 1×2 V 1 ,V 1 =00100001, im D (.Q)=D 1×2 V 2 ,V 2 =101101d dtη 1 η 2 .

Forming the matrices U = (U_1^T   U_2^T)^T ∈ GL_6(D) and V = (V_1^T   V_2^T)^T ∈ GL_4(D), we obtain that R is then equivalent to the block-diagonal matrix \bar{R} = V R U^{-1}:

100000010000001σ 1 2 σ 2 2 1σ 1 σ 2 00σ 1 2 d dtη 1 d dt+η 1 η 2 (σ 2 2 +1)σ 1 d dt+η 1 η 2 σ 2 .

Now, considering the second diagonal block, namely,

S=1σ 1 2 σ 2 2 1σ 1 σ 2 σ 1 2 d dtη 1 d dt+η 1 η 2 (σ 2 2 +1)σ 1 d dt+η 1 η 2 σ 2 ,

and the D-module L=D 1×4 /(D 1×2 S). Using an algorithm developed in [9], we obtain that a projector g end D (L) is defined by the two idempotent matrices:

P =1000a0b000100001,Q =1 2σ 2 2 +11 η 2 (σ 2 2 1)η 2 (σ 2 2 +1)σ 2 2 +1,

with the notations:

a=1 2η 2 σ 1 2 d dt(η 1 +η 2 )d dt+(η 2 η 1 ),b=σ 1 2η 2 d dt(η 1 +η 2 ).

We can check that the D-modules ker_D(.P'), im_D(.P') = ker_D(.(I_4 - P')), ker_D(.Q') and im_D(.Q') = ker_D(.(I_2 - Q')) are free and, using a constructive version of the Quillen-Suslin theorem, we obtain that ker_D(.P') = D U'_1, im_D(.P') = D^{1×3} U'_2, ker_D(.Q') = D V'_1 and im_D(.Q') = D V'_2, where:

U 1 =σ 1 2 d dtη 1 η 2 d dt+η 1 η 2 2η 2 σ 1 d dtη 1 η 2 0,U 2 =1000σ 1 010σ 1 2 σ 2 (dη 1 η 2 )σ 2 (d+η 1 η 2 )0σ 1 σ 2 (dη 1 η 2 )2η 2 ,V 1 =(η 2 1),V 2 =(η 2 (σ 2 2 +1)σ 2 2 1).

Defining U' = (U_1'^T   U_2'^T)^T ∈ GL_4(D) and V' = (V_1'^T   V_2'^T)^T ∈ GL_2(D), we get:

\bar{S} = V' S U'^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{d}{dt} + η_1 + η_2 & σ_1 \left(\frac{d}{dt} + η_2 - η_1\right) & -σ_2 \end{pmatrix}.

If we denote by diag(A, B) the block-diagonal matrix formed by A and B and define the new matrices U'' = diag(I_2, U') ∈ GL_6(D) and V'' = diag(I_2, V') ∈ GL_4(D), then we get:

\bar{\bar{R}} = (V'' V) R (U'' U)^{-1} = diag(I_2, \bar{S}).

The last result proves that the system defined by (54) with 6 unknowns and 4 equations is in fact equivalent to the following simple equation:

\dot{z}_1(t) + (η_1 + η_2) z_1(t) + \dot{z}_2(t - h_1) + (η_2 - η_1) z_2(t - h_1) - z_3(t - h_2) = 0.    (55)

Using the results summed up in Figure 1, the D-module defined by

M' = D^{1×3}/\left(D \left(\frac{d}{dt} + η_1 + η_2 \quad σ_1 \left(\frac{d}{dt} + η_2 - η_1\right) \quad -σ_2\right)\right) ≅ M,

is reflexive but not projective, i.e., not free, as we have

J = ann_D(ext^3_D(T(M'), D)) = \left(σ_1, \; σ_2, \; \frac{d}{dt} + η_1 + η_2\right),

and dim_C V(J) = 0. As we have σ_1, σ_2 ∈ J, we obtain that the Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_1^{-1}]-module Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_1^{-1}] ⊗_D M is free, i.e., (55) is σ_1-free ([6], [32]). Computing an injective parametrization of (55), we obtain

z 1 =σ 1 σ 2 y 1 +σ 1 d dt+η 2 η 1 y 2 ,z 2 =σ 2 y 1 d dt+η 1 +η 2 y 2 ,z 3 =2η 1 y 1 ,(56)

and a basis of the Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_1^{-1}]-module Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_1^{-1}] ⊗_D M is then defined by:

y 1 =1 2η 1 σ 1 1 z 3 ,y 2 =1 2η 1 (σ 1 1 z 1 +z 2 ).

Using (56) and the transformation (\phi_1, \psi_1, \phi_2, \psi_2, u, v)^T = (U'' U)^{-1} (z_1, z_2, z_3)^T, we get an injective parametrization of (54) if we also use the advance operator σ_1^{-1}.

Finally, the Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_2^{-1}]-module Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_2^{-1}] ⊗_D M is free and, from (55), we obtain that

z_3(t) = \dot{z}_1(t + h_2) + (η_1 + η_2) z_1(t + h_2) + \dot{z}_2(t - h_1 + h_2) + (η_2 - η_1) z_2(t - h_1 + h_2),

showing that the Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_2^{-1}]-module Q(η_1, η_2)[d/dt, σ_1, σ_2, σ_2^{-1}] ⊗_D M admits the basis {z_1, z_2}. Using the transformation defined by (U'' U)^{-1}, we get an injective parametrization of (54) if we also use the advance operator σ_2^{-1}.

Generalizations of Theorems 10 and 11 hold for some classes of non-commutative polynomial rings of functional operators. See [9] for more details. However, we need to be able to compute bases of free modules over the corresponding rings. A general algorithm has recently been developed in [53], [55] for the ring of differential operators with polynomial or rational coefficients (the so-called Weyl algebras). See [54] for an implementation of this algorithm and a library of examples which illustrates it.

8. Conclusion

In this paper, we have shown new applications of constructive versions of the Quillen-Suslin theorem to mathematical systems theory. In particular, we explained that the construction of bases of a free module over a commutative polynomial ring D gives us a way to obtain flat outputs of the corresponding flat multidimensional linear system as well as injective parametrizations of all of its solutions over a D-module F. We have also shown that a flat multidimensional system is algebraically equivalent to the 1-D controllable linear systems obtained by setting all but one functional operator to particular values in the system matrix. This last result gives an answer to a natural question arising in the study of flat multidimensional linear systems and particularly in the study of differential time-delay systems. Moreover, we gave constructive algorithms for two well-known problems stated by Lin and Bose in the literature of multidimensional systems. These problems are generalizations of Serre's conjecture. We also showed that the computation of (weakly) left-/right-coprime factorizations of rational transfer matrices can constructively be solved by means of the Quillen-Suslin theorem. The need for the computation of bases of free D-modules recently appeared as an important issue in the study of the decomposition problem of multidimensional linear systems. Finally, we have demonstrated the different algorithms by means of the recent implementation of the Quillen-Suslin theorem in the package QuillenSuslin. To our knowledge, this is the first implementation of the Quillen-Suslin theorem in a computer algebra system which is freely available and dedicated to applications of the Quillen-Suslin theorem and, in particular, to mathematical systems theory and control theory.

New applications of the Quillen-Suslin theorem and of the package QuillenSuslin will be studied in the future (e.g., algebraic geometry, signal processing). Moreover, an interesting but difficult problem is to constructively recognize when a finitely presented D = A[x]-module M = D^{1×p}/(D^{1×q} R), where R ∈ D^{q×p} and A is a commutative ring, is extended, namely, when there exists S ∈ A^{q'×p'} such that M ≅ D ⊗_A P, where P = A^{1×p'}/(A^{1×q'} S). See [57] for more details. It is well known that the Quillen-Suslin theorem is a particular case of this problem when M is a projective D-module ([24], [25], [56], [57]). If we can effectively solve this problem for particular classes of D-modules, then, for every D-module F, we obtain ker_F(R.) ≅ ker_F(S.), which shows that the integration of the system ker_F(R.) is algebraically equivalent to the integration of the system ker_F(S.), which involves one functional operator fewer. Such a result may simplify the explicit integration of these classes of functional systems. Finally, another interesting problem is the computation of a minimal set of generators of a finitely presented D = A[x]-module M = D^{1×p}/(D^{1×q} R), where R ∈ D^{q×p}. The results recently obtained in [9], [10] made it possible to explicitly answer these last two questions on particular examples coming from mathematical physics and control theory. However, the general case seems to be far from being solved.

Finally, more heuristic methods need to be developed and implemented in QuillenSuslin in order to avoid, as much as possible, the use of the general algorithm for solving Problem 2. Other QS-algorithms also need to be implemented in QuillenSuslin, particularly the one recently developed in [29], [61].

9. Appendix: QuillenSuslin, a package for computing bases of free modules over commutative polynomial rings

9.1. Description of the package QuillenSuslin

The package QuillenSuslin is an implementation of a constructive version of the Quillen-Suslin Theorem. The main idea of the algorithm was inspired by the article of Logar and Sturmfels [27]. Nevertheless, many important changes were introduced. We have roughly described the implemented algorithm in Section 3.4.

The general algorithm proceeds by induction on the number n of independent variables x_i in the polynomial ring D = k[x_1, ..., x_n], and each inductive step, which reduces the problem by one independent variable, consists of the following three main parts:

  1. Finding a normalized component in a polynomial vector by means of a change of coordinates (NormalizationStep).

  2. Computing a finite number of local solutions (local loop) using Horrocks´ theorem (Horrocks).

  3. Patching local solutions of Problem 2 together to get a global one (Patch).

This general method can be computationally quite involved. The package consists of procedures which complete a unimodular polynomial row (i.e., a row admitting a right-inverse) to a square invertible matrix over a given commutative polynomial ring with coefficients in Q or Z. The implementation has been improved by many heuristic methods which are used whenever possible. They allow us to avoid the inductive step and lead to simpler outputs (smaller coefficients and lower degrees).
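The simplest of these heuristics can be made explicit (a standard observation which we state here for the reader's convenience): if two components of the row R = (r_1  r_2  ...  r_p), say r_1 and r_2, already generate D, i.e., a r_1 + b r_2 = 1 for some a, b ∈ D, then no induction is needed. Indeed, the matrix

\begin{pmatrix} a & -r_2 \\ b & r_1 \end{pmatrix} ∈ GL_2(D), \qquad (r_1   r_2) \begin{pmatrix} a & -r_2 \\ b & r_1 \end{pmatrix} = (1   0),

has determinant a r_1 + b r_2 = 1 and, after embedding it into an identity matrix of size p, the remaining components r_3, ..., r_p are removed by elementary column operations using the leading 1. This is the situation exploited, for instance, in the examples of Section 9.2 below.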

QuillenSuslin uses the library Involutive ([3]) for computing Janet bases over commutative polynomial rings.

with(Involutive):

with(QuillenSuslin);

[BasisOfCokernelModule,Cofactors,CompleteMatrix,DenomOf,Heuristic,Horrocks,InjectiveParametrization,InvertibleIn,IsInS,IsMonic,IsParkNormalized,IsRegular,IsUnimod,LC,LCFactorization,LM,Laurent2Pol,LaurentNormalization,LinBose1,LinBose2,LowestDegree,MaxMinors,MaximalFF,MaximalQQ,MaximalZZ,NormalizationStep,OneLocalSol,OneStepEY,OneStepQS,ParkAlgorithm,ParkMatrixNormalization,Patch,QSAlgorithm,ReduceBasisDegree,ReduceDeg,RightInverse,RightInverseFast,SHeuristic,SetLastVariableA,SuslinLemma,WLCFactorization,WRCFactorization]

9.1.1. The main functions of the package QuillenSuslin

QSAlgorithm Compute a unimodular matrix U which transforms a row vector admitting a right-inverse into a matrix of the form (I0)
CompleteMatrix Complete a matrix admitting a right-inverse to a unimodular matrix
Heuristic Test whether or not a heuristic method can be applied for the given row vector
BasisOfCokernelModule Compute a basis of a free module finitely presented by the given matrix

9.1.2. Important functions of the package QuillenSuslin

Horrocks Implementation of Horrocks' theorem which computes a solution of Problem 1 over a given local ring
IsMonic Test whether or not a polynomial row vector has a monic component
IsRegular Test whether or not a polynomial row vector forms a regular sequence
IsUnimod Test whether or not a matrix admits a right-inverse
MaximalFF Find a maximal ideal over a given one in a polynomial ring with coefficient in a finite field
MaximalQQ Find a maximal ideal over a given one in a polynomial ring with rational coefficients
MaximalZZ Find a maximal ideal over a given one in a polynomial ring with integer coefficients
NormalizationStep Compute an invertible transformation and a change of variables such that the last component of the transformed row becomes monic in the last new variable
OneLocalSol Compute a matrix which is unimodular over some localization of the polynomial ring and transforms the given matrix to (I0)
OneStepEY, OneStepQS One inductive step of the general algorithm: return a unimodular matrix which transforms the given matrix into a matrix where the last variable equals 0
Patch Patching procedure: patch local solutions together
SuslinLemma Implementation of Suslin's lemma which computes a polynomial h in the ideal generated by the polynomials p and q such that deg(h) = deg(p) - 1 and its leading coefficient is a coefficient of the polynomial q

9.1.3. Low level functions of the package QuillenSuslin

Cofactors Compute the cofactors of a (p-1) × p matrix
DenomOf Compute the common denominator of entries of a rational matrix
LM Return the leading monomial of a polynomial with respect to the given variable
LC Return the leading coefficient of a polynomial with respect to the given variable
MaxMinors Return the maximal minors of a given matrix
ReduceDeg Reduce the degrees of the components of a polynomial row vector with respect to a given variable
RightInverse, RightInverseFast Compute a right-inverse of a row vector
ReduceBasisDegree Reduce the degrees of the elements of a basis of a free module over a commutative polynomial ring

9.1.4. Functions of QuillenSuslin for mathematical systems theory

InjectiveParametrization Compute an injective parametrization of a flat multidimensional linear system
LCFactorization Compute a left-coprime factorization of a rational transfer matrix when it exists
LinBose1 Compute a solution of Problem 3 when it exists
LinBose2 Compute a solution of Problem 4 when it exists
RCFactorization Compute a right-coprime factorization of a rational transfer matrix when it exists
SetLastVariableA Compute a unimodular matrix which transforms the given matrix into a matrix where the last variable is set to a given constant A
WLCFactorization Compute a weakly left-coprime factorization of a rational transfer matrix when it exists
WRCFactorization Compute a weakly right-coprime factorization of a rational transfer matrix when it exists

9.1.5. Functions of QuillenSuslin for Laurent polynomial rings

IsParkNormalized Test whether or not a Laurent polynomial is normalized, i.e., whether or not all its coefficients are Laurent monomials
Laurent2Pol Compute a transformation which maps a row vector over a Laurent polynomial ring into a row vector over a polynomial ring
LaurentNormalization Return a change of variables which normalizes a Laurent polynomial
LowestDegree Return the lowest degree of a Laurent polynomial with respect to the given variable
ParkAlgorithm Return a unimodular matrix over the Laurent polynomial ring which transforms the given matrix into a matrix of the form (I0)

9.1.6. Functions of QuillenSuslin for localizations

InvertibleIn Find an element in the intersection of an ideal and a multiplicative closed subset of the polynomial ring
IsInS Test whether or not a polynomial belongs to a given multiplicative subset of the polynomial ring
SHeuristic Test whether or not a heuristic method can be used over a localization of the polynomial ring

To our knowledge, the QuillenSuslin package is the only package dedicated to the implementation of the Quillen-Suslin theorem (see [12] for a partial one) and its applications to mathematical physics, control theory and signal processing. An OreModules version of QuillenSuslin will soon be available on the OreModules web site [4] which will extend [12]. Applications of the Quillen-Suslin theorem to algebraic geometry will be studied in the future.

9.2. Classical examples

We first want to illustrate the QuillenSuslin package on some classical examples appearing in the literature and, in particular, in [61], [19], [23], [38].

9.2.1. Example taken from [19]

We consider the row vector R over the polynomial ring D=Z[x] given in [19].

In the QuillenSuslin package, all the computations are performed over a commutative polynomial ring with rational coefficients if the last parameter is set to true and with integer coefficients if the last parameter is set to false.

We first declare the independent variable x of the polynomial ring by setting

var:=[x];

var:=[x]

and then the row vector R:

R:=[13, x^2-1, 2*x-5];

R := [13, x^2 - 1, 2 x - 5]

Let us check whether or not R admits a right-inverse over the ring D.

RightInverse(R, var, false);

[55 - 36 x + 6 x^2, -6, 144 - 36 x]

Applying the QSAlgorithm procedure to the row vector R, we then obtain:

U:=QSAlgorithm(R, var, false);

U:=[5536x+6x 2 ,64818532x+4175x 2 900x 3 +72x 4 ,(5536x+6x 2 )(2x5)][6,707+468x72x 2 ,30+12x][14436x,72(x4)(5939x+6x 2 ),721468x+72x 2 ]

The matrix U is unimodular over D and R U = (1  0  0) as we have:

Determinant(U);

1

simplify(Matrix(R).U);

100

We note that the QSAlgorithm procedure uses a heuristic method as the first two components of the right-inverse of R generate the ring D. Hence, the general algorithm can be avoided in this example:

Heuristic(R, var, false);

[5536x+6x 2 ,64818532x+4175x 2 900x 3 +72x 4 ,(5536x+6x 2 )(2x5)][6,707+468x72x 2 ,30+12x][14436x,72(x4)(5939x+6x 2 ),721468x+72x 2 ]
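The reason the heuristic applies is the remark made above: the first two components of the right-inverse of R already generate D = Z[x]. A small verification which we add here:

(55 - 36 x + 6 x^2) · 1 + (-6) · (9 - 6 x + x^2) = 1.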

We can check that R is the first row of the inverse U^{-1} of U:

U_inv:=CompleteMatrix(R,var, false);

U_inv:=13x 2 12x565536x+6x 2 0144+36x1188x360x 2 +36x 3 12961

The residue classes of the last two rows of the matrix U^{-1} define a basis of the finitely presented D-module M = D^{1×3}/(D R).

BasisOfCokernelModule(R, var, false);

65536x+6x 2 0144+36x1188x360x 2 +36x 3 12961

We can reduce the degree of the components of the rows defining the basis:

BasisOfCokernelModule(R, var, false, reduced);

0246x1728324+12x

The injective parametrization of the system defined by R is then defined by:

InjectiveParametrization(Matrix(R), var, false);

64818532x+4175x 2 900x 3 +72x 4 (5536x+6x 2 )(2x5)707+468x72x 2 30+12x72(x4)(5939x+6x 2 )721468x+72x 2

9.2.2. Example taken from [23]

We consider the row vector R with entries in the ring D = Q[x, y] defined by:

var:=[x,y];

var:=[x,y]

R := [x^2*y+1, x+y-2, 2*x*y];

R:=[x 2 y+1,x+y2,2xy]

We can check that the entries of R generate the ring D as we have:

IsUnimod(R, var);

true

Therefore, the row vector R admits a right-inverse over D and thus defines a projective D-module M = D^{1×3}/(D R), i.e., a free one by the Quillen-Suslin theorem.

As the first and the last components of R generate the ring D, we know that we can use a heuristic method for computing a basis of the D-module M. This can be checked as follows; note that we are working over the field Q and thus need to set the last parameter to true in the procedures:

U:=Heuristic(R, var, true);

U:=12yx2xy010x 2x(x+y2) 2x 2 y+1

We can check that the entries of the inverse U_inv of the matrix U belong to D, i.e., U ∈ GL_3(D), and that its first row is R:

U_inv:=CompleteMatrix(R, var, true);

U_inv:=x 2 y+1x+y22xy010x 201

The residue classes of the last two rows of U inv in M form a basis of M. This result can directly be obtained as follows:

BasisOfCokernelModule(Matrix(R), var, true);

010x 201

The injective parametrization of the system defined by R is given by the last two columns of U, a fact that can directly be obtained by:

InjectiveParametrization(Matrix(R), var, true);

2yx2xy10x(x+y2) 2x 2 y+1

9.2.3. Example taken from [61]

We consider the row vector R with entries in the polynomial ring D=Q[x,y] ([61]):

var:=[x,y]:

R:=[x-4*y+2,x*y+x,x+4*y^2-2*y+1];

R := [x - 4 y + 2, x y + x, x + 4 y^2 - 2 y + 1]

We can check that the entries of R generate the ring D as we have:

IsUnimod(R, var, true);

true

Hence, R admits a right-inverse over D defined by:

RightInverse(R, var, true);

[y, -1, 1]

Hence, the D-module M=D 1×3 /(DR) is projective, i.e., free by the Quillen-Suslin theorem. Let us compute a basis of M. We can first try to check if a basis can be obtained by means of a heuristic method implemented in QuillenSuslin:

U:=Heuristic(R, var, true);

U:=y2y+4y 2 xy+1y(x+4y 2 2y+1)1x4y+2x+4y 2 2y+11x+4y2x4y 2 +2y

We can then check that U solves Problem 2 as we have:

Determinant(U);

1

simplify(Matrix(R).U);

100

As the command QSAlgorithm first tries the heuristic methods which have been implemented before using the general algorithm, its output is the same as the one obtained by the command Heuristic:

QSAlgorithm(R, var, true);

y2y+4y 2 xy+1y(x+4y 2 2y+1)1x4y+2x+4y 2 2y+11x+4y2x4y 2 +2y

We can check that the first row of the inverse U inv of U is exactly the row vector R:

U_inv:=CompleteMatrix(R, var, true);

U_inv:=x4y+2xy+xx+4y 2 2y+11y0011

The residue classes of the last two rows of U inv in M form a basis of M. This result can directly be obtained by doing:

BasisOfCokernelModule(Matrix(R), var, true);

1y0011

Finally, the injective parametrization of the system defined by R is given by the last two columns of the matrix U, namely:

InjectiveParametrization(Matrix(R), var, false);

2y+4y 2 xy+1y(x+4y 2 2y+1)x4y+2x+4y 2 2y+1x+4y2x4y 2 +2y

9.2.4. Example taken from [38]

We now consider the row vector R over a polynomial ring D=Z[x,y,z] defined in [38]. Let us first introduce the independent variables x, y and z:

var:=[x,y,z];

var:=[x,y,z]

We then define the 4 components of the row vector R:

f1 := 1-x*y-2*z-4*x*z-x^2*z-2*x*y*z+2*x^2*y^2*z-2*x*z^2-2*x*z^2-2*x^2*z^2+2*x*z^2+2*x^2*y*z^2:
f2 := 2+4*x+x^2+2*x*y-2*x^2*y^2+2*x*z+2*x^2*z-2*x^2*y*z:
f3 := 1+2*x+x*y-x^2*y^2+x*z+x^2*z-x^2*y*z:
f4 := 2+x+y-x*y^2+z-x*y*z:

The row vector R is then defined by:

R:= [f1, f2, f3, f4];

R:=[1xy2z4xzx 2 z2xyz+2x 2 y 2 z2xz 2 2x 2 z 2 +2x 2 yz 2 ,2+4x+x 2 +2xy2x 2 y 2 +2xz+2x 2 z2x 2 yz,1+2x+xyx 2 y 2 +xz+x 2 zx 2 yz,2+x+yxy 2 +zxyz]

Let us test whether or not the entries of R generate D:

IsUnimod(R, var, false);

true

Hence, the row vector R admits a right-inverse over D and the D-module M=D 1×4 /(DR) is projective, i.e., free by the Quillen-Suslin theorem. Let us compute a basis of the D-module M. We can first check that the second and the third components of R generate the whole ring D, so a heuristic method can be used in this example. This result can directly be checked by doing:

U:=Heuristic(R, var, false);

U:=[0,1,0,0][4+3z+4y2xy 2 xyz+2xz+z 2 +3yz2xy 2 zxyz 2 +xz 2 +2y 2 xy 3 ,(43z4y+2xy 2 +xyz2xzz 2 3yz+2xy 2 z+xyz 2 xz 2 2y 2 +xy 3 )(1xy2z4xzx 2 z2xyz+2x 2 y 2 z2xz 2 2x 2 z 2 +2x 2 yz 2 ),12xxy+x 2 y 2 xzx 2 z+x 2 yz,(43z4y+2xy 2 +xyz2xzz 2 3yz+2xy 2 z+xyz 2 xz 2 2y 2 +xy 3 )(2xy+xy 2 z+xyz)][%1,%1(1xy2z4xzx 2 z2xyz+2x 2 y 2 z2xz 2 2x 2 z 2 +2x 2 yz 2 ),2+4x+x 2 +2xy2x 2 y 2 +2xz+2x 2 z2x 2 yz,%1(2xy+xy 2 z+xyz)][0,0,0,1]%1:=76z8y2x5xz+4xy 2 +2xyz2z 2 6yz2xz 2 +4xy 2 z+2xyz 2 4y 2 xy+2xy 3
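Why the second and the third components of R generate D can also be seen by hand (a small computation which we add): we have f_2 - 2 f_3 = x^2 and f_3 ≡ 1 + x (2 + y + z) (mod x^2), so f_3 (1 - x (2 + y + z)) ≡ 1 (mod x^2); hence 1 ∈ (x^2, f_3) ⊆ (f_2, f_3), i.e., f_2 and f_3 generate D.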

We can check that the matrix U is a solution of Problem 2 as we have:

Determinant(U);

1

simplify(Matrix(R).U);

1000

As the general procedure QSAlgorithm first tries to use the heuristic methods described in Section 3.3 before applying the general algorithm, it returns the same output as the one obtained with Heuristic. We also know that the first row of the inverse of U is R, a fact that can be checked using the procedure CompleteMatrix:

B:=CompleteMatrix(R, var, false);

B:=[1xy2z4xzx 2 z2xyz+2x 2 y 2 z2xz 2 2x 2 z 2 +2x 2 yz 2 ,2+4x+x 2 +2xy2x 2 y 2 +2xz+2x 2 z2x 2 yz,1+2x+xyx 2 y 2 +xz+x 2 zx 2 yz,2+x+yy 2 x+zxyz][1,0,0,0][0,7+6z+8y+2x+5xz4y 2 x2xyz+2z 2 +6yz+2xz 2 4y 2 xz2xyz 2 +4y 2 +xy2xy 3 ,4+3z+4y+2xz2y 2 xxyz+z 2 +3yz+xz 2 2y 2 xzxyz 2 +2y 2 xy 3 ,0][0,0,0,1]

A basis of the D-module M can be obtained by:

BasisOfCokernelModule(Matrix(R), var, false);

[1,0,0,0][0,7+6z+8y+2x+5xz4y 2 x2xyz+2z 2 +6yz+2xz 2 4y 2 xz2xyz 2 +4y 2 +xy2xy 3 ,4+3z+4y+2xz2y 2 xxyz+z 2 +3yz+xz 2 2y 2 xzxyz 2 +2y 2 x