2002

Title / Speaker and affiliation / Date, time, room

Region Tracking in Image Sequences by Level-Set Curve Evolution Equations
Abdol-Reza Mansouri, Researcher at INRS Télécommunications, Université du Québec à Montréal, Canada
21 January, 10:30, Room 006

Besov, Bayes, and Plato in Multiscale Image Modeling
Richard G. Baraniuk, Department of Electrical and Computer Engineering, Rice University, USA
4 February, 10:30, Coriolis room, Galois building

Robust, Multi-scale Optimization in Vision and Data Analysis: The Statistical Learning Approach
Joachim M. Buhmann, Institut für Informatik III, Universität Bonn
11 March, 10:30, Room 006

Adaptive Feature Selection for Supervised Learning
Mário Figueiredo, Institute of Telecommunications, Instituto Superior Técnico, Lisbon, Portugal
15 April, 10:30, Room 006

Mean-Field Approximations for Hidden Markov Model Selection
Florence Forbes, INRIA Rhône-Alpes, IS2 project
27 May, 10:30, Room 006

Image Restoration with the Curvelet Transform
Jean-Luc Starck, Service d'Astrophysique, Centre d'Études de Saclay
17 June, 10:30, Room 006

CORT: Classification OR Regression Trees
Robert D. Nowak, Electrical and Computer Engineering, Rice University, Houston, USA
18 June, 10:30, Coriolis room

Revisiting the Geometric Heat Flow: A Polygonal Flow Alternative
Hamid Krim, Electrical and Computer Engineering Department, North Carolina State University, Raleigh, USA
27 June, 16:00, Room 006

Composite Texture Synthesis
Alexey Zalesny, Computer Vision Lab, Swiss Federal Institute of Technology, Zurich, Switzerland
1 July, 14:30, Room 003

Analysis of Planar Shapes Using Geodesic Lengths on a Shape Manifold
Anuj Srivastava, Department of Statistics, Florida State University, Tallahassee, USA
15 July, 10:30, Room 003

Clustering based on distribution-free statistics
Eric Pauwels, Centre for Mathematics and Computer Science (CWI), Amsterdam, The Netherlands
16 September, 10:30, Coriolis room

Multiband image wavelet representations
Paul Scheunders, Vision Lab, Dept of Physics, University of Antwerp, Belgium
14 October, 10:30, Room E006

A Minimum Entropy Approach to Adaptive Image Polygonization
Lothar Hermes, Institut für Informatik III, Rheinische Friedrich-Wilhelms-Universität Bonn
4 November, 10:30, Room E006

Disocclusion by Joint Interpolation of Vector Fields and Gray Levels
Vicente Caselles, Universitat Pompeu Fabra, Barcelona
18 November, 10:30, Room E003

Characterization and Monitoring of Farming Practices by Machine Vision and Texture Analysis
Christophe Debain, Cemagref, Clermont-Ferrand
6 December, 10:30, Coriolis room

Coupled Hidden Markov Chains (HMMs) and Bayesian Networks for the Analysis and Recognition of Handwritten and Printed Characters
Marc Sigelle, Signal and Image Processing Department, ENST Paris
9 December, 10:30, Room E006

Class Recognition and Object Recognition
Pierre Moreels, Computer Vision Lab, Caltech, Pasadena, USA
16 December, 10:30, Room E006

Abstracts



Abdol-Reza Mansouri
Region Tracking in Image Sequences by Level-Set Curve Evolution Equations

A great deal of work has been devoted to region tracking in image sequences and, more recently, to tracking by level-set curve evolution equations. Tracking is one of the important problems of computer vision, with many applications in image coding and image analysis. Most of this work, however, rests on assumptions that severely constrain either the motion of the tracked region (a region assumed to move over a static background) or its intensity properties (strong intensity contrast between region and background). The resulting tracking equations depend so heavily on these assumptions that tracking becomes a mere by-product of detecting intensity boundaries or moving regions. The goal of this work is to derive the general form of the level-set evolution equations for region tracking, with no direct assumption on the motion, the shape, or the intensity properties of the tracked region. The starting point is a formulation of tracking as a Bayesian estimation problem. We will present in detail the steps leading to the solution of this estimation problem, and we will illustrate the performance of the resulting equations on numerous sequences exhibiting various types of motion (rigid and non-rigid, large and small displacements, static and moving backgrounds) and of contrast between the tracked region and the background.
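For readers unfamiliar with the level-set machinery the talk builds on, here is a minimal numerical sketch of curvature-driven level-set evolution in Python with numpy. It is a generic illustration only; the speaker's tracking equations are derived from a Bayesian criterion and are not reproduced here. Function names and step sizes are this example's assumptions.

```python
import numpy as np

def curvature(phi):
    """kappa = div( grad(phi) / |grad(phi)| ), the curvature of the level sets."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve(phi, n_iter=200, dt=0.2):
    """Curvature-driven evolution phi_t = kappa * |grad(phi)|.
    The region of interest is {phi < 0}; its boundary is the zero level set."""
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        phi = phi + dt * curvature(phi) * np.sqrt(gx ** 2 + gy ** 2)
    return phi
```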


Richard G. Baraniuk
Besov, Bayes, and Plato in Multiscale Image Modeling

There currently exist two distinct paradigms for modeling images. In the first, images are regarded as functions from a deterministic function space, such as a Besov smoothness class. In the second, images are treated statistically as realizations of a random process. This talk will review and indicate the links between the leading deterministic and statistical image models, with an emphasis on multiresolution techniques and wavelets. To overcome the major shortcomings of the Besov smoothness characterization, we will develop new statistical models based on mixtures on graphs. To illustrate, we will discuss applications in image estimation and segmentation.
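As background for the deterministic side of the talk, one standard wavelet-domain characterization of the Besov seminorm of a 2D image is the following (the notation is the common one from the wavelet shrinkage literature, stated here for reference rather than as the speaker's):

```latex
% w_{j,k}: L2-normalized 2D wavelet coefficients of f at scale j
|f|_{B^{s}_{p,q}} \;\asymp\;
\Biggl( \sum_{j \ge 0} 2^{\,jq\left(s + 1 - \frac{2}{p}\right)}
\Bigl( \sum_{k} |w_{j,k}|^{p} \Bigr)^{q/p} \Biggr)^{1/q}
```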


Joachim M. Buhmann
Robust, Multi-scale Optimization in Vision and Data Analysis: The Statistical Learning Approach

Robust models and efficient algorithms for inference are the central focus of pattern recognition and, in particular, of computational vision. Methods for data analysis should include three components: (i) structures in data have to be mathematically defined and evaluated; e.g., partitions, hierarchies, and projections are widely studied in the literature. This design step is highly task dependent and might yield different answers for the same data. (ii) Efficient search procedures should be devised for preferred structures. (iii) Solutions of this optimization process should be validated. As an example, this program for robust modeling and algorithm design is demonstrated for the problem of image segmentation based on texture and color cues. The assignment variables of pixels or image patches to segments are robustly estimated by entropy-maximizing algorithms such as Markov Chain Monte Carlo methods or deterministic annealing with EM-like iterative updates of model parameters. Multi-scale coarsening of the optimization variables yields a significant acceleration of the optimization process. To validate the optimization results, we have to determine a minimal approximation precision in order to be robust against overfitting to data fluctuations. Information-theoretic methods can be used to derive upper bounds for large deviations of the optimization procedure.
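As a concrete illustration of the deterministic-annealing idea mentioned above, here is a minimal Python sketch of soft clustering with a temperature parameter. All names, parameter values, and the use of plain point data are this example's assumptions; the talk's algorithms operate on texture and color cues.

```python
import numpy as np

def deterministic_annealing_kmeans(X, k, T0=10.0, Tmin=0.01, cooling=0.9, n_inner=20):
    """Soft clustering with a temperature that is gradually lowered.
    At high T all assignments are nearly uniform; as T -> 0 the Gibbs
    assignment probabilities harden toward standard k-means."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    T = T0
    while T > Tmin:
        for _ in range(n_inner):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
            logits = -d2 / T
            logits -= logits.max(axis=1, keepdims=True)                # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)                          # Gibbs assignment probabilities
            centers = (p.T @ X) / p.sum(axis=0)[:, None]               # EM-like centroid update
        T *= cooling
    return centers, p
```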

In the second part of the talk I will present applications of this program to selected data analysis problems, e.g., to joint color quantization and dithering, to the analysis of remote sensing data with adaptive polygonalization, to multi-scale shape description of line drawings and to the traveling salesman problem with unreliable travel times.


Mário Figueiredo
Adaptive Feature Selection for Supervised Learning

Supervised learning is one of the central topics of pattern recognition and machine learning; its goal is to find a functional relationship that maps an "input" to an "output", given a set of (possibly noisy) examples. The learned function is evaluated by how well it generalizes, that is, by how accurately it performs on new data assumed to follow the same distribution as the training data.

To achieve good generalization, it is necessary to control the "complexity" of the learned function. In Bayesian approaches, this is done by placing a prior on the parameters defining the function being learned. We propose a Bayesian approach to supervised learning that leads to sparse solutions, that is, solutions in which irrelevant parameters are automatically set exactly to zero. Other ways of achieving this selection behavior involve (hyper)parameters that have to be somehow adjusted or estimated from the training data; in contrast, our approach involves no (hyper)parameters to be adjusted or estimated.

Experiments with several benchmark data sets show that the proposed approach exhibits state-of-the-art performance. In particular, our method outperforms support vector machines and performs competitively with the best alternative techniques, although it involves no tuning or adjusting of sparseness-controlling hyper-parameters.
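A toy sketch of how such a parameter-free sparsity prior can play out algorithmically: with a Jeffreys hyperprior on per-weight variances, an EM iteration reduces to a reweighted ridge regression whose weights can collapse exactly to zero. This is an illustrative reconstruction in the spirit of the abstract, not necessarily the speaker's exact algorithm; the noise variance sigma2 is assumed known here.

```python
import numpy as np

def sparse_regression_em(X, y, sigma2=0.1, n_iter=50, eps=1e-8):
    """Toy EM for linear regression with a parameter-free sparsity prior
    (a Jeffreys hyperprior on per-weight variances). The M-step is a
    reweighted ridge regression; weights driven to ~0 are pruned."""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        V = np.diag(np.abs(w))                 # scale matrix built from current weights
        A = V @ X.T @ X @ V + sigma2 * np.eye(d)
        w = V @ np.linalg.solve(A, V @ X.T @ y)
        w[np.abs(w) < eps] = 0.0               # exact zeros: automatic feature selection
    return w
```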


Florence Forbes
Mean-Field Approximations for Hidden Markov Model Selection

Hidden Markov random field models arise naturally in problems such as image segmentation, where each pixel must be assigned to a class on the basis of the observations. Choosing the probabilistic model that best accounts for the observed data is therefore essential.

A commonly used model selection criterion is Schwarz's Bayesian Information Criterion (BIC), but for hidden Markov fields the dependence structure of the model makes exact computation of the criterion intractable. We propose approximations of BIC based on the mean-field principle from statistical physics. Mean-field theory approximates a Markov field by a system of independent variables, for which the computations become tractable.
Using this principle, we first introduce a family of criteria obtained by approximating the Markov distribution that appears in the usual penalized-likelihood expression of BIC. We then consider a rewriting of BIC in terms of normalizing constants (partition functions), which has the advantage of allowing finer approximations, and we derive new criteria from optimal bounds on the partition functions.

To illustrate the performance of these criteria, we consider the problem of choosing the correct number of classes for segmentation.
Results observed on simulated and real data are promising. They confirm that criteria of this type properly account for spatial information. In particular, they outperform the BIC criterion computed for independent mixture models.
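For reference, the ingredients named above can be written compactly as follows (a standard presentation, not necessarily the talk's notation):

```latex
\mathrm{BIC} \;=\; \log p(\mathbf{y} \mid \hat{\theta}) \;-\; \frac{d}{2}\,\log n
% penalized likelihood: d free parameters, n observations

P(\mathbf{z}) \;\approx\; \prod_{i} q_i(z_i)
% mean-field factorization of the hidden Markov field by independent variables

Z \;\ge\; Z_0 \, e^{-\langle H - H_0 \rangle_{0}}
% Gibbs-Bogoliubov-Feynman lower bound on a partition function
```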


Jean-Luc Starck
Image Restoration with the Curvelet Transform

Wavelets have enjoyed immense success over the past ten years and have been used in many applications such as filtering, deconvolution, and image compression.
While wavelets are particularly effective at detecting isotropic structures at different scales, they are not optimal for finding anisotropic objects such as contours. New multiscale transforms, the ridgelet and curvelet transforms, have recently been developed; they allow objects with strong anisotropy to be detected optimally.
We will describe the ridgelet transform and the curvelet transform, and we will show how these new tools can be applied in image processing applications.
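Curvelet implementations are not as widely packaged as wavelets, but the transform-domain thresholding scheme used in curvelet restoration can be illustrated with an ordinary wavelet transform. A hedged Python sketch using PyWavelets follows; the wavelet choice "db4", the level, and the threshold rule are illustrative assumptions, not the speaker's settings.

```python
import numpy as np
import pywt  # PyWavelets, used here as a stand-in for a curvelet transform

def denoise(img, wavelet="db4", level=3, k=3.0):
    """Transform-domain denoising by hard thresholding of detail coefficients,
    the same scheme curvelet restoration applies to anisotropic features."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust (MAD) noise estimate
    thresh = k * sigma
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="hard") for c in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```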


Robert D. Nowak
CORT: Classification OR Regression Trees

Tree-based methods are powerful tools for signal and image estimation and classification. The CART (Classification and Regression Trees) program aims to balance the fitness of a tree (classifier or estimator) to data with the complexity of the tree (measured by the number of leaf nodes). The basic idea in CART is a standard complexity regularization approach: the empirical error (squared error or misclassification rate) is balanced by a penalty term proportional to the number of degrees of freedom in the model (leaf nodes in the tree). The form of the CART optimization criterion is error + a*k, where k is the number of leaf nodes in the tree. This form of regularization leads to a very efficient, bottom-up tree pruning strategy for optimization, and is the "correct" form of complexity regularization for estimation/regression problems. However, in the classification case the linear growth of the penalty outpaces the growth of the expected misclassification error, and consequently penalizes larger trees too severely. Rather than employing the same regularization for classification *and* regression, one should consider classification *or* regression as two quite different problems. The appropriate optimization criterion for classification trees takes the form error + a*k^(1/2).

We review performance bounds for estimation/regression trees and derive new bounds on the performance of classification trees using the square-root complexity regularization criterion. The adaptive resolution of tree classifiers enables them to focus on the d* dimensional decision boundary, instead of estimating the full d > d* dimensional posterior probability function. Under a modest regularity assumption (in terms of the box-counting measure) on the underlying optimal Bayes decision boundary, we show that complexity regularized tree classifiers converge to the Bayes decision boundary at nearly the minimax rate, and that no classification scheme (neural network, support vector machine, etc.) can perform significantly better in this minimax sense. Although minimax bounds in pattern classification have been investigated in previous work, the emphasis has been on placing regularity assumptions on the posterior probability function rather than the decision boundary. Studying the impact of regularity in terms of the decision boundary sheds new light on the "learning from data" problem and suggests new principles for investigating the performance of pattern classification schemes.
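The practical difference between the two penalties is easy to state in code. Given the empirical error of the best pruned subtree for each leaf count, CART selects with a linear penalty, while the talk's analysis suggests a square-root penalty for classification. A minimal sketch; the constant a and the candidate list are assumptions of this example.

```python
import numpy as np

def best_subtree(err, a=0.05, classification=True):
    """Select a pruned subtree: err[i] is the empirical error of the best
    subtree with i+1 leaves. CART uses the linear penalty a*k; the CORT
    analysis argues for a*sqrt(k) in the classification case."""
    err = np.asarray(err, dtype=float)
    k = np.arange(1, len(err) + 1)
    penalty = a * np.sqrt(k) if classification else a * k
    return int(np.argmin(err + penalty)) + 1   # leaf count of the selected subtree
```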




Hamid Krim
Revisiting the Geometric Heat Flow: A Polygonal Flow Alternative

In recent years curve evolution, applied to a single contour or to the level sets of an image via partial differential equations, has emerged as an important tool in image processing and computer vision. Curve evolution techniques have been utilized in problems such as image smoothing, segmentation, and shape analysis. We give a stochastic interpretation of the basic curve smoothing equation, the so-called geometric heat equation, and show that this evolution amounts to a tangential diffusion movement of the particles along the contour.

Moreover, assuming that a priori information about the shapes of objects in an image is known, we present generalizations of the geometric heat equation designed to preserve certain features in these shapes while removing noise. We also show how these new flows may be applied to smooth noisy curves without destroying their larger scale features, in contrast to the original geometric heat flow which tends to circularize any closed curve.
We will also briefly discuss their application to shape approximation.
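For intuition, the plain geometric heat flow has a simple discrete analogue for polygons: move each vertex toward the midpoint of its neighbours. The numpy sketch below implements this baseline flow, which, like the continuous flow, eventually circularizes and shrinks any closed polygon; the feature-preserving variants discussed in the talk modify this motion and are not reproduced here.

```python
import numpy as np

def polygonal_flow(P, n_iter=200, dt=0.1):
    """Discrete analogue of the geometric heat flow for a closed polygon P
    of shape (N, 2): each vertex moves toward the midpoint of its two
    neighbours, smoothing the polygon as curvature flow smooths a curve."""
    P = P.astype(float).copy()
    for _ in range(n_iter):
        laplacian = 0.5 * (np.roll(P, 1, axis=0) + np.roll(P, -1, axis=0)) - P
        P += dt * laplacian
    return P
```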


Alexey Zalesny
Composite Texture Synthesis

Many textures require complex models to describe their intricate structures. Modeling them can be simplified if they are considered composites of simpler subtextures. After an initial, unsupervised segmentation of the composite texture into its subtextures, it can be described at two levels. One is a label map texture, which captures the layout of the different subtextures. The other consists of the subtextures themselves. The scheme also includes the mutual influences between subtextures, found mainly near their boundaries.


Anuj Srivastava
Analysis of Planar Shapes Using Geodesic Lengths on a Shape Manifold

For analyzing shapes of planar, closed curves, we propose a mathematical representation of closed curves using "direction" functions (integrals of the signed curvature functions with respect to arc lengths). Shapes are represented as elements of an infinite-dimensional manifold and their pairwise differences are quantified using the lengths of geodesics connecting them on this manifold. Exploiting the periodic nature of these representations, we use a Fourier basis to discretize them and use a gradient-based shooting method for finding geodesics between any two shapes. Lengths of geodesics provide a metric for comparing shapes. This metric is intrinsic to the shapes and does not require any deformable template framework. In this talk, I will illustrate some applications of this shape metric: (i) clustering of planar objects based on their shapes, and (ii) statistical analysis of shapes including computation of intrinsic means and covariances.

This research is being performed in collaboration with Prof. Eric Klassen of FSU.
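A minimal sketch of the representation step (not of the geodesic shooting itself): computing the direction function of a closed curve and its Fourier coefficients. It assumes the curve is given as an (N, 2) array of (x, y) points sampled uniformly by arc length; all function names are this example's.

```python
import numpy as np

def direction_function(curve):
    """Direction function theta(s) of a closed planar curve: the angle of the
    unit tangent along arc length. For a simple closed curve, theta winds by
    2*pi over one traversal."""
    d = np.diff(curve, axis=0, append=curve[:1])       # wrap-around differences
    return np.unwrap(np.arctan2(d[:, 1], d[:, 0]))

def fourier_coeffs(theta, n_modes=20):
    """Fourier coordinates of the periodic part of theta, the kind of
    finite-dimensional discretization a shooting method can work with."""
    winding = np.linspace(0.0, 2.0 * np.pi, len(theta), endpoint=False)
    return np.fft.rfft(theta - winding)[:n_modes]
```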


Eric Pauwels
Clustering based on distribution-free statistics

In this talk I will discuss a criterion for cluster-validity that is based on distribution-free statistics. Given a dataset, the idea is to construct the simplest possible data-density that is still compatible with the data. Compatibility is measured by means of statistical tests that quantify how likely the hypothesized underlying distribution is in view of the observed data. Combining this with a criterion for smoothness yields a functional that can be minimized explicitly. The advantage of this approach is that the resulting clustering depends on just one parameter that has a clear probabilistic interpretation. I will discuss the 1-dimensional case in detail and outline how this strategy can be generalized to higher dimensions.
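One way to make the compatibility idea concrete is with the distribution-free Kolmogorov-Smirnov test from scipy. The sketch below scans candidate parametric families in order of complexity and returns the simplest one the data do not reject. Note two caveats: fitting parameters before testing makes the p-values only approximate, and the candidate list is an assumption of this example, not part of the talk.

```python
import numpy as np
from scipy import stats

def simplest_compatible(data, candidates, alpha=0.05):
    """Scan candidate families in order of increasing parameter count and
    return the first one the Kolmogorov-Smirnov test does not reject.
    Caveat: fitting on the same data makes the p-value approximate
    (no Lilliefors-style correction in this sketch)."""
    for name, dist, n_params in sorted(candidates, key=lambda c: c[2]):
        params = dist.fit(data)
        stat, pval = stats.kstest(data, dist.cdf, args=params)
        if pval > alpha:            # data are compatible with this density
            return name, params, pval
    return None

# Illustrative candidate list (an assumption of this example):
# candidates = [("normal", stats.norm, 2), ("student-t", stats.t, 3)]
```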



Paul Scheunders
Multiband image wavelet representations

In this work, a wavelet representation of multiband images is presented. The representation is based on a multiresolution extension of the First Fundamental Form, which provides access to the gradient information of vector-valued images. With this extension, multiscale edge information of multiband images is extracted. Moreover, a wavelet representation is obtained that, after inverse transformation, accumulates all edge information in a single grey-level image. A redundant wavelet representation using dyadic wavelet frames is presented first and then extended to orthogonal wavelet bases using the Discrete Wavelet Transform (DWT). The representation is shown to be a natural framework for image fusion, and an algorithm is presented for the fusion and merging of multispectral images. The concept is successfully applied to the problem of multispectral and hyperspectral image merging. Other problems discussed include anisotropic diffusion filtering, enhancement, denoising, and segmentation of multiband images.
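A small numpy sketch of the First Fundamental Form computation underlying the representation: accumulate the 2x2 structure tensor of the band gradients and take its largest eigenvalue as the multiband edge strength. This is the single-scale version; the talk's multiresolution extension applies the same idea per wavelet scale. The (H, W, B) layout is an assumption of this example.

```python
import numpy as np

def first_fundamental_form_edges(img):
    """Multiband edge strength: build J = sum_b grad(I_b) grad(I_b)^T per
    pixel and return sqrt of its largest eigenvalue (the multiband
    gradient magnitude). img has shape (H, W, B)."""
    J11 = np.zeros(img.shape[:2]); J22 = np.zeros_like(J11); J12 = np.zeros_like(J11)
    for b in range(img.shape[2]):
        gy, gx = np.gradient(img[:, :, b].astype(float))
        J11 += gx * gx; J22 += gy * gy; J12 += gx * gy
    disc = np.sqrt((J11 - J22) ** 2 + 4.0 * J12 ** 2)
    lam_max = 0.5 * (J11 + J22 + disc)   # largest eigenvalue of the 2x2 tensor
    return np.sqrt(lam_max)
```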


Lothar Hermes
A Minimum Entropy Approach to Adaptive Image Polygonization

In this talk, I introduce a novel adaptive image segmentation algorithm which represents images by polygonal segments. The algorithm is based on an intuitive generative model for pixel intensities. Its associated cost function can be effectively optimized by a hierarchical triangulation algorithm, which iteratively refines and reorganizes a triangular mesh to extract a compact description of the essential image structure. After analyzing fundamental convexity properties of our cost function, an information-theoretic bound is adapted to assess the statistical significance of a given triangulation step. The bound effectively defines a stopping criterion to limit the number of triangles in the mesh, thereby avoiding undesirable overfitting phenomena. It also facilitates the development of a multi-scale variant of the triangulation algorithm, which substantially reduces its computational demands. The algorithm has various applications in contextual classification, remote sensing, and visual object recognition. It is particularly suitable for the segmentation of noisy imagery.
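One plausible instantiation of the per-segment term in such a generative model, shown for intuition only (the talk's exact cost function is not reproduced here): the negative log-likelihood of a segment's pixels under a Gaussian intensity model, which behaves like an entropy measure of the segment.

```python
import numpy as np

def region_coding_cost(intensities):
    """Negative log-likelihood of one segment's pixel intensities under a
    Gaussian model with ML parameters; summed over all polygonal segments
    this gives an entropy-style fitness term for a triangulation."""
    var = intensities.var() + 1e-8          # ML variance, guarded against zero
    n = intensities.size
    return 0.5 * n * (np.log(2.0 * np.pi * var) + 1.0)
```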


Vicente Caselles
Disocclusion by Joint Interpolation of Vector Fields and Gray Levels

We shall discuss a variational approach for filling in regions of missing data in 2D and 3D digital images. Applications of this technique include the restoration of old photographs, the removal of superimposed text such as dates, subtitles, or advertising, and image zooming. The approach we shall discuss is based on a joint interpolation of the image gray levels and gradient/isophote directions, smoothly extending the isophote lines into the holes of missing data. The underlying process can be seen as an interpretation of the Gestalt principle of good continuation. We study the existence of minimizers of our functional and its approximation by smoother functionals. We then present the numerical algorithm used to minimize it and display some numerical experiments.
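As a baseline for comparison, the simplest variational disocclusion interpolates gray levels alone by solving the Laplace equation on the hole; the functional discussed in the talk additionally transports the isophote directions. A minimal numpy sketch, assuming the hole does not touch the image border:

```python
import numpy as np

def harmonic_inpaint(img, mask, n_iter=2000):
    """Fill the masked hole by Jacobi iterations on the discrete Laplace
    equation: each hole pixel becomes the average of its 4 neighbours.
    This interpolates gray levels only, without isophote transport."""
    out = img.astype(float).copy()
    hole = mask.astype(bool)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[hole] = avg[hole]
    return out
```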


Christophe Debain
Characterization and Monitoring of Farming Practices by Machine Vision and Texture Analysis

In a context where the quality of agricultural products is becoming a major consumer concern, measuring their physico-chemical properties is an important issue for our society. While this is indisputably true for the final product sold, it also holds throughout the production chain. Our projects fit into this effort by addressing the indirect measurement of characteristics of agricultural products or crops. Within this broad field of applications, our developments have focused on analyzing images of these products acquired by a vision sensor, and more specifically on the texture of these images. Indeed, for many agricultural products the notion of shape cannot be directly exploited (plant textures, silage products, crops viewed from the air), and studying image texture can provide an indirect measurement of physico-chemical characteristics or serve as a segmentation tool. The aim is therefore to find the parameters most closely related to the characteristics of the products under study. Some are very classical, while others, proposed more recently, come from studies aimed at constructing complex media with known statistics (soils, wind, ...). Beyond good discrimination of image texture, these new parameters offer the possibility of relating the measurements to certain physico-chemical properties of the observed scenes (roughness of seedbeds, geometric statistics of corn silage pieces, ...).


Marc Sigelle
Coupled Hidden Markov Chains (HMMs) and Bayesian Networks for the Analysis and Recognition of Handwritten and Printed Characters

This talk surveys a body of internship and PhD work recently completed or under way at ENST.

The characteristics of printed or handwritten Latin characters allow a joint analysis of a character image along its two natural directions: vertical ("columns") and horizontal ("rows"). In a first approach (printed characters), two independent (uncoupled) vertical and horizontal chains are considered.

We then analyze the assumptions under which their recognition scores can be combined (fusion), and we compare the resulting performance with that of a single HMM processing the vertical and horizontal observations simultaneously (data fusion). The influence of the underlying parameters (number of hidden states, etc.) is analyzed for both strategies. New variants have been introduced for the study of handwritten characters: a generalized left-right model, "forward-backward" analysis, etc. Finally, we present Bayesian network modeling, along with recent and promising experimental results, which would make it possible to handle the coupling between several HMMs in a very general way.
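The per-chain scores being fused are ordinary HMM likelihoods. Below is a compact numpy sketch of the forward algorithm in the log domain, which would be run once on the column observations and once on the row observations of a character image; the matrix names and shapes are this example's assumptions.

```python
import numpy as np

def forward_log_score(log_pi, log_A, log_B):
    """Log-likelihood of one observation stream under an HMM.
    log_pi: (S,) initial log-probabilities; log_A: (S, S) with
    log_A[i, j] = log P(state j | state i); log_B: (T, S) per-frame
    log-likelihoods of the observations under each state."""
    alpha = log_pi + log_B[0]
    for t in range(1, len(log_B)):
        # alpha_t(j) = log_B[t, j] + logsum_i( alpha_{t-1}(i) + log_A[i, j] )
        alpha = log_B[t] + np.logaddexp.reduce(log_A + alpha[:, None], axis=0)
    return np.logaddexp.reduce(alpha)
```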


Pierre Moreels
Class Recognition and Object Recognition

Recognition problems are a fundamental aspect of artificial intelligence. To be autonomous, a robot must be able to identify landmarks such as buildings, roads, and obstacles.

- The first part of the talk focuses on class recognition. Its goal is to distinguish images containing a car from images containing an animal, and so on, without necessarily identifying precisely which car or which animal. An immediate question is to identify what is characteristic of a given class. Our "constellation of parts" model represents a class as an assembly of rigid parts with variable relative positions. The position of each part is represented by a probability density estimated during the learning phase.

- The second part of the talk addresses object recognition. Here the objective is more specific: identifying images that contain a particular car or animal. Each object is represented by a large number of interest points characterized by their position, scale, orientation, and appearance. The recognition phase matches interest points in the new image against interest points stored in the database. The result is a transformation between a database image and the new image. The large number of interest points involved ensures the reliability of the selected transformation, even if some interest-point matches are incorrect.
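The matching step described above is, at its core, nearest-neighbour search over descriptors. A minimal numpy sketch with the classical ratio test follows; the descriptor arrays and the 0.8 threshold are assumptions of this example, not necessarily the speaker's choices.

```python
import numpy as np

def match_descriptors(desc_new, desc_db, ratio=0.8):
    """Nearest-neighbour matching of interest-point descriptors with a
    ratio test: keep a match only when the best database descriptor is
    clearly closer than the second best. Inputs are (N, D) arrays."""
    matches = []
    for i, d in enumerate(desc_new):
        dist = np.linalg.norm(desc_db - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:     # unambiguous nearest neighbour
            matches.append((i, j1))
    return matches
```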