2001

Title / Speaker / Affiliation / Date / Time / Location
Boolean Models and Applications
Michel Schmitt

Director of the Geostatistics Laboratory
Ecole des Mines de Paris
Jan. 22 10:30 003
Modelling SAR Images with a Generalisation of the Rayleigh Distribution
Ercan E. Kuruoglu

ERCIM FELLOW
Istituto di Elaborazione della Informazione-CNR,
Pisa, Italy
Jan. 26 10:30 003
Multiscale Methods for Image Restoration
Albert Bijaoui

CERGA Department
Observatoire de la Côte d'Azur
Nice
Feb. 12 10:30 003
ClustFuse: New Bayesian learning and data grouping method with applications to image information content exploration
Mihai Datcu

German Aerospace Center - DLR
Remote Sensing Technology Institute - IMF - Germany
Mar. 12 10:30 006
Perceptual Grouping using Markov Random Fields and Cue Integration
Daniel Schlüter

Technical Faculty
Bielefeld University - Germany
Apr. 23 10:30 003
Unsupervised Detection of Changes in Multitemporal Remote Sensing Images
Sebastiano B. Serpico

Dept of Biophysical and Electronic Engineering
University of Genoa - Italy
May 14 10:30 006
Multiscale Probability Models - Blending Wavelets, Recursive Partitioning, and Graphical Models
Eric D. Kolaczyk

Department of Mathematics and Statistics
Boston University (USA)
June 8 14:30 003
Image magnification with multiresolution analysis
Frederic Truchetet

Université de Bourgogne - Le2i
Laboratoire Génie Electrique et Informatique Industrielle
June 11 14:30 003
Parameter-Free Analysis of Digital Images
Jean-Michel Morel

CMLA, ENS Cachan
June 18 10:30 006
Image analysis for road scene analysis
Pierre Charbonnier

Chargé de Recherche, LCPC
Laboratoire Régional des Ponts et Chaussées de Strasbourg
July 9 10:30 006
From Images to Virtual Cities
Hassan Foroosh

Senior Research Scientist
Dept. of EECS, University of California, Berkeley (USA)
July 23 14:30 006
Multiscale Likelihood Analysis and Inverse Problems in Imaging
Robert Nowak

Invited Professor
Assistant Professor, Dept. of ECE
Rice University, Houston (USA)
July 25 10:30 006
3-D Super-Resolution
Robin Morris

NASA Ames Research Center
Moffett Field, USA
Sept. 07 10:30 006
Optimal Level Curves and Energy Minimization in Image Segmentation
Charles Kervrann

INRA - Biométrie
Jouy-en-Josas
Sept 17 10:30 003
On missing data in pictures
Anil Kokaram

University of Dublin
Trinity College, Dublin
Sept 20 14:00 003
Gauss Mixture Vector Quantization for Classification and Compression
Robert M. Gray

Information Systems Lab
Department of Electrical Engineering
Stanford University
Oct 22 10:30 006
An Adaptive Scheme for the Solution of the Eikonal Equation and Applications
Maurizio Falcone

Department of Mathematics
University of Rome "La Sapienza"
Italy
Oct 29 14:00 003
Registration and segmentation of volumetric medical images: statistical and topological constraints
Fabrice Heitz

Laboratoire des Sciences de l'Image,
de l'Informatique et de la Télédétection
CNRS, Université Strasbourg I.
Nov 19 10:30 003
An MCMC Approach to Stereo Vision
Julien Senegas

Centre de Géostatistique
Ecole des Mines de Paris, Fontainebleau
Dec 17 10:30 006
 
 
Abstracts



Michel Schmitt
Boolean Models and Applications

The Boolean model is a model from stochastic geometry that represents a random superposition of independent objects. We will look more particularly at:
- the inference of the model parameters, under various assumptions on the objects (connectivity, convexity, boundedness);
- non-conditional and conditional simulation by birth-and-death processes (a minimal simulation sketch follows this abstract);
- the application of these models to the characterization of underground oil reservoirs.
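
A minimal sketch of a non-conditional simulation, assuming a stationary Boolean model of discs with a homogeneous Poisson germ process and exponentially distributed radii; all names, intensities and radii below are illustrative, and the conditional birth-and-death simulation discussed in the talk is not shown.

import numpy as np

rng = np.random.default_rng(0)

def boolean_model_mask(width=200, height=200, lam=5e-4, mean_radius=8.0):
    # Non-conditional simulation of a Boolean model of discs on a pixel grid:
    # germs follow a homogeneous Poisson process of intensity lam (per pixel),
    # each germ carries an independent disc with an exponential radius.
    n_germs = rng.poisson(lam * width * height)
    cx = rng.uniform(0, width, n_germs)
    cy = rng.uniform(0, height, n_germs)
    radii = rng.exponential(mean_radius, n_germs)
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width), dtype=bool)
    for x0, y0, r in zip(cx, cy, radii):
        mask |= (xx - x0) ** 2 + (yy - y0) ** 2 <= r ** 2
    return mask

mask = boolean_model_mask()
# For this model the area fraction should be close to 1 - exp(-lam * E[disc area])
print("simulated area fraction:", mask.mean())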



Ercan E. Kuruoglu
Modelling SAR Images with a Generalisation of the Rayleigh Distribution

In this talk, I will present results of the work done with Josiane Zerubia during my stay at INRIA as an ERCIM postdoctoral fellow. Speckle noise, which is caused by the coherent addition of out-of-phase reflections, is one of the main performance-limiting factors in Synthetic Aperture Radar (SAR) imagery.
Efficient statistical modeling of SAR images is a prerequisite for developing successful speckle cancellation techniques. Traditionally, owing to the central limit theorem, the amplitude image has been assumed to follow a Rayleigh distribution. However, some experimental data do not follow the Rayleigh law. The alternative models suggested in the literature either are empirical, with little theoretical justification, or are computationally expensive. In this talk, we develop a generalised version of the Rayleigh distribution based on the assumption that the real and imaginary parts of the received signal follow an isotropic alpha-stable law. We also present novel estimation methods based on negative-order statistics for model fitting. Our experimental results show that the new model can describe a wide range of data (in particular urban-area images) that could not be described by the classical Rayleigh model or other alternative models.
Time permitting, we will also introduce skewed stable fields for texture modelling and present some results on image segmentation.
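
To give a feel for the amplitude model above (this is not the authors' estimator), one can draw the real and imaginary parts of the complex return from a symmetric alpha-stable law and inspect the resulting amplitude; drawing the two parts independently only approximates the isotropic case, and all parameter values are illustrative. A short Python sketch using scipy:

import numpy as np
from scipy.stats import levy_stable, rayleigh

rng = np.random.default_rng(0)
n, alpha, scale = 50_000, 1.5, 1.0

# Real and imaginary parts of the backscattered signal drawn from a symmetric
# alpha-stable law (independent components, approximating the isotropic case)
re = levy_stable.rvs(alpha, 0.0, scale=scale, size=n, random_state=rng)
im = levy_stable.rvs(alpha, 0.0, scale=scale, size=n, random_state=rng)
amplitude = np.hypot(re, im)                 # heavy-tailed "generalised Rayleigh" amplitude

# The classical Rayleigh amplitude corresponds to alpha = 2 (Gaussian components)
ray = rayleigh.rvs(scale=scale, size=n, random_state=rng)

# The heavier tail shows up in the high quantiles of the amplitude
for q in (0.5, 0.9, 0.99, 0.999):
    print(q, np.quantile(amplitude, q), np.quantile(ray, q))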



Albert Bijaoui
Multiscale Methods for Image Restoration

Images of the sky most often show diffuse, hierarchically organized structures. For this reason, the wavelet transform has proved particularly well suited to their compression. This ability to concentrate the useful information in a small number of coefficients has led to the use of the wavelet transform for denoising and deconvolving astronomical images.
To avoid the aliasing effects related to the subsampling of the discrete transform, we have exploited the properties of the à trous algorithm. The general strategy is based on selecting the coefficients that are statistically different from 0. Since this choice depends on the nature of the noise, we have examined the cases of Gaussian noise, Poisson noise with low and high event counts, combined Gaussian and Poisson noise, and Rayleigh and exponential noise. Once the coefficients have been selected, we have proposed several reconstruction strategies: a selective attenuation of the coefficients, an iterative process based on the notion of significant residual, and the introduction of a regularization constraint. After presenting the various aspects of these methods, we will show different applications in astronomy, remote sensing and medical imaging.
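
To make the à trous strategy concrete, a small sketch for the Gaussian-noise case with a B3-spline scaling function and a hard k-sigma selection of the coefficients; the iterative and regularized reconstructions mentioned above are not shown, and the filter, thresholds and test image are illustrative.

import numpy as np
from scipy.ndimage import convolve

def atrous_planes(image, n_scales=4):
    # A trous (starlet) decomposition: image = sum of wavelet planes + smooth plane
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0      # B3-spline filter
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        step = 2 ** j                                      # dilate the filter with 2**j - 1 zeros
        hk = np.zeros(4 * step + 1)
        hk[::step] = h
        c_next = convolve(c, np.outer(hk, hk), mode="mirror")
        planes.append(c - c_next)
        c = c_next
    return planes, c

def ksigma_denoise(image, n_scales=4, k=3.0):
    planes, smooth = atrous_planes(image, n_scales)
    out = smooth.copy()
    for w in planes:
        sigma = np.median(np.abs(w)) / 0.6745              # robust per-scale noise estimate
        out += np.where(np.abs(w) > k * sigma, w, 0.0)     # keep only significant coefficients
    return out

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[24:40, 24:40] = 5.0
noisy = clean + rng.normal(0, 1.0, clean.shape)
print("MSE before/after:", np.mean((noisy - clean) ** 2), np.mean((ksigma_denoise(noisy) - clean) ** 2))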



Mihai Datcu
ClustFuse: New Bayesian learning and data grouping method
with applications to image information content exploration

Information technology is developing very fast, and the role of image communication, the exploration of distributed picture archives, internet search engines, visualization and other related technologies is increasing accordingly.
Data grouping is one of the most important methodologies for exploring data content. However, data clustering is still an open field: many methods have been explored, but the complexity of the data structures can be very high, which makes it difficult to find general solutions.
The presentation focuses on Bayesian grouping methods applied to exploring patterns in heterogeneous data, such as image archives.
After a short overview, a new method is introduced. The method is based on a hierarchical Bayesian modeling of the information content of images, and consists of two processing steps:
i) an unsupervised classification followed by an
ii) interactive learning procedure.
The learning step is implemented using a Bayesian network.
The method is exemplified for applications in image classification, data fusion, and image information mining.



Daniel Schlüter
Perceptual Grouping using Markov Random Fields and Cue Integration

A common feature of computer vision systems is the use of levels of increasing abstraction to represent intermediate results, thus successively bridging the gap between raw image data and the final result. To elaborate such a hierarchical representation, we propose a contour-based grouping hierarchy based on principles of perceptual organization. Exploiting regularities in the image, we aim at enhancing the efficiency and robustness of subsequent processing steps of an image analysis system by reducing ambiguities in the intermediate representation and by realizing image primitives at a higher level of abstraction. To this end, grouping hypotheses are first generated within the hierarchy using local evaluation strategies motivated by different Gestalt principles. Since this generation is based on local evidence only, the hypotheses have to be judged in a global context. We employ a Markov Random Field to model context dependencies, and energy minimization yields a consistent interpretation of the image data with groupings from the hierarchy. Since this grouping hierarchy is contour-based, it inherits the drawbacks of contour segmentation. Therefore, the second issue we address is the integration of cues from region segmentation into the contour-based grouping process and vice versa. This integration is done at the level of grouping hypotheses, with complete regions and groupings supporting and constraining each other. Finally, this perceptual grouping scheme and the integration of contour grouping with region information enhance object recognition processes.
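
As a loose toy illustration of the "local evidence plus global MRF consistency" idea (this is not the actual model or optimizer of the talk), one can give each grouping hypothesis a binary accept/reject label, a unary score standing in for the local Gestalt evidence, a pairwise penalty for accepting incompatible hypotheses, and run ICM as a simple energy minimizer; all scores here are random placeholders.

import numpy as np

rng = np.random.default_rng(0)
n = 20                                             # number of grouping hypotheses
evidence = rng.uniform(-1, 1, n)                   # local (Gestalt-based) support, > 0 favours acceptance
conflict = np.triu(rng.random((n, n)) < 0.1, 1)    # hypothetical pairwise incompatibilities
conflict = conflict | conflict.T

def energy(labels, beta=2.0):
    unary = -np.sum(evidence * labels)                                # reward supported hypotheses
    pair = 0.5 * beta * np.sum(conflict * np.outer(labels, labels))   # penalize accepted conflicting pairs
    return unary + pair

labels = (evidence > 0).astype(int)                # initialisation from local evidence alone
for _ in range(10):                                # ICM: greedy single-site updates until stable
    changed = False
    for i in range(n):
        for v in (0, 1):
            trial = labels.copy(); trial[i] = v
            if energy(trial) < energy(labels):
                labels, changed = trial, True
    if not changed:
        break
print("accepted hypotheses:", np.flatnonzero(labels))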



Sebastiano B. Serpico
Unsupervised Detection of Changes in Multitemporal Remote Sensing Images

For several applications (such as disaster management and the assessment of land erosion, deforestation, urban growth and crop development), remote sensing images can make a valuable contribution, in particular through techniques able to reveal the changes that have occurred in a given study area. The difficulty of collecting ground-truth information regularly over time makes it important to develop unsupervised change-detection techniques to help in the analysis of temporal sequences of remote sensing images.
An unsupervised change detection problem can be viewed as a classification problem with only two classes corresponding to the change and no-change areas, respectively. A possible approach to solve this problem consists of the computation of the difference between the multiband images acquired at two different times, followed by the application of a simple thresholding procedure to the length of the difference vector (computed on a pixel basis). However, the selection of the best threshold value is not a trivial problem.
In the seminar, several thresholding methods will be considered and compared. The minimum-error thresholding method [1] computes the threshold that minimizes the average pixel classification error, assuming that the difference-image histogram derives from a mixture of two normal components (corresponding to change and no-change pixels). The second method considered [2] starts from the same normal-mixture assumption, but is based on an iterative estimation of the parameters of the two Gaussian components by means of the Expectation-Maximization (EM) algorithm; the Bayes rule for minimum error is then applied to select the decision threshold. The third method considered [3] makes use of fuzzy membership functions and of an entropy measure as a criterion function for optimal threshold selection. Finally, a recent investigation is presented which is based on two approaches: the former consists of applying thresholding to the data after projecting them along the Fisher direction; the latter applies EM to estimate the parameters of the two classes, under the Gaussian distribution hypothesis, directly in the multidimensional space of the multiband difference image. For experimental purposes, two Landsat TM images of an area affected by a forest fire, acquired before and after the event, are considered. The capability to reveal the burned zones is used to compare the above-mentioned unsupervised change-detection methods.

[1] J. Kittler and J. Illingworth, "Minimum error thresholding", Pattern Recognition, vol. 19, pp. 41-47, 1986.
[2] L. Bruzzone and D. F. Prieto, "Automatic analysis of the difference image for unsupervised change detection", IEEE Trans. on Geoscience and Remote Sensing, vol. 38, pp. 1171-1182, 2000.
[3] L. K. Huang and M. J. Wang, "Image thresholding by minimizing the measures of fuzziness", Pattern Recognition, vol. 28, pp. 41-51, 1995.
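
For concreteness, a minimal sketch of the EM-based method described above ([2]) under its Gaussian-mixture assumption: fit a two-component mixture to the per-pixel magnitudes of the difference image and place the decision threshold where the two weighted densities cross. The library calls and synthetic data are illustrative, not those of the cited paper.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic magnitudes of the difference vectors: many no-change pixels, a few changed ones
d = np.concatenate([np.abs(rng.normal(0.5, 0.2, 9000)),      # no-change
                    np.abs(rng.normal(2.0, 0.5, 1000))])      # change

gm = GaussianMixture(n_components=2, random_state=0).fit(d.reshape(-1, 1))
order = np.argsort(gm.means_.ravel())
w0, w1 = gm.weights_[order]
m0, m1 = gm.means_.ravel()[order]
s0, s1 = np.sqrt(gm.covariances_.ravel()[order])

# Bayes minimum-error threshold: the point between the means where the weighted densities are equal
f = lambda t: w0 * norm.pdf(t, m0, s0) - w1 * norm.pdf(t, m1, s1)
threshold = brentq(f, m0, m1)
print("threshold:", threshold, "fraction flagged as change:", np.mean(d > threshold))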



Eric D. Kolaczyk
Multiscale Probability Models -- Blending Wavelets, Recursive Partitioning, and Graphical Models

Wavelets, recursive partitioning, and graphical models represent three frameworks for modeling signals and images that have proven to be highly successful in a variety of application areas. Each of these frameworks has certain strengths and weaknesses, often complementary among them. In this talk I will present a framework for a certain class of multiscale probability models that simultaneously shares characteristics of all three of these frameworks. These models are grounded upon a common factorized form for the data likelihood into a product of components with localized information in position and scale -- similar to a wavelet decomposition.
In fact, such factorizations can be linked formally with a probabilistic analogue of a multiresolution analysis. Efficient algorithms for estimation and segmentation can be built upon these factorizations, with direct parallels to thresholding, partitioning, and probability propagation algorithms in the existing literature. Finally, estimators deriving from this framework can be shown to have 'near-optimality' properties similar to those now-classical results established for wavelets in the Gaussian signal-plus-noise model, but for other distributions as well, such as Poisson and multinomial measurements. Fundamental to our framework is a quite general notion of recursive partitioning in a data space. If time allows, I will demonstrate how this allows the same core modeling and algorithmic structures to be used in examples from areas as diverse as high-energy astrophysics and census geography.
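
For the Poisson case, the factorized likelihood mentioned above can be written down explicitly on a recursive dyadic partition: the total count carries a Poisson factor, and every split contributes a binomial factor describing how the parent count divides between its children. A short numerical check of this identity (a sketch only; the estimation and segmentation machinery of the talk is not shown, and the intensities are arbitrary):

import numpy as np
from scipy.stats import binom, poisson

rng = np.random.default_rng(0)
intensity = rng.uniform(1, 10, 8)          # lambda_i on 8 dyadic cells
counts = rng.poisson(intensity)

def splitting_loglik(x, lam):
    # Binomial "splitting" factors over the internal nodes of the dyadic tree
    if len(x) == 1:
        return 0.0
    half = len(x) // 2
    xl, xr = x[:half], x[half:]
    ll = binom.logpmf(xl.sum(), x.sum(), lam[:half].sum() / lam.sum())
    return ll + splitting_loglik(xl, lam[:half]) + splitting_loglik(xr, lam[half:])

direct = poisson.logpmf(counts, intensity).sum()
factorized = poisson.logpmf(counts.sum(), intensity.sum()) + splitting_loglik(counts, intensity)
print(direct, factorized)                  # identical up to floating-point error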



Frederic Truchetet
Image magnification with multiresolution analysis

An image magnification algorithm based on the decimated wavelet transform is presented. The magnification is based on the conservation of visual smoothness. Experimental results are given and show that the algorithm is robust. The preservation of the visual aspect is validated by the computation of a perceptual criterion.
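
As a crude baseline for wavelet-domain magnification (not the smoothness-preserving scheme of the talk), one can regard the image as the approximation band of a one-level decomposition and invert the transform with zero detail bands. The sketch below assumes the PyWavelets package; the wavelet choice and the factor 2, which roughly compensates the analysis normalisation, are illustrative.

import numpy as np
import pywt

def magnify_2x(image, wavelet="bior2.2"):
    # Naive x2 magnification: treat the image as approximation coefficients and
    # reconstruct with zero detail coefficients.
    a = image.astype(float)
    zeros = np.zeros_like(a)
    return 2.0 * pywt.idwt2((a, (zeros, zeros, zeros)), wavelet)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(img.shape, "->", magnify_2x(img).shape)   # roughly doubled in each dimension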



Jean-Michel Morel
Parameter-Free Analysis of Digital Images

I will address a central issue for applications of image processing: automation and parameter-free algorithms. According to the Gestalt school, our perception is able to build a geometric description of each image that ranks the most visible shapes (gestalts) and organizes these shapes hierarchically into wholes and parts. The attempts coming from image analysis (segmentation, edge detection, classification, etc.), however, depend on several parameters left to the human operator and cannot be automated. Another major difference between the phenomenology of perception and current algorithmics is the following: image analysis models are global. They are described by deterministic energy functionals, Gibbs energies or Bayesian formulations, and aim at a global explanation of the image. They moreover presuppose an a priori model, probabilistic or deterministic. Yet everything seems to indicate that our perception builds a "global explanation" only as a last resort, Gestalt perceptions being first of all specialized and partial. Neurophysiology corroborates this last point. Moreover, nothing indicates the physiological or phenomenological existence of quantitative a priori models, whereas the work of the Gestaltists shows the existence of qualitative models. I will describe a methodology for analyzing digital images that stems from the work of Agnès Desolneux and Lionel Moisan. From the application point of view, the main novelty of this method is the design of parameter-free image analysis algorithms that compute "partial gestalts" from qualitative hypotheses. Whenever possible, I will compare the results of these algorithms with those of variational algorithms, and I will give an idea of future research directions.
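
The "partial gestalts" of the Desolneux-Moisan approach rest on the a contrario principle: a grouping is accepted when its expected number of occurrences under a background noise model, the Number of False Alarms (NFA), falls below 1, which leaves no threshold to tune. A minimal sketch for the classical aligned-orientations test; the image size, quantisation and counts below are illustrative.

from scipy.stats import binom

def nfa(n_tests, k, l, p):
    # Expected number, among n_tests candidates, of those having at least k
    # aligned points out of l when each point is aligned with probability p.
    return n_tests * binom.sf(k - 1, l, p)

N = 512                            # image side
n_tests = float(N) ** 4            # every pair of pixels defines a candidate segment
p = 1.0 / 16.0                     # orientation quantised to 1/16 of a turn
print(nfa(n_tests, k=20, l=40, p=p))   # far below 1: a meaningful alignment
print(nfa(n_tests, k=5, l=40, p=p))    # far above 1: not meaningful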



Pierre Charbonnier
Image analysis for road scene analysis

In this talk, we focus on the analysis of road scenes. These are sequences of digital images of the road and its immediate environment, taken from vehicles at a typical rate of one image every five meters. They constitute huge image databases that are becoming an increasingly requested tool for road managers. Beyond simple visual interpretation, our goal is to extract information that can be useful in safety studies or for road management. Some tasks, such as locating objects of interest, are still performed by human operators and need to be at least partially automated. The goal is to detect and recognize manufactured objects such as road signs, safety elements and poles, and natural objects such as trees.
In a pre-detection step, we propose to analyze the apparent motion of objects throughout the sequence. Since the camera motion is roughly a forward travelling, objects appear, grow and then disappear from the scene, while the background remains nearly unchanged. With an appropriate choice of image features, it is possible to extract the statistics of changing objects from pairs of successive histograms. Areas of interest containing the objects are then defined using histogram back-projection techniques. The method is unsupervised and does not require any motion estimation.
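
A small sketch of histogram back-projection in the spirit of this pre-detection step (a Swain-Ballard style ratio histogram on a single feature channel); the feature, bin count and synthetic scene are illustrative, not those of the actual system.

import numpy as np

def backproject(image, model_hist, scene_hist, bins):
    # Back-project the ratio histogram model/scene onto the image: pixels whose
    # feature value is over-represented in the model region get high scores.
    ratio = np.minimum(model_hist / np.maximum(scene_hist, 1e-9), 1.0)
    idx = np.clip(np.digitize(image.ravel(), bins) - 1, 0, len(ratio) - 1)
    return ratio[idx].reshape(image.shape)

rng = np.random.default_rng(0)
scene = rng.normal(0.3, 0.1, (64, 64))                     # background feature values
scene[20:30, 20:30] = rng.normal(0.8, 0.05, (10, 10))      # an "object" patch

bins = np.linspace(0, 1, 33)
scene_hist, _ = np.histogram(scene, bins=bins, density=True)
model_hist, _ = np.histogram(scene[20:30, 20:30], bins=bins, density=True)
score = backproject(scene, model_hist, scene_hist, bins)

mask = np.zeros(scene.shape, dtype=bool); mask[20:30, 20:30] = True
print("mean score inside / outside the object:", score[mask].mean(), score[~mask].mean())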
In the second step, we aim to refine the detection and then to perform object recognition, using statistical appearance-based techniques. These are not robust to the outliers that occur due to occlusions or cluttered backgrounds. The robust approach we propose uses M-estimators within a continuation scheme. Using results from robust estimation and half-quadratic theory, we arrive at a simple least-squares (with modified residuals) algorithm. This scheme does not require any user interaction, provided all necessary parameters have previously been estimated during the learning step.
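
And a companion sketch of the half-quadratic idea above: with an M-estimator penalty, the minimisation reduces to iteratively reweighted least squares, each iteration being an ordinary weighted least-squares solve. The penalty phi(t) = sqrt(1 + t^2) - 1 and the toy line-fitting data are illustrative, not the appearance model of the talk.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, x.size)
y[::10] += 3.0                                     # gross outliers (e.g. occlusions)

A = np.column_stack([x, np.ones_like(x)])
theta = np.linalg.lstsq(A, y, rcond=None)[0]       # plain least squares as initialisation

delta = 0.1
for _ in range(20):                                # IRLS = half-quadratic minimisation
    r = y - A @ theta
    w = 1.0 / np.sqrt(1.0 + (r / delta) ** 2)      # weights from phi(t) = sqrt(1 + t^2) - 1
    sw = np.sqrt(w)
    theta = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
print("robust fit (slope, intercept):", theta)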
We show experimental results on both synthetic images and real-world image sequences. As a conclusion, we describe some open issues, such as the automatic detection of pavement defects.
Keywords: road scene analysis, content-based image retrieval, image sequence analysis, histogram back-projection, statistical learning, robust estimation, half-quadratic theory.



Hassan Foroosh
From Images to Virtual Cities

In this talk I will present some of my recent work on building 3D models of cities and urban areas from aerial images. The talk consists of three main sections:
- projective rectification
- Bayesian stereo
- reduction of perspective distortions

I will introduce a new method for rectifying stereo pairs that does not require any calibration information or any knowledge of the epipolar geometry. I then describe a Bayesian stereo technique that fuses information from both monocular and binocular vision in order to overcome the complexity of data in cluttered/dense urban areas. Finally I describe a model for reducing perspective distortions, which otherwise could introduce severe errors in the actual 3D models.
Time permitting, I will also give a quick overview of a multi-sensor platform that we have built at UC Berkeley for acquisition and construction of 3D city models at close range, which we plan to fuse with far-range stereo models.



Robert Nowak
Multiscale Likelihood Analysis and Inverse Problems in Imaging

Inverse reconstruction problems arise routinely in imaging applications. Reconstructing medical images from projection data and deblurring satellite imagery are two classical examples of linear inverse problems that I will discuss in this talk. Wavelet and multiscale analysis methods have resulted in major advances in image compression and denoising, but are more difficult to apply in inverse problems. Most wavelet-based approaches to linear inverse problems can be roughly categorized as: (1) linear inverse filtering followed by nonlinear denoising; (2) nonlinear wavelet denoising followed by linear inverse filtering. Unfortunately, in general neither of these approaches is entirely satisfactory. The noise is usually most easily modeled in the original observation (e.g., projection data), whereas the image is most naturally modeled in the reconstruction domain. Linear inverse filtering can lead to a very complicated noise in the reconstruction domain, and image structure can be very difficult to model in the observation domain.
This talk describes a multiscale framework for image reconstruction that combines the "best of both worlds". The noise is modeled in the observation domain and the image is represented with wavelets (or an analogous multiscale representation) in the reconstruction domain. Expectation-Maximization algorithms iterate between the observation and reconstruction domains to take full advantage of the noise and image structure, respectively. The theoretical underpinnings of this new approach include "multiscale likelihood factorizations", which generalize the notion of wavelet analysis to a broad class of noise models (including Poisson), and penalized likelihood estimation methods. I will describe the theory of multiscale likelihood factorizations, then show how these factorizations can be put to use in inverse imaging problems. Applications to medical tomography and satellite imaging will be presented.
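
For flavour, a bare-bones EM iteration for a Poisson linear inverse problem (the classical Richardson-Lucy update in one dimension); the multiscale penalisation step in the reconstruction domain, which is the point of the talk, is deliberately omitted, and the blur kernel and signal are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Forward model: y ~ Poisson(H x), with H a 1-D Gaussian blur
n = 128
x_true = np.zeros(n); x_true[40] = 200.0; x_true[70:90] = 30.0
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2); kernel /= kernel.sum()
H = lambda v: np.convolve(v, kernel, mode="same")          # symmetric kernel, so H^T = H
y = rng.poisson(H(x_true))

# EM (Richardson-Lucy): x <- x * H^T(y / Hx) / H^T(1)
x = np.full(n, float(y.mean()))
ht1 = H(np.ones(n))
for _ in range(200):
    x *= H(y / np.maximum(H(x), 1e-12)) / ht1
print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))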



Robin Morris
3-D Super-Resolution

The conventional approach to shape from stereo is via feature extraction and correspondences. This results in estimates of the camera parameters and a typically sparse estimate of the surface.
Given a set of calibrated images, a dense surface reconstruction is possible by minimizing the error between the observed image and the image rendered from the estimated surface, with respect to the surface model parameters.
Given an uncalibrated image and an estimated surface, the camera parameters can be estimated by minimizing the error between the observed and rendered images as a function of the camera parameters.
We use a very small set of matched features to provide camera parameter estimates for the initial dense surface estimate. We then re-estimate the camera parameters as described above, and then re-estimate the surface. This process is iterated. While it cannot be proven to converge, we have found that around three iterations yield excellent surface and camera parameter estimates.
Joint work with Peter Cheeseman and Vadim Smelyanskiy.



Charles Kervrann
Optimal Level Curves and Energy Minimization in Image Segmentation

Bayesian and variational image analysis provide elegant solutions to the image segmentation problem. They ultimately lead to the global minimization of a functional that generally decomposes into two distinct terms: a first contrast term measuring the quality of the approximation, and a second regularity term favouring the emergence of regions bounded by regular contours. Implementing Bayesian and variational methods can nevertheless prove difficult because of the computational load required to carry out the global minimization of non-convex energies.
This has led us to study energy functionals that are simpler than the models usually advocated. They are defined by regularization terms that mainly control the area of the objects to be constructed, combined with a log-likelihood model of the observations under a Gaussian assumption. To justify these energy terms, one can argue that the curves minimizing the underlying functionals coincide with level lines of the image, in a small and unknown number. These level lines, which have acquired the status of a central object in imaging and mathematical morphology, are equivalent to the boundaries of the connected components of the level sets of the image. The regularity models that are valid for this variational analysis can be related to the class of Markov models defined on the connected components of the image; they do not, however, make it possible to control the regularity of the region boundaries, which is often desired in a shape recognition context.
The complete segmentation algorithm performs a non-iterative selection of a subset of level lines, directly related to the global minimum of the segmentation energy. From a practical point of view, a simple entropy criterion gives encouraging results for constructing the level sets. Given the difficulty of exploring all the possible partitions built by assembling the listed connected components, which are sometimes too numerous, a deterministic optimization algorithm is proposed to ease the exploration of the configurations. This algorithm has the practical advantage of a computationally inexpensive implementation, but it cannot guarantee that the global minimum is always reached. Results on 2-D and 3-D confocal microscopy images in biology, and on MRI medical and meteorological images, demonstrate the ability of the approach to produce satisfactory segmentations with few objects. The advantages and limitations of the approach with respect to classical methods will be discussed.
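
As a very loose illustration of the "level sets plus a simple energy" idea (this is neither the functional nor the selection algorithm of the talk), one can label the connected components of a few level sets and score each candidate region with an area penalty plus a Gaussian data-fit term inside and outside; the weights, levels and test image are illustrative.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[16:48, 16:48] += 0.6                              # one bright object on a flat background

def energy(image, mask, nu=0.01):
    # Area penalty plus squared deviation from the mean inside and outside the region
    inside, outside = image[mask], image[~mask]
    e = nu * inside.size + np.sum((inside - inside.mean()) ** 2)
    if outside.size:
        e += np.sum((outside - outside.mean()) ** 2)
    return e

best = None
for level in np.quantile(img, [0.3, 0.5, 0.7, 0.9]):  # a few candidate level sets
    labels, n = ndimage.label(img >= level)           # their connected components
    for k in range(1, n + 1):
        e = energy(img, labels == k)
        if best is None or e < best[0]:
            best = (e, level, k, int(np.sum(labels == k)))
print("energy, level, component id, area:", best)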



Anil Kokaram
On missing data in pictures

The treatment of missing data is a common theme in much of the work on communications systems, where the solution is normally called "error concealment". The problem also exists in archived pictures, video and film and in this domain the solution is called "image restoration" or "image reconstruction". In the latter case, missing data is caused by the physical degradation of the film medium e.g. blotches (Dirt and Sparkle) or line scratches, or physical deterioration or manipulation of the photograph (abrasion, creases, folds). Whatever the domain, the task is to automatically detect the locations at which data is missing and then to interpolate convincing image material in these regions.
In recent years, the rise of Digital Television and DVD has put great demand on the holders of film and television archives to provide content for these digital media. In order to exploit their 'content', broadcasters and archives need to restore the images to a quality suitable for the consumer. Alongside this is the rise of low-bandwidth image communications and the whole issue of error resilience in the wireless domain, a problem exacerbated by the heavy image compression needed in such environments. Although these are quite different systems, the fundamental problem of missing data remains.
This talk takes a 'holistic' view of the effect of missing data in different systems, e.g. defects in still images, error resilience for video over wireless, and artefacts in archived film and video, and shows how basic underlying ideas can apply to many of these circumstances. The unifying viewpoint is, of course, Bayesian.
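
To fix ideas on the detection side, a hedged toy sketch: flag as missing the pixels of a frame that disagree strongly, with the same sign, with both the previous and the next frame (a crude temporal-discontinuity detector that ignores the motion compensation any real system needs), then fill the flagged pixels from the temporal average. Thresholds and the synthetic sequence are illustrative.

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.3, 0.7, (64, 64))               # an (almost) static scene
prev_f = scene + rng.normal(0, 0.01, scene.shape)
cur    = scene + rng.normal(0, 0.01, scene.shape)
next_f = scene + rng.normal(0, 0.01, scene.shape)

cur_degraded = cur.copy()
cur_degraded[30:34, 10:20] = 1.0                      # a blotch ("dirt") on the current frame

# Detection: large deviation from BOTH temporal neighbours, with the same sign
d1, d2 = cur_degraded - prev_f, cur_degraded - next_f
blotch = (np.abs(d1) > 0.2) & (np.abs(d2) > 0.2) & (np.sign(d1) == np.sign(d2))

# Interpolation: replace the flagged pixels by the temporal average (no motion here)
restored = cur_degraded.copy()
restored[blotch] = 0.5 * (prev_f + next_f)[blotch]
print("flagged pixels:", int(blotch.sum()), "max residual error:", float(np.abs(restored - cur).max()))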



Robert M. Gray
Gauss Mixture Vector Quantization for Classification and Compression

Gauss mixtures have gained popularity in statistics and statistical signal processing for a variety of reasons, including their ability to approximate well a large class of interesting densities and the availability of algorithms such as EM for constructing the models from observed data. In this talk a somewhat different justification and application is described, arising from the "worst case" role of the Gaussian source in rate-distortion theory [Sakrison, Lapidoth].
The talk will sketch ongoing work on a robust joint approach to modeling, compression code design, and classifier design that draws on ideas from Lloyd clustering, minimum discrimination information density estimation, Gauss mixture models, asymptotic (Bennett/Zador/Gersho) vector quantization approximations, and universal source coding. At this stage there are more conjectures than theorems or supporting results, but preliminary results do indicate that Gauss mixture models can play a useful role in coding and signal processing problems for information sources that are not well modeled by Gaussian or simple Gauss mixture densities. The approach also provides a natural extension to image compression and segmentation of the minimum discrimination information interpretation of LPC and CELP speech coding.
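
As a rough illustration of the compression/classification coupling (not the Lloyd-clustered design discussed in the talk), one can fit a Gauss mixture to training vectors and encode each input by the component minimising a penalised quadratic (Mahalanobis plus log-determinant) distortion, the component index doubling as a class label. The library, data and dimensions are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Training vectors, e.g. 2x2 image blocks from two different "texture" sources
a = rng.multivariate_normal([0, 0, 0, 0], 0.1 * np.eye(4), 500)
b = rng.multivariate_normal([1, 1, 1, 1], 0.3 * np.eye(4), 500)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(np.vstack([a, b]))

def encode(x):
    # Choose the component with minimal Mahalanobis + log-determinant cost;
    # the index serves both as codebook entry and as class label.
    costs = [(x - m) @ np.linalg.solve(c, x - m) + np.log(np.linalg.det(c))
             for m, c in zip(gmm.means_, gmm.covariances_)]
    return int(np.argmin(costs))

test = rng.multivariate_normal([1, 1, 1, 1], 0.3 * np.eye(4), 5)
print([encode(v) for v in test])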



Maurizio Falcone
An Adaptive Scheme for the Solution of the Eikonal Equation and Applications

The eikonal equation is of considerable importance in many applications, including optimal control, front propagation, geometrical optics and image processing. It is also a prototype for Hamilton-Jacobi equations H(Du) = 0 with convex Hamiltonians, since it exhibits several pathologies related to the lack of regularity of its solutions and to the existence of multiple a.e. solutions.
We present an adaptive scheme to solve the eikonal equation |Du(x)| = f(x) for x in a bounded open domain Omega of R^n.
We compute the numerical solution using a semi-Lagrangian (SL) scheme on unstructured grids ([SS]). The scheme is based on a two-step discretization: first we integrate along the characteristics with step h, then we project onto a regular triangulation Sigma of the domain Omega. Let Delta be the set of discretization parameters; we look for an approximate solution w and derive an a posteriori error estimate related to the residual. This error estimate is used as a refinement indicator. However, we also investigate another possibility which is strictly related to the Shape-from-Shading (SFS) problem with a vertical light source. In this case, we obtain a different a posteriori error estimate, based on the difference between the original image intensity I and the image intensity I' corresponding to the approximate solution. Finally, we will discuss the results of some numerical tests.
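
For reference, a basic first-order upwind iteration (fast-sweeping style, fixed grid) for |Du| = f with u = 0 at a source point; this is not the adaptive semi-Lagrangian scheme of the talk, only a simple baseline it could be compared against, and the grid and right-hand side are illustrative.

import numpy as np

def eikonal_sweep(f, source, h=1.0, n_sweeps=8):
    # Solve |Du| = f on a uniform grid with u(source) = 0, first-order upwind updates
    big = 1e12
    u = np.full(f.shape, big)
    u[source] = 0.0
    for _ in range(n_sweeps):                          # Gauss-Seidel sweeps
        for i in range(f.shape[0]):
            for j in range(f.shape[1]):
                if (i, j) == source:
                    continue
                a = min(u[i - 1, j] if i > 0 else big, u[i + 1, j] if i < f.shape[0] - 1 else big)
                b = min(u[i, j - 1] if j > 0 else big, u[i, j + 1] if j < f.shape[1] - 1 else big)
                if abs(a - b) >= f[i, j] * h:          # update from one direction only
                    cand = min(a, b) + f[i, j] * h
                else:                                  # two-directional quadratic update
                    cand = 0.5 * (a + b + np.sqrt(2.0 * (f[i, j] * h) ** 2 - (a - b) ** 2))
                u[i, j] = min(u[i, j], cand)
    return u

f = np.ones((41, 41))
u = eikonal_sweep(f, source=(20, 20))
print(u[20, 40], "~ exact distance 20 for f = 1")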



Fabrice Heitz
Registration and segmentation of volumetric medical images: statistical and topological constraints

In medical imaging, approaches integrating high-level models (anatomical or functional atlases) lead to a tight interleaving of the registration and segmentation problems for volumetric images. This talk presents ongoing work on this topic at LSIIT, in brain imaging, in collaboration with the Institut de Physique Biologique (CNRS, Hôpitaux Universitaires de Strasbourg). Several low-level approaches to the rigid and deformable registration of multimodal volumetric images (MRI / SPECT), incorporating statistical and/or topological constraints, are first described. Used in association with deterministic or probabilistic anatomical atlases, these approaches provide a robust answer to the image segmentation problem. Finally, we present an application of the developed tools to the diagnosis of epilepsy, within a protocol currently used in clinical routine at the Strasbourg University Hospital.



Julien Senegas
An MCMC Approach to Stereo Vision

The problem we are interested in is the following: using only the stereo pair, how can one quantify the uncertainty associated with computing the disparity?
This question is of great importance when the motion of an object has to be optimized in a three-dimensional environment while avoiding collisions. The quality of the disparity estimate strongly depends on the nature of the information contained in the stereo pair. Conditionally on this pair, the disparity therefore has a variability which, in a Bayesian framework, can be computed through its posterior distribution given the stereo pair. In this context, this posterior can be estimated by simulation, and the probability of any event can be computed by Monte Carlo methods.
From a practical point of view, two problems arise: the specification of the stochastic model, in particular the choice of the spatial structure, and the choice of a sampling algorithm. The model we use is oriented towards an application to medium-resolution stereoscopic systems, but we give some indications on how to generalize the approach.
For the simulations, we propose a new Markov chain sampling algorithm for the case of Gaussian prior models. The use of importance sampling techniques further allows the computation time to be reduced considerably. These methods are applied to the study of a stereo pair of SPOT images: computation of error probability maps and of confidence intervals for the disparity.
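
As a toy illustration of posterior sampling for the disparity (a single pixel, a Gaussian matching likelihood over a small window, a Gaussian prior standing in for the spatial model, and an independence Metropolis-Hastings sampler; none of this is the spatial model or the importance-sampling scheme of the talk, and all parameters are illustrative):

import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "scanlines": the right signal is the left one shifted by the true disparity
left = rng.normal(0, 1, 300)
true_disp = 7
right = np.roll(left, true_disp) + rng.normal(0, 0.3, left.size)

x, win, sigma = 150, 10, 0.3
prior_mean, prior_std = 5.0, 4.0               # stands in for the spatial prior of the real model

def log_lik(d):
    # Gaussian matching likelihood of disparity d at pixel x over a small window
    k = int(round(d))
    a = left[x - win: x + win]
    b = right[x + k - win: x + k + win]
    return -0.5 * np.sum((a - b) ** 2) / sigma ** 2

# Independence Metropolis-Hastings: propose from the prior, accept on the likelihood ratio
d, ll, samples = prior_mean, log_lik(prior_mean), []
for _ in range(5000):
    prop = rng.normal(prior_mean, prior_std)
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:
        d, ll = prop, ll_prop
    samples.append(d)
post = np.array(samples[1000:])
print("posterior mean:", post.mean(), "95% interval:", np.quantile(post, [0.025, 0.975]))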