Director of the Geostatistics Laboratory, Ecole des Mines de Paris
22 January | 10:30 | Room 003

ERCIM Fellow, Istituto di Elaborazione della Informazione - CNR, Pisa, Italy
26 January | 10:30 | Room 003

Département CERGA, Observatoire de la Côte d'Azur, Nice
12 February | 10:30 | Room 003

German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), Germany
12 March | 10:30 | Room 006

Technical Faculty, Bielefeld University, Germany
23 April | 10:30 | Room 003

Dept. of Biophysical and Electronic Engineering, University of Genoa, Italy
14 May | 10:30 | Room 006

Department of Mathematics and Statistics, Boston University (USA)
8 June | 14:30 | Room 003

Université de Bourgogne - Le2i, Laboratoire Génie Electrique et Informatique Industrielle
11 June | 14:30 | Room 003

CMLA, ENS Cachan
18 June | 10:30 | Room 006

Chargé de Recherche, LCPC, Laboratoire Régional des Ponts-et-Chaussées de Strasbourg
9 July | 10:30 | Room 006

Senior Research Scientist, Dept. of EECS, University of California, Berkeley (USA)
23 July | 14:30 | Room 006

Invited Professor, Ariana; Assistant Professor, Dept. of ECE, Rice University, Houston (USA)
25 July | 10:30 | Room 006

NASA Ames Research Center, Moffett Field, USA
7 September | 10:30 | Room 006

INRA - Biométrie, Jouy-en-Josas
17 September | 10:30 | Room 003

University of Dublin, Trinity College, Dublin
20 September | 14:00 | Room 003

Information Systems Lab, Department of Electrical Engineering, Stanford University
22 October | 10:30 | Room 006

Department of Mathematics, University of Rome "La Sapienza", Italy
29 October | 14:00 | Room 003

Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (LSIIT), CNRS, Université Strasbourg I
19 November | 10:30 | Room 003

Centre de Géostatistique, Ecole des Mines de Paris, Fontainebleau
17 December | 10:30 | Room 006
The Boolean model is a model from stochastic geometry that represents a
random superposition of independent objects. We will look more specifically
at:
- the inference of the model parameters, under different assumptions on the
objects (connectivity, convexity, boundedness);
- non-conditional and conditional simulation by means of birth-and-death
processes;
- the application of these models to the characterization of underground
petroleum reservoirs.
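As a toy illustration of the model itself (not of the inference or conditional-simulation methods discussed in the talk), the sketch below simulates a planar Boolean model with disc grains; the intensity and the exponential radius law are arbitrary choices made for the example.

    import numpy as np

    def boolean_model(size=256, intensity=2e-4, mean_radius=8.0, seed=None):
        """Non-conditional simulation of a 2-D Boolean model with disc grains.

        Germs follow a homogeneous Poisson process with the given intensity;
        each germ carries an independent exponential radius (an illustrative
        choice of grain law).  Returns the binary union of the discs."""
        rng = np.random.default_rng(seed)
        n_germs = rng.poisson(intensity * size * size)
        centers = rng.uniform(0, size, size=(n_germs, 2))
        radii = rng.exponential(mean_radius, size=n_germs)
        yy, xx = np.mgrid[0:size, 0:size]
        img = np.zeros((size, size), dtype=bool)
        for (cx, cy), r in zip(centers, radii):
            img |= (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        return img

    if __name__ == "__main__":
        realization = boolean_model(seed=0)
        # the empirical area fraction can be compared with the theoretical
        # value 1 - exp(-intensity * E[disc area]) used in parameter inference
        print("area fraction:", realization.mean())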
In this talk, I will present results of the work we have done with
Josiane Zerubia during my stay at INRIA as an ERCIM postdoctoral fellow.
Speckle noise, which is caused by the coherent addition of out-of-phase
reflections, is one of the main performance limiting factors in
Synthetic Aperture Radar (SAR) imagery.
Efficient statistical modeling of SAR images is a prerequisite for
developing successful speckle cancellation techniques. Traditionally,
due to the central limit theorem, it has been assumed that the amplitude
image is distributed according to a Rayleigh law. However, some
experimental data do not follow the Rayleigh law. The alternative models
that have been suggested in the literature either are largely empirical,
without strong theoretical justification, or are computationally
expensive. In this talk, we develop a generalised version of the
Rayleigh distribution based on the assumption that the real and
imaginary parts of the received signal follow an isotropic alpha-stable
law. We also present novel estimation methods based on negative order
statistics for model fitting. Our experimental results show that the new
model can describe a wide range of data (in particular urban area
images) which could not be described by the classical Rayleigh model or
other alternative models.
Time permitting, we will also introduce skewed stable fields for texture
modelling and present some results on image segmentation.
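As a rough illustration of the amplitude model (simulation only; the negative-order-statistics estimators of the talk are not reproduced, and scaling constants are omitted), the sketch below draws samples from an isotropic complex alpha-stable signal via its sub-Gaussian representation and returns the corresponding amplitudes; alpha = 2 recovers the classical Rayleigh case.

    import numpy as np
    from scipy.stats import levy_stable

    def isotropic_stable_amplitude(alpha, n=20_000, seed=0):
        """Amplitude of an isotropic complex alpha-stable signal, using the
        sub-Gaussian construction z = sqrt(a) * (g1 + i*g2), where `a` is a
        totally skewed positive (alpha/2)-stable variable and g1, g2 are
        i.i.d. Gaussian.  For alpha = 2 the amplitude is Rayleigh."""
        rng = np.random.default_rng(seed)
        if alpha == 2:
            a = np.ones(n)
        else:
            a = levy_stable.rvs(alpha / 2, 1.0, size=n, random_state=rng)
            a = np.abs(a)  # guard against tiny negative values from the sampler
        g = rng.normal(size=(n, 2))
        return np.sqrt(a) * np.hypot(g[:, 0], g[:, 1])

    # heavier tails than Rayleigh, as observed on urban-area SAR data
    print(np.percentile(isotropic_stable_amplitude(1.5), 99),
          np.percentile(isotropic_stable_amplitude(2.0), 99))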
Images of the sky most often show diffuse, hierarchically organized
structures. The wavelet transform has therefore proved particularly well
suited to their compression. Its ability to concentrate the useful
information in a small number of coefficients has led to the use of the
wavelet transform for denoising and deconvolving astronomical images.
To avoid the aliasing effects linked to the subsampling of the discrete
transform, we have exploited the properties of the à trous algorithm. The
general strategy is based on selecting the coefficients that are
statistically different from 0. Since this choice depends on the nature of
the noise, we have examined the cases of Gaussian noise, Poisson noise at
low and high event counts, combined Gaussian and Poisson noise, and
Rayleigh and exponential noise. Once the coefficients have been selected,
we have proposed several reconstruction strategies: a selective attenuation
of the coefficients, an iterative process based on the notion of a
significant residual, and the introduction of a regularization constraint.
After presenting the different aspects of these methods, we will show
various applications in astronomy, remote sensing and medical imaging.
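As a minimal sketch of the à trous (undecimated) wavelet transform underlying these methods (the noise-dependent significance tests and the iterative or regularized reconstructions of the talk are not reproduced), assuming the usual B3-spline scaling kernel:

    import numpy as np
    from scipy.ndimage import convolve1d

    # 1-D B3-spline kernel commonly used with the "a trous" algorithm
    _H = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

    def a_trous(image, n_scales=4):
        """Undecimated (a trous) wavelet transform of a 2-D image.

        Returns [w_1, ..., w_J, c_J]: the detail planes plus the final smooth
        plane.  No subsampling is performed, so every plane has the size of
        the input and the transform is free of aliasing."""
        c = np.asarray(image, dtype=float)
        planes = []
        for j in range(n_scales):
            # dilate the kernel by inserting 2**j - 1 zeros between its taps
            h = np.zeros(4 * 2**j + 1)
            h[::2**j] = _H
            smooth = convolve1d(convolve1d(c, h, axis=0, mode='reflect'),
                                h, axis=1, mode='reflect')
            planes.append(c - smooth)   # wavelet (detail) plane at scale j
            c = smooth
        planes.append(c)                # coarsest smooth plane
        return planes

    def reconstruct(planes):
        """Exact reconstruction: the sum of all planes."""
        return np.sum(planes, axis=0)

Denoising then amounts to zeroing or attenuating, in each detail plane, the coefficients that are not statistically significant for the noise model at hand, before summing the planes back.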
Information technology is developing very fast, and the role of image
communication, exploration of distributed picture archives, internet search
engines, visualization and other related technologies is growing
accordingly.
Data grouping is one of the most important methodologies for exploring data
content. However, data clustering is still an open field: many methods have
been explored, but the possible complexity of the data structures can be
very high, which makes it difficult to find general solutions.
The presentation focuses on Bayesian methods for grouping, applied to
exploring patterns in heterogeneous data such as image archives.
After a short overview, a new method is introduced. The method is based on
a hierarchical Bayesian modeling of the information content of images, and
consists of two processing steps:
i) an unsupervised classification, followed by
ii) an interactive learning procedure.
The learning step is implemented using a Bayesian network.
The method is exemplified for applications in image classification,
data fusion, and image information mining.
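As a rough, hypothetical sketch of the two-step scheme (the clustering model, feature vectors, labels and the simple conditional-probability table below are illustrative stand-ins; the talk's hierarchical Bayesian image model and Bayesian-network learning are not reproduced):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # toy per-tile feature vectors standing in for the image content model
    rng = np.random.default_rng(0)
    features = np.vstack([rng.normal(m, 0.5, size=(200, 4)) for m in (0.0, 2.0, 4.0)])

    # step i) unsupervised classification of the archive content
    gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
    clusters = gmm.predict(features)

    # step ii) interactive learning: a few user-labelled tiles define
    # P(label | cluster) with Laplace smoothing (a crude stand-in for the
    # Bayesian-network learning step)
    user_idx = np.array([0, 5, 210, 215, 410, 415])
    user_lab = np.array(["water", "water", "urban", "urban", "forest", "forest"])
    labels = np.unique(user_lab)
    counts = np.ones((gmm.n_components, labels.size))
    for i, lab in zip(user_idx, user_lab):
        counts[clusters[i], np.where(labels == lab)[0][0]] += 1
    p_label_given_cluster = counts / counts.sum(axis=1, keepdims=True)

    # query: most probable semantic label for every tile in the archive
    print(labels[np.argmax(p_label_given_cluster[clusters], axis=1)][:10])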
A common feature of computer vision systems is the use of levels of
increasing abstraction to represent intermediate results, thus successively
bridging the gap between raw image data and the final result. To build such
a hierarchical representation we propose a contour-based grouping hierarchy
founded on principles of perceptual organization.
Exploiting regularities in the image, we aim at enhancing
efficiency and robustness of subsequent processing steps of an
image analysis system by reducing ambiguities in the intermediate
representation and by realizing image primitives at a higher level
of abstraction. To this end, grouping hypotheses are first generated
within the hierarchy using local evaluation strategies, which are
motivated by different Gestalt principles. Since this generation
is based on local evidence only, the hypotheses have to be judged
in a global context. We employ a Markov Random Field to model
context dependencies and energy minimization yields a consistent
interpretation of image data with groupings from the hierarchy.
Since this grouping hierarchy is contour-based, it inherits the
drawbacks of contour segmentation. Therefore, the second issue we
address aims at the integration of cues from region segmentation
into the contour-based grouping process and vice versa.
This integration is done at the level of grouping hypotheses and complete
regions, which support and constrain each other.
Finally, this perceptual grouping scheme, together with the integration of
contour grouping and region information, enhances object recognition.
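A schematic form of the energy minimized in this step may help fix ideas (the notation is illustrative, not the exact potentials of the talk): writing x_i in {0, 1} for the acceptance of grouping hypothesis i,

    \begin{equation*}
      E(x) \;=\; \sum_{i} U_i(x_i) \;+\; \sum_{(i,j)} V_{ij}(x_i, x_j),
    \end{equation*}

where U_i encodes the local, Gestalt-based evaluation of hypothesis i and V_{ij} penalizes incompatible or conflicting hypotheses in the neighbourhood graph of the Markov Random Field; a consistent interpretation of the image corresponds to a configuration x of minimal energy.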
For several applications (such as disaster management and the assessment of
land erosion, deforestation, urban growth and crop development), remote
sensing images can make a valuable contribution, in particular through
techniques able to reveal the changes that have occurred in a given study
area. The difficulty of collecting ground-truth information regularly over
time makes it important to develop unsupervised change-detection techniques
to assist in the analysis of temporal sequences of remote sensing images.
An unsupervised change detection problem can be viewed as a
classification problem with only two classes corresponding to the change
and no-change areas, respectively. A possible approach to solve this
problem consists of the computation of the difference between the
multiband images acquired at two different times, followed by the
application of a simple thresholding procedure to the length of the
difference vector (computed on a pixel basis). However, the selection of
the best threshold value is not a trivial problem.
In the seminar, several thresholding methods will be considered and
compared. The minimum-error thresholding method [1] suggests computing the
threshold that minimizes the average pixel classification error, assuming
that the histogram of the difference image derives from a mixture of
two normal components (corresponding to change and no-change pixels).
The second considered method [2] starts from the same assumption of
normal mixture, but is based on an iterative estimation of the
parameters of the two Gaussian components by means of the
Expectation-Maximization algorithm (EM); the Bayes rule for minimum
error is then applied to select the decision threshold. The third
considered method [3] makes use of fuzzy membership functions and of an
entropy measure as a criterion function for optimal threshold selection.
Finally, a recent investigation is proposed which is based on two
approaches: the former consists of the application of thresholding to
data after projecting them along the Fisher direction; the latter is
based on the application of EM to estimate the parameters of the two
classes, under the Gaussian distribution hypothesis, directly in the
multidimensional space of the multiband difference image. For
experimental purposes, two Landsat TM images of an area affected by a
forest fire are considered, which were acquired before and after the
event. The capability of revealing the burned zones is used to compare
the above-mentioned unsupervised change-detection methods.
[1] J. Kittler and J. Illingworth, "Minimum error thresholding", Pattern
Recognition, vol. 19, pp. 41-47, 1986.
[2] L. Bruzzone and D. F. Prieto, "Automatic analysis of the difference
image for unsupervised change detection", IEEE Trans. on Geoscience and
Remote Sensing, vol. 38, pp. 1171-1182, 2000.
[3] L. K. Huang and M. J. Wang, "Image thresholding by minimizing the
measures of fuzziness", Pattern Recognition, vol. 28, pp. 41-51, 1995.
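A minimal sketch of threshold selection in the spirit of the EM-based method [2] (the mixture fit below uses a generic EM implementation and an illustrative grid search for the Bayes decision point; it is not the authors' code):

    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    def em_change_threshold(diff_magnitude):
        """Decision threshold on the magnitude of the multiband difference
        image: fit a two-component Gaussian mixture (no-change / change) by
        EM and apply the Bayes minimum-error rule."""
        x = diff_magnitude.ravel()[:, None]
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
        means = gmm.means_.ravel()
        stds = np.sqrt(gmm.covariances_.ravel())
        w = gmm.weights_
        lo, hi = np.argsort(means)              # no-change, change components
        grid = np.linspace(x.min(), x.max(), 2048)
        p_nc = w[lo] * norm.pdf(grid, means[lo], stds[lo])
        p_c = w[hi] * norm.pdf(grid, means[hi], stds[hi])
        mask = grid > means[lo]                 # search between the two modes
        return grid[mask][np.argmax(p_c[mask] > p_nc[mask])]

    # usage, for two co-registered multiband images im1, im2 (H x W x B):
    #   magnitude = np.linalg.norm(im2.astype(float) - im1.astype(float), axis=-1)
    #   change_map = magnitude > em_change_threshold(magnitude)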
Wavelets, recursive partitioning, and graphical models represent three
frameworks for modeling signals and images that have proven to be highly
successful in a variety of application areas. Each of these frameworks
has certain strengths and weaknesses, often complementary among them.
In this talk I will present a framework for a certain class of multiscale
probability models that simultaneously shares characteristics of all
three of these frameworks. These models are grounded upon a common
factorization of the data likelihood into a product of components
with information localized in position and scale -- similar to a wavelet
decomposition.
In fact, such factorizations can be linked formally with a probabilistic
analogue of a multiresolution analysis. Efficient algorithms for
estimation and segmentation can be built upon these factorizations, with
direct parallels to thresholding, partitioning, and probability
propagation algorithms in the existing literature. Finally, estimators
deriving from this framework can be shown to have `near-optimality'
properties similar to those now-classical results established for
wavelets in the Gaussian signal-plus-noise model, but for other
distributions as well, such as Poisson and multinomial measurements.
Fundamental to our framework is a quite general notion of recursive
partitioning in a data space. If time allows, I will demonstrate how
this allows for the same core modeling and algorithmic structures to be
used in examples from areas as diverse as high-energy astrophysics and
census geography.
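A standard example of such a factorization, in the Poisson case, may make the construction concrete (notation is generic): for counts X_I attached to the cells I of a recursive dyadic partition, with each parent cell split into children I_l and I_r,

    \begin{equation*}
      p(X \mid \lambda)
        \;=\; \mathrm{Poisson}\!\left(X_{I_0};\, \lambda_{I_0}\right)
              \prod_{I} \mathrm{Binomial}\!\left(X_{I_l};\, X_{I},\, \theta_I\right),
      \qquad \theta_I \;=\; \frac{\lambda_{I_l}}{\lambda_{I}},
    \end{equation*}

where I_0 is the root cell and the product runs over the internal nodes of the partition. Each binomial factor carries information localized in position and scale, playing a role analogous to a wavelet coefficient; thresholding or penalizing the parameters theta_I yields estimation and segmentation algorithms of the kind described above.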
An image magnification algorithm based on the decimated wavelet
transform is presented. The magnification is based on the
conservation of the visual smoothness. Experimental results are given
and show that the algorithm is robust. The conservation of the visual
aspect is validated by the computation of a perceptual criterion.
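For orientation, here is the basic wavelet zero-padding idea that such magnification schemes build on (a hedged sketch only: the smoothness-preserving coefficient handling and the perceptual validation of the talk are not reproduced; the wavelet choice and the grey-level rescaling are illustrative):

    import numpy as np
    import pywt

    def wavelet_zoom(image, wavelet="bior4.4"):
        """2x magnification by wavelet zero-padding: treat the input image as
        the approximation band of a one-level decimated wavelet transform and
        invert the transform with the three detail bands set to zero."""
        a = np.asarray(image, dtype=float)
        zoomed = pywt.idwt2((a, (None, None, None)), wavelet, mode="periodization")
        return 2.0 * zoomed  # approximately restores the original grey level

    # usage:  big = wavelet_zoom(small_img)   ->  roughly a (2H, 2W) image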
I will address a central issue for applications of image processing:
automation, and parameter-free algorithms.
According to the Gestalt school, our perception is able to build a
geometric description of each image that ranks the most visible forms
(gestalts) and organizes these forms hierarchically into wholes and parts.
Yet the approaches stemming from image analysis (segmentation, edge
detection, classification, etc.) depend on several parameters left to the
human operator and cannot be automated.
Another major difference between the phenomenology of perception and
current algorithms is the following: image analysis models are global.
They are described by deterministic energy functionals, Gibbs energies or
Bayesian formulations, and aim at a global explanation of the image.
Moreover, they presuppose an a priori model, probabilistic or
deterministic. Everything seems to indicate, however, that our perception
builds a "global explanation" only as a last resort, gestalt perceptions
being first of all specialized and partial. Neurophysiology corroborates
this last point. Furthermore, nothing indicates the physiological or
phenomenological existence of quantitative a priori models, whereas the
work of the Gestaltists shows the existence of qualitative models.
I will describe a methodology for analysing digital images stemming from
the work of Agnès Desolneux and Lionel Moisan. From an applicative point of
view, the great novelty of this method is the design of parameter-free
image analysis algorithms that compute "partial gestalts" from qualitative
hypotheses. I will compare, whenever possible, the results of these
algorithms with those of variational algorithms, and I will give an idea of
future research.
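As one concrete example of such a parameter-free detector in this a-contrario framework (the alignment detector of Desolneux, Moisan and Morel, quoted here for orientation): a segment of length l containing k points whose gradient orientation agrees with the segment direction up to precision p is declared epsilon-meaningful when

    \begin{equation*}
      \mathrm{NFA}(l,k) \;=\; N^{4} \sum_{i=k}^{l} \binom{l}{i}\, p^{i} (1-p)^{\,l-i}
      \;\le\; \varepsilon ,
    \end{equation*}

where N^4 bounds the number of segments tested in an N x N image. The only remaining parameter, epsilon, controls the expected number of false detections in noise and is conventionally set to 1, which is what makes the detector effectively parameter-free.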
The subject of this talk is the analysis of sequences of digital images of
the road and its close surroundings, taken from vehicles, at a rate of, for
example, one image every five metres. The large image databases built in
this way are a tool increasingly requested by road network managers. Beyond
simple visual interpretation, our goal is to extract information useful for
safety studies or asset management. Some tasks, such as the inventory of
objects of interest, need to be at least partially automated. The aim is to
detect, and then recognize, manufactured objects such as road signs, safety
equipment and posts, and natural objects such as trees.
In a pre-detection phase, we propose to analyse the apparent motion of the
images in the sequence. Since the camera performs a forward tracking shot,
objects appear, grow and then leave the scene, while the background varies
little. Thanks to an appropriate choice of image measurements, it is
possible to isolate the statistical attributes of the changing objects from
pairs of successive histograms. By back-projecting these statistics into
the images, regions of interest can be defined around the objects. The
method is unsupervised and requires neither motion estimation nor motion
compensation.
In a second step, we propose to refine these detections and then to
recognize the detected objects using statistical appearance-based
techniques. These are not robust to the erroneous data often observed in
the presence of occlusions and textured backgrounds. The robust approach we
propose uses M-estimators with a continuation scheme. Results from robust
estimation and half-quadratic theory lead to a simple algorithm based on
least squares with a modified expression of the residuals. This scheme
involves no user interaction, since all parameters are set during the
training stage.
We show results on synthetic images and on real image sequences. As a
conclusion, we describe some open problems, such as the automatic detection
of pavement defects.
Keywords: road scene analysis, content-based image indexing, image sequence
analysis, histogram back-projection, statistical learning, robust
estimation, half-quadratic theory, object recognition.
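As a crude, hypothetical sketch of the histogram back-projection step described above (the actual image measurements and object statistics of the talk are richer; grey levels are used here only to keep the example short):

    import numpy as np

    def backproject_change(prev, curr, bins=64, value_range=(0.0, 255.0)):
        """Compare histograms of a scalar image measurement (here the grey
        level) between two successive frames; bins gaining probability mass
        are attributed to appearing/growing objects, and their excess is
        back-projected into the current frame as a region-of-interest map."""
        h_prev, edges = np.histogram(prev, bins=bins, range=value_range, density=True)
        h_curr, _ = np.histogram(curr, bins=bins, range=value_range, density=True)
        excess = np.clip(h_curr - h_prev, 0.0, None)
        idx = np.clip(np.digitize(curr, edges) - 1, 0, bins - 1)
        return excess[idx]                       # per-pixel interest map

    # usage:  roi = backproject_change(frame_t.astype(float), frame_t1.astype(float))
    #         candidate regions = connected components of (roi > small threshold)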
In this talk I will present some of my recent work on building 3D models
of cities/urban areas using aerial images. The talk consists of three
main sections:
- projective rectification
- Bayesian stereo
- reduction of perspective distortions
I will introduce a new method for rectifying stereo pairs that does not
require any calibration information or any knowledge of the epipolar
geometry. I then describe a Bayesian stereo technique that fuses
information from both monocular and binocular vision in order to
overcome the complexity of data in cluttered/dense urban areas. Finally
I describe a model for reducing perspective distortions, which otherwise
could introduce severe errors in the actual 3D models.
Time permitting, I will also give a quick overview of a multi-sensor
platform that we have built at UC Berkeley for acquisition and
construction of 3D city models at close range, which we plan to fuse
with far-range stereo models.
Inverse reconstruction problems arise routinely in imaging
applications. Reconstructing medical images from projection data and
deblurring satellite imagery are two classical examples of linear
inverse problems that I will discuss in this talk. Wavelet and
multiscale analysis methods have resulted in major advances in image
compression and denoising, but are more difficult to apply in inverse
problems. Most wavelet-based approaches to linear inverse problems
can be roughly categorized as: (1) linear inverse filtering followed
by nonlinear denoising; (2) nonlinear wavelet denoising followed by
linear inverse filtering. Unfortunately, in general neither of these
approaches is entirely satisfactory. The noise is usually most easily
modeled in the original observation (e.g., projection data), whereas
the image is most naturally modeled in the reconstruction domain.
Linear inverse filtering can lead to a very complicated noise in the
reconstruction domain, and image structure can be very difficult to
model in the observation domain.
This talk describes a multiscale framework for image reconstruction
that combines the "best of both worlds". The noise is modeled in the
observation domain and the image is represented with wavelets (or
analogous multiscale representation) in the reconstruction domain.
Expectation-Maximization algorithms iterate between observation and
reconstruction domains to take full advantage of the noise and image
structure, respectively. The theoretical underpinnings of this new
approach include "multiscale likelihood factorizations," which
generalize the notion of wavelet analysis to a broad class of noise
models (including Poisson), and penalized likelihood estimation
methods. I will describe the theory of multiscale likelihood
factorizations, then show how these factorizations can be put to use
in inverse imaging problems. Applications to medical tomography and
satellite imaging will be presented.
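A much-simplified caricature of the iterate-between-domains idea, for the Gaussian-noise deconvolution case only (the Poisson and tomographic settings of the talk rely on the multiscale likelihood factorizations rather than plain wavelet thresholding; the PSF handling, step size, threshold and wavelet below are arbitrary illustrative choices):

    import numpy as np
    import pywt

    def em_wavelet_deblur(y, psf, n_iter=50, thresh=0.02, wavelet="db4"):
        """Alternate between the observation domain (a Landweber/EM-type step
        against the blurred data) and the reconstruction domain (wavelet
        soft-thresholding).  `psf` is a full-size point-spread function
        centred at pixel (0, 0)."""
        H = np.fft.fft2(psf)
        step = 1.0 / np.max(np.abs(H)) ** 2
        x = y.astype(float).copy()
        for _ in range(n_iter):
            resid = y - np.real(np.fft.ifft2(H * np.fft.fft2(x)))
            z = x + step * np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(resid)))
            coeffs = pywt.wavedec2(z, wavelet, level=3)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
                for band in coeffs[1:]
            ]
            x = pywt.waverec2(coeffs, wavelet)[: y.shape[0], : y.shape[1]]
        return x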
The conventional approach to shape from stereo is via feature
extraction and correspondences. This results in estimates of the
camera parameters and a typically sparse estimate of the surface.
Given a set of calibrated images, a dense surface reconstruction is
possible by minimizing the error between the observed image and the image
rendered from the estimated surface, with respect to the surface model
parameters.
Given an uncalibrated image and an estimated surface, the camera
parameters can be estimated by minimizing the error between the
observed and rendered images as a function of the camera parameters.
We use a very small set of matched features to provide camera
parameter estimates for the initial dense surface estimate.
We then re-estimate the camera parameters as described above, and then
re-estimate the surface. This process is iterated. While it cannot
be proven to converge, we have found that around three iterations
result in excellent surface and camera parameter estimates.
Joint work with Peter Cheeseman and Vadim Smelyanskiy.
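Schematically, the alternation described above can be written as follows (the notation is illustrative: S denotes the surface model parameters, theta_k the camera parameters of image k, R(S, theta_k) the rendered image and I_k the observed image):

    \begin{align*}
      S^{(t+1)} &= \arg\min_{S}\; \sum_{k} \bigl\| I_k - R\!\left(S, \theta_k^{(t)}\right) \bigr\|^{2}, \\
      \theta_k^{(t+1)} &= \arg\min_{\theta_k}\; \bigl\| I_k - R\!\left(S^{(t+1)}, \theta_k\right) \bigr\|^{2},
      \qquad k = 1,\dots,K,
    \end{align*}

started from the initial estimates provided by the very small set of matched features, and iterated a few times (around three in practice).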
Bayesian and variational image analysis provide elegant solutions to the
image segmentation problem. They ultimately lead to the formulation of a
global minimization problem for a functional that generally decomposes into
two distinct terms: a first, contrast term measuring the quality of the
approximation, and a second, regularity term favouring the emergence of
regions bounded by regular edges. Implementing Bayesian and variational
methods can, however, prove difficult because of the computational load
required to carry out the global minimization of non-convex energies.
This has led us to study energy functionals simpler than the models usually
advocated. They are obtained by choosing regularization terms that mainly
control the area of the objects to be constructed, combined with a
log-likelihood model of the observations under a Gaussian hypothesis. To
justify the use of these energy terms, one can argue that the curves
minimizing the underlying functionals coincide with level lines of the
image, in a small and unknown number. These level lines, which have become
a central object in imaging and mathematical morphology, are equivalent to
the boundaries of the connected components of the level sets of the image.
The regularity models valid for this variational analysis can be related to
the class of Markovian models on connected components of the image. They do
not, however, allow one to control the regularity of the region boundaries,
which is often desired in a shape recognition context.
The complete segmentation algorithm performs here a non-iterative selection
of a subset of level lines, directly related to the global minimum of the
segmentation energy. From a practical point of view, a simple entropy
criterion gives encouraging results for constructing the level sets. Given
the difficulties encountered in exploring all the possible partitions built
by assembling the listed connected components, which are sometimes too
numerous, a deterministic optimization algorithm is proposed to ease the
exploration of the configurations. This algorithm has the practical
advantage of a computationally cheap implementation, but cannot guarantee
that the global minimum is always reached.
Results on 2-D and 3-D confocal microscopy images in biology and on MRI
medical and meteorological images demonstrate the ability of the approach
to produce satisfactory segmentations with few objects. The advantages and
limitations/drawbacks of the approach with respect to classical methods
will be discussed.
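Schematically, and with purely illustrative notation, the simplified energies discussed above take the form

    \begin{equation*}
      E\bigl(\{\Omega_i\},\{\mu_i\}\bigr)
        \;=\; \sum_{i} \int_{\Omega_i} \bigl(u_0(x) - \mu_i\bigr)^{2}\, dx
        \;+\; \lambda \sum_{i \in \text{objects}} \lvert \Omega_i \rvert ,
    \end{equation*}

with a Gaussian log-likelihood (contrast) term over the regions Omega_i of piecewise-constant value mu_i, and a regularization term penalizing essentially the area of the selected objects rather than the length of their boundaries; the minimizing region boundaries are then sought among the level lines of the image u_0.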
The treatment of missing data is a common theme in much of the work on
communications systems, where the solution is normally called "error
concealment". The problem also exists in archived pictures, video and
film and in this domain the solution is called "image restoration" or
"image reconstruction". In the latter case, missing data is caused by
the physical degradation of the film medium e.g. blotches (Dirt and
Sparkle) or line scratches, or physical deterioration or manipulation of
the photograph (abrasion, creases, folds). Whatever the domain, the task
is to automatically detect the locations at which data is missing and
then to interpolate convincing image material in these regions.
In recent years, the rise of Digital Television and DVD has put a great
demand on the holders of both film and television archives to provide
content for these digital media. In order to exploit their 'content',
broadcasters and archives need to restore the images to a quality
suitable for the consumer. Alongside this comes the rise of low-bandwidth
image communications and the whole issue of error resilience
in the wireless domain. This problem is exacerbated by the heavy image
compression needed in such environments. Although these are quite
different systems, the fundamental problem of missing data remains.
This talk takes a 'holistic' view of the effect of missing data in
different systems e.g. defects in still images, error resilience for
video over wireless and artefacts in archived film and video; and shows
how basic, underlying ideas can apply to many of these circumstances.
The unifying viewpoint is, of course, Bayesian.
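As a toy illustration of the detect-then-interpolate pipeline in the archived-film case (illustration only: the talk relies on Bayesian detectors and interpolators, not on the simple temporal test and off-the-shelf spatial inpainting used below):

    import cv2
    import numpy as np

    def conceal_blotches(prev_frame, curr_frame, next_frame, t=30):
        """Flag a pixel as missing when it disagrees strongly with BOTH of its
        temporal neighbours (the classic signature of dirt and sparkle), then
        fill the flagged regions by spatial inpainting.  Frames are greyscale
        uint8 images."""
        c = curr_frame.astype(np.int16)
        d_prev = np.abs(c - prev_frame.astype(np.int16))
        d_next = np.abs(c - next_frame.astype(np.int16))
        mask = ((d_prev > t) & (d_next > t)).astype(np.uint8) * 255
        mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))   # close small gaps
        restored = cv2.inpaint(curr_frame, mask, 3, cv2.INPAINT_TELEA)
        return restored, mask

    # usage: restored, blotch_mask = conceal_blotches(f0, f1, f2)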
Gauss mixtures have gained popularity in statistics and statistical
signal processing applications for a variety of reasons, including their
ability to well approximate a large class of interesting densities and
the availability of algorithms such as EM for constructing the models
based on observed data. In this talk a somewhat different justification
and application is described, arising from the "worst case" role of the
Gaussian source in rate-distortion theory [Sakrison, Lapidoth].
The talk will sketch ongoing work on a robust joint approach to
modeling,
compression code design, and classifier design that draws on ideas from
Lloyd clustering, minimum discrimination information density estimation,
Gauss mixture models, asymptotic (Bennett/Zador/Gersho) vector
quantization approximations, and universal source coding. At this stage
there are more conjectures than theorems or supporting results, but
preliminary results do point out that Gauss mixture models can play a
useful role in coding and signal processing problems for information
sources that are not well modeled by Gaussian or simple Gauss mixture
densities. The approach also provides a natural extension, to image
compression and segmentation, of the minimum-discrimination
interpretation of LPC and CELP speech coding.
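A minimal sketch of Gauss mixture models used as likelihood-based classifiers (one mixture per class, fitted by EM; the joint Lloyd-clustered compression/classification design and the minimum-discrimination robustness arguments of the talk are not reproduced here):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    class GMMClassifier:
        """One Gauss mixture per class; feature vectors (e.g. image blocks)
        are assigned to the class whose mixture gives the highest
        log-likelihood."""
        def __init__(self, n_components=4):
            self.n_components = n_components
            self.models = {}

        def fit(self, X, y):
            for label in np.unique(y):
                self.models[label] = GaussianMixture(
                    self.n_components, covariance_type="full", random_state=0
                ).fit(X[y == label])
            return self

        def predict(self, X):
            labels = list(self.models)
            scores = np.column_stack(
                [self.models[l].score_samples(X) for l in labels])
            return np.asarray(labels)[np.argmax(scores, axis=1)]

    # usage: clf = GMMClassifier().fit(train_blocks, train_labels)
    #        predicted = clf.predict(test_blocks)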
The eikonal equation plays an important role in many applications,
including optimal control, front propagation, geometrical optics and
image processing. It is also a prototype for Hamilton-Jacobi equations
H(Du)=0 with convex Hamiltonians, since it exhibits several pathologies
related to the lack of regularity of its solutions and to the
existence of multiple a.e. solutions.
We present an adaptive scheme to solve the eikonal equation
|Du(x)| = f(x) for x in a bounded open domain Omega of R^n.
We compute the numerical solution using a semi-Lagrangian (SL) scheme
on unstructured grids ([SS]). The scheme is based on a two step
discretization, first we integrate along the characteristic with step
h, then we project on a regular triangulation Sigma of the domain
Omega.
Let Delta be the set of discretization parameters; we look for an
approximate solution w and derive an a-posteriori error estimate related to
the residual. This error estimate is used as a refinement indicator.
However, we also investigate another possibility which is strictly related
to the Shape-from-Shading (SFS) problem with a vertical light source. In
this case, we obtain a different a-posteriori error estimate based on the
difference between the original image intensity I and the image intensity
I' corresponding to the approximate solution. Finally, we will discuss
results on some numerical tests.
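For orientation, one standard form of a semi-Lagrangian fixed-point scheme for the eikonal equation (the precise adaptive, unstructured-grid scheme and its error estimators are those of the talk and of [SS]) reads

    \begin{equation*}
      w(x_i) \;=\; \min_{a \in B(0,1)} I_1[w]\bigl(x_i - h\,a\bigr) \;+\; h\, f(x_i),
      \qquad x_i \in \Sigma ,
    \end{equation*}

where I_1[w] is the piecewise-linear interpolation of w on the triangulation Sigma, h is the step along the characteristic, and the boundary data are imposed on the boundary of Omega; the local residual of this equation is what drives the refinement indicator.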
In medical imaging, approaches that integrate high-level models (anatomical
or functional atlases) lead to a close interweaving of the registration and
segmentation problems for volumetric images.
This talk presents the work in progress on this topic at LSIIT, in brain
imaging, in collaboration with the Institut de Physique Biologique (CNRS,
Hôpitaux Universitaires de Strasbourg).
Various low-level approaches for the rigid or deformable registration of
multimodal volumetric images (MRI / SPECT), integrating statistical and/or
topological constraints, are first described. The use of these approaches,
in combination with deterministic or probabilistic anatomical atlases,
provides a robust answer to the image segmentation problem.
We finally present an application of the tools developed to the diagnosis
of epilepsy, within a protocol currently used in clinical routine at the
CHU de Strasbourg.