Affiliation | Date | Time | Room
NASA Ames Research, USA | 06 January | 14:30 | Room E006
University Federico II of Naples, Dip. Ingegneria Elettronica e Telecomunicazioni | 03 February | 10:30 | Room E003
PhD, Swedish University of Agricultural Sciences, Umeå, Sweden, Dept. of Forest Resource Management and Geomatics | 21 February | 10:30 | Room E003
CWI, Amsterdam | 10 March | 14:30 | Room E003
Dept. of Systems Design, University of Waterloo, Canada | 21 March | 10:30 | Room E003
Dept. of Computer Science, University of Warwick, Coventry, UK | 4 April | 10:30 | Room E003
Post-doc, Ariana project, University of Trento, Italy | 12 May | 10:30 | Fermat jaune (F322)
Statistical Image Processing Group, ISTI-CNR, Pisa, Italy | 16 May | 14:30 | Room E003
Communications and Remote Sensing Laboratory, Université Catholique de Louvain, Belgium | 2 June | 10:30 | Room E003
Israel Pollak Professor of Biophysics, Molecular Biology of the Cell, Weizmann Institute of Science, Israel | 6 June | 14:30 | Room E003
UNAM, Faculty of Engineering, Mexico | 11 June | 10:30 | Room E003
University of Geneva, Switzerland | 30 June | 10:30 | Room E006
ECE Departments, Rice University and University of Wisconsin-Madison | 03 July | 14:30 | Room E003
Rice University, Houston, USA | 15 July | 10:30 | Room E003
CEREMADE, Université Paris IX Dauphine | 21 July | 10:30 | Room E003
University of Illinois at Urbana-Champaign, USA | 12 September | 10:30 | Room E003
Invited Professor, Ariana project; Assistant Professor, University of Puerto Rico | 29 September | 10:30 | Room E003
McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal & Service Hospitalier Frédéric Joliot, Medical Research Dept. | 6 October | 10:30 | Coriolis
Computer Science Department, University of California, Los Angeles, USA | 20 October | 10:30 | Room E003
Director of LASTI, Université de Rennes | 17 November | 10:30 | Room E006
LAGA, Université Paris 13 | 15 December | 10:30 | Room E003
Department of Statistics, Trinity College Dublin | 19 December | 10:30 | Room E003
Other scheduled seminars
Dense reconstruction of 3D surfaces from 2D images is a particularly
ill-posed problem. Our team has chosen a Bayesian approach, which makes it
possible to introduce constraints on the object to be reconstructed, by
means of a statistical model of the surface, in order to stabilise the
solution.
We first present a possible surface model, which can be used to describe
asteroids as well as to model planetary surfaces. The support is a
triangular mesh of spherical topology, called a subdivided surface because
it is obtained by recursive subdivision of an initial polyhedron. On this
support we define a statistical model that describes both the geometry of
the object and its reflectance. By introducing a wavelet transform on the
subdivided mesh, we build a multiscale model that accounts for the fractal
behaviour of natural surfaces.
We then present a recently developed rendering method that generates
images from a 3D surface. It is markedly more accurate than existing
rendering algorithms, while taking occlusions and shadows into account.
For each pixel it also produces the derivatives of the intensity with
respect to all the parameters. Coupled with the multiscale model, this
technique makes it possible, in principle, to reconstruct the unknown
surface from several observations taken by different cameras. We will
present our contribution with respect to the work already carried out in
the team on 3D terrain reconstruction.
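As a hedged sketch of the Bayesian formulation described above (the exact likelihood and prior used by the team are not specified in this abstract), the surface parameters $\theta$ (geometry and reflectance) would be estimated from images $I_1,\dots,I_K$ by maximising the posterior:

\[ \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; \sum_{k=1}^{K} \log p(I_k \mid \theta) + \log p(\theta), \]

where $p(\theta)$ is the multiscale wavelet-domain prior on the subdivided surface and each $p(I_k \mid \theta)$ compares an observed image with the rendered image; the per-pixel derivatives produced by the renderer make gradient-based maximisation possible.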
We present a new image segmentation algorithm based on a
tree-structured binary MRF model. The image is recursively
segmented into smaller and smaller regions until a stopping
condition, local to each region, is met. Each elementary binary
segmentation is obtained as the solution of a MAP estimation problem,
with the region prior modeled as an MRF.
Since only binary fields are used, and thanks to the tree structure,
the algorithm is quite fast and allows one to address the cluster
validation problem in a seamless way.
In addition, all field parameters are estimated locally, allowing for
some spatial adaptivity.
To improve segmentation accuracy, a split-and-merge procedure is also
developed and a spatially adaptive MRF model is used.
Moreover, a recent implementation is presented that allows different,
independent binary fields to be defined on unconnected regions of the
same class.
This last solution offers an improved capacity to describe detail,
while requiring no additional computing time.
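As a minimal sketch of one elementary binary segmentation step (the actual potentials used in the talk are not given; an Ising-type prior is assumed here purely for illustration), the binary label field $x$ over the current region is the MAP estimate

\[ \hat{x} = \arg\max_{x \in \{0,1\}^S} \; \log p(y \mid x) + \beta \sum_{(s,t)} \mathbf{1}[x_s = x_t], \]

balancing fidelity to the region data $y$ against the spatial smoothness encouraged by the pairwise MRF term over neighbouring sites $(s,t)$.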
The seminar will consist of two parts. I will start by presenting a
method for extracting small tracks from remotely sensed imagery. The
method is Bayesian, and the MAP estimate is sought by the Gibbs sampler
coupled with simulated annealing. The second half of the seminar will be
devoted to the results of an evaluation of the Gibbs sampler's ability to
improve the quality of an initial classification (by QDA or ICM) of
noisy multispectral images.
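A minimal sketch of the sampling machinery mentioned above, assuming a simple Ising prior with a Gaussian likelihood (the actual track-extraction model and cooling schedule from the talk are not specified here):

    import numpy as np

    def gibbs_anneal(y, beta=1.5, sigma=0.5, n_sweeps=50, t0=4.0, t_min=0.1):
        # MAP labelling of a noisy image y (values near 0 or 1) by a Gibbs
        # sampler with simulated annealing: Ising prior, Gaussian likelihood.
        rng = np.random.default_rng(0)
        h, w = y.shape
        x = (y > 0.5).astype(int)                  # crude initial labelling
        temp = t0
        for _ in range(n_sweeps):
            for i in range(h):
                for j in range(w):
                    nbrs = [x[a, b] for a, b in ((i - 1, j), (i + 1, j),
                                                 (i, j - 1), (i, j + 1))
                            if 0 <= a < h and 0 <= b < w]
                    e = np.zeros(2)                # energy of each candidate label
                    for lab in (0, 1):
                        data = (y[i, j] - lab) ** 2 / (2 * sigma ** 2)
                        prior = beta * sum(lab != n for n in nbrs)
                        e[lab] = data + prior
                    p = np.exp(-e / temp)
                    x[i, j] = rng.choice(2, p=p / p.sum())
            temp = max(t_min, 0.95 * temp)         # geometric cooling schedule
        return x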
Discrete wavelet transforms are usually realised using the filterbank
framework. The polyphase matrix of such filterbanks can be factored into
lifting steps, yielding the lifting framework of wavelet transforms.
The lifting framework provides a useful and flexible tool for constructing
new wavelets from existing ones. In this talk, we present the design of
spatially adaptive wavelet transforms using the lifting framework. The
high-pass and low-pass filters of the resulting wavelet transform are
chosen in a spatially adaptive way by considering the statistics of the
underlying signal. Perfect reconstruction can be achieved without coding
any side information about the filter selection. The performance of such
transforms in lossless, lossy and scalable image and video coding is
presented.
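A minimal sketch of one level of the lifting framework in one dimension (fixed 5/3-style predict and update steps with periodic extension; in the adaptive scheme described above, the predictor would instead be selected from local signal statistics):

    import numpy as np

    def lifting_forward(x):
        # One lifting level: split into even/odd samples, predict the odd
        # samples from their even neighbours, then update the even samples
        # so the coarse signal preserves the local mean.
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2], x[1::2]               # assumes even-length input
        detail = odd - 0.5 * (even + np.roll(even, -1))       # predict step
        coarse = even + 0.25 * (detail + np.roll(detail, 1))  # update step
        return coarse, detail

    def lifting_inverse(coarse, detail):
        # Exact reconstruction: undo the update, then the prediction.
        even = coarse - 0.25 * (detail + np.roll(detail, 1))
        odd = detail + 0.5 * (even + np.roll(even, -1))
        out = np.empty(even.size + odd.size)
        out[0::2], out[1::2] = even, odd
        return out

Because each lifting step is inverted simply by reversing its sign, reconstruction stays perfect whatever predictor is chosen, which is why no side information about the filter selection needs to be coded.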
This talk concerns two problems of continuous-parameter annealing:
large-scale pixellated models, and small-state parameterized models
(such as Markov point processes).
The estimation of large-scale images from sparse and/or noisy data is
highly developed; however, although estimates are optimal under some
criterion, they do not represent a typical or representative sample of
the system being studied, which may be desired for purposes of
visualization, further analysis, Monte Carlo studies, etc. Instead,
what is required is a random sample from the posterior distribution, a
much more subtle and difficult problem than estimation and, crucially,
one which cannot be formulated as an optimization problem. I will
present a little-known property of multiscale statistical models and use
it to formulate a posterior sampler, exact in the case of Gauss-Markov
random fields and approximate for other distributions.
For parameterized models, widely scattered problems such as formant
tracking, boundary estimation and phase unwrapping can all be
approached as the annealed minimization of continuous parameters. In
virtually all such annealing problems the Metropolis sampler is used,
where the effectiveness of the annealing is highly dependent on a
user-formulated query function. I will propose an alternative: to use
the Gibbs sampler, which requires no query, but which requires the
difficult sampling of nonparametric, multimodal distributions.
Image segmentation is the art of describing image data, which tend to be
highly complex, in terms of simpler entities, such as regions of
homogeneous gray level, colour or texture. This amounts to attaching an
integer label to each pixel in an image, representing its class.
In the last decade or so, it has become widely accepted that the problem
can be formalised as the maximisation of an a posteriori probability
based on a stochastic image model, such as a Markov Random Field (MRF).
While this puts segmentation on a firm footing, it raises a significant
computational issue: how can one possibly maximise the posterior
probability over the huge number of possible image segmentations, given
a set of data?
In recent years, two methods have found widespread use: stochastic
simulation samples from the posterior distribution of the image model;
multiresolution methods exploit the self-similarity of image data to
solve
a sequence of successively finer approximations to the problem. It has
occurred to a number of authors that these two approaches might be
usefully
combined in a multiresolution MRF. The work we have done using these
models
has thrown up interesting results in both the theory and practice
of image segmentation.
In the talk, I will examine the segmentation problem and present some of
our results.
One of the major problems in geographical information systems (GISs)
consists in defining strategies and procedures for a regular updating of
land-cover maps stored in the system databases. This crucial task can be
carried out by using remote-sensing images regularly acquired by
spaceborne sensors over the areas under investigation. Such images can
be analysed with automatic classification techniques in order to derive
updated land-cover maps. However, at the operating level, such
techniques are usually based on supervised classification algorithms.
Consequently, they require the availability of ground truth information
for the training of the classifiers. Unfortunately, in many real cases,
it is not possible to rely on training data for all the images necessary
to ensure an updating of land-cover maps that is as frequent as required
by applications.
In this seminar, advanced classification techniques for a regular
updating of land-cover maps are proposed that are based on the use of
multitemporal remote sensing images. Such techniques are developed
within the framework of partially supervised Bayesian approaches and are
able to address the updating problem under the realistic but critical
constraint that, for the image to be classified (i.e., the most recent
image of the considered multitemporal dataset), no ground truth information is
available. Two different approaches are considered. The first approach
is based on an independent analysis of the information contained in each
single image of the considered multitemporal series; the second approach
exploits the temporal correlation between pairs of images acquired at
different times in the classification process. In the context of these
approaches, both parametric and non-parametric classifiers are
considered. In addition, in order to design a reliable and accurate
classification system, multiple classifier architectures composed of
partially supervised algorithms are investigated.
Experimental results obtained on a real multitemporal and multisource
dataset are presented that confirm the effectiveness of the proposed
system.
Since the discovery of the cosmic microwave background (CMB) radiation
in 1965 by Penzias and Wilson, a number of missions have been planned to
measure the CMB, including the ESA satellite PLANCK. CMB radiation is of
interest from a number of aspects: 1) it is the most important evidence
for the hot big-bang model; 2) it provides us with a picture of the
universe in its very early moments; 3) its anisotropies provide the seed
map of today's universe; and 4) it provides us with information about the
fundamental constants of the universe. Unfortunately, measuring the CMB
is not an easy task, since radiation measurements of the sky contain
contributions from various sources in our galaxy, such as synchrotron,
galactic dust and free-free emission, as well as from extragalactic radio
sources. In this talk, I will present our efforts at ISTI-CNR to separate
the various radiation sources in astronomy images. We adopt a source
separation rather than a noise elimination approach, since we value the
information in the other sources as well. I will start with our work
using independent component analysis (ICA) as a fully blind technique,
and then move on to more informed techniques such as independent factor
analysis (IFA), which assumes a generic source model, and a full Bayesian
approach using MCMC. The talk will end with a description of the future
activities we plan.
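A minimal sketch of the fully blind step, assuming a simple linear mixing model and using scikit-learn's FastICA (the channel count, mixing matrix and component maps below are illustrative placeholders, not actual survey data):

    import numpy as np
    from sklearn.decomposition import FastICA

    # Each observed sky map is modelled as a linear mixture of component
    # maps (CMB, dust, synchrotron, ...): X = A S.
    rng = np.random.default_rng(0)
    n_pixels = 10000
    cmb = rng.standard_normal(n_pixels)            # stand-ins for component maps
    dust = rng.laplace(size=n_pixels)
    sync = rng.laplace(size=n_pixels)
    S = np.vstack([cmb, dust, sync])               # sources, shape (3, n_pixels)
    A = rng.uniform(0.5, 2.0, size=(4, 3))         # 4 hypothetical frequency channels
    X = A @ S                                      # observed channel maps

    ica = FastICA(n_components=3, random_state=0)
    S_est = ica.fit_transform(X.T).T               # recovered components, up to
                                                   # permutation and scaling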
Image watermarking consists in embedding, in a secret and indelible way,
indexing or copy-protection information within the signal. In a number of
situations, the signal to be protected undergoes significant geometric
transformations (for example, a version of the image printed on paper, or
an image projected onto a screen in a cinema). We will give a state of
the art of the methods for recovering the watermark information despite
these deformations. We will then present original new techniques based on
linking the watermark information to essential characteristics of the
signal that are invariant under geometric transformations.
The light microscope is unique in its ability to image cells and display
morphology and molecular localization at sub-cellular resolutions. As
such, it has served biological research for centuries, relying on human
interpretation of microscopic scenes. Digital imaging added the
quantitative capability, mainly based on fluorescent immunostaining,
which made it possible to study molecular localization and the
displacements induced by various experimental manipulations and drugs.
Today, modified cDNA makes it possible to express inherently fluorescent
tagged proteins in cells and to follow them as the cells respond to
stimuli. With cDNA libraries, the design of large-scale cell-based
experiments opens the way to identifying the protein networks that
underlie complex cellular mechanisms. But such experiments depend on
automated acquisition and analysis of microscope images from many samples.
Various analysis methods applied to cell-based assays will be described.
Specialized aspects of biological quantitative imaging will be
discussed, and examples of segmentation, quantification and comparison
of multicolor and time-lapse images will be shown.
In this seminar we will study the suitability of the Pascal matrix for
the design of digital filters. This matrix allows us to transform the
transfer function H(s) into its discrete version H(z) in a simple way.
Likewise, the inverse Pascal matrix is used to transform H(z) back into
H(s) without having to compute the determinant of the system.
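A minimal sketch of the idea under the bilinear substitution $s = \frac{2}{T}\frac{z-1}{z+1}$ (the exact matrix convention used in the seminar may differ): the z-domain coefficients of a degree-$n$ transfer function are obtained by multiplying its ascending s-domain coefficient vector by a Pascal-type matrix.

    import numpy as np

    def pascal_bilinear_matrix(n, T=1.0):
        # P such that, for ascending s-domain coefficients c = [c0, ..., cn],
        # P @ c gives the ascending z-domain coefficients of
        # sum_k c_k (2/T)^k (z-1)^k (z+1)^(n-k),
        # i.e. the bilinear transform with (z+1)^n cleared.
        P = np.zeros((n + 1, n + 1))
        for k in range(n + 1):
            poly = np.array([1.0])                     # highest power of z first
            for _ in range(k):
                poly = np.convolve(poly, [1.0, -1.0])  # multiply by (z - 1)
            for _ in range(n - k):
                poly = np.convolve(poly, [1.0, 1.0])   # multiply by (z + 1)
            P[:, k] = (2.0 / T) ** k * poly[::-1]      # store ascending powers of z
        return P

    # Example: H(s) = 1 / (s^2 + s + 1), sampling period T = 1
    n, T = 2, 1.0
    P = pascal_bilinear_matrix(n, T)
    num_z = P @ np.array([1.0, 0.0, 0.0])   # z-domain numerator coefficients
    den_z = P @ np.array([1.0, 1.0, 1.0])   # z-domain denominator coefficients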
The presentation is dedicated to emerging aspects of stochastic image
modeling in critically sampled and overcomplete transform domains for
image restoration and denoising, and consists of three main parts. In the
first, introductory part, we briefly present the stochastic image
processing (SIP) group and its current research activity. The second part
of the presentation reviews robust image restoration for radar and
radiometry imaging systems.
First, we consider the general problem formulation and the corresponding
solution based on a penalized maximum likelihood estimate. The
relationship to robust M-estimators and maximum a posteriori probability
methods will be indicated for different stochastic image priors and noise
distributions. We also consider non-coherent imaging systems based on
sparse antenna arrays and demonstrate some practical results. The third
part of the presentation is dedicated to the important problem of
stochastic image modeling in transform domains. We consider
state-of-the-art stochastic image models and different classes of
multiresolution transforms. We will focus on an important class of
non-stationary image models in different applications and will analyze
their main shortcomings using the estimation-quantization (EQ) model as
an example. Finally, we will introduce a novel edge process (EP) model
for critically sampled and overcomplete domains and demonstrate its main
advantages. In conclusion, some upper bounds on the performance of the EP
model in image denoising will be demonstrated.
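As a hedged sketch of the general penalized maximum likelihood formulation mentioned above (the specific priors and penalties used in the talk are not given here), restoring an image $x$ from data $y$ amounts to solving

\[ \hat{x} = \arg\max_{x} \; \log p(y \mid x) - \lambda\, \Omega(x), \]

where $\log p(y \mid x)$ is the data log-likelihood of the imaging system and $\Omega$ a regularising penalty; choosing $\Omega$ as the negative log of a stochastic image prior makes this coincide with MAP estimation, while robust data terms link it to M-estimators.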
Sensor networks have emerged as a fundamentally new tool for
monitoring spatial phenomena. This talk will describe a theory and
methodology for estimating inhomogeneous fields using wireless sensor
networks. Inhomogeneous fields are composed of two or more homogeneous
regions (e.g., constant-valued, smoothly-varying, stationary Gaussian,
etc.) separated by boundaries. The boundaries, which correspond to
abrupt spatial changes in the field, are non-parametric 1-d curves or
2-d surfaces (in a 2-d or 3-d field, respectively). The sensors make
noisy measurements of the field, and the goal is to obtain an accurate
estimate of the field at some desired destination (typically remote
from the sensor network). The presence of boundaries makes this
problem especially challenging. There are two key questions: 1. Given
n sensors, how accurately can the field be estimated? 2. How much
energy will be consumed by the communications required to obtain an
accurate estimate at the destination? Theoretical upper and lower
bounds on the estimation error and energy consumption will be
discussed. A practical strategy for estimation and communication will
be presented. The strategy, based on a hierarchical data-handling and
communication architecture, provides a near-optimal balance of
accuracy and energy consumption.
The nonparametric multiscale algorithms presented here are powerful new
tools for photon-limited signal and image denoising and Poisson inverse
problems. Unlike traditional wavelet-based multiscale methods, these
algorithms are both well suited to processing Poisson data and capable
of
preserving image edges. The recursive partitioning scheme underlying
these
methods is based on multiscale likelihood factorizations of the Poisson
data model. These partitions allow the construction of multiscale signal
decompositions based on polynomials in one dimension and multiscale
image
decompositions based on platelets in two dimensions. We originally
developed platelets for medical image reconstruction problems, and more
recently we have successfully applied them to problems in astronomical
imaging. Platelets are localized functions at various positions, scales
and orientations that can produce highly accurate, piecewise linear
approximations to images consisting of smooth regions separated by
smooth
boundaries. Polynomial- and platelet-based maximum penalized likelihood
methods for signal and image analysis are both tractable and
computationally efficient. Simulations establish the practical
effectiveness of these methods in applications such as Gamma Ray Burst
intensity estimation and medical and astronomical image reconstruction;
statistical risk analysis establishes the theoretical near-optimality of
these methods.
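A hedged sketch of the multiscale likelihood factorization underlying these methods, in one dimension for a dyadic split of counts: if two adjacent bins have independent Poisson counts, then

\[ x_1 \sim \mathrm{Poisson}(\lambda_1),\; x_2 \sim \mathrm{Poisson}(\lambda_2) \;\Rightarrow\; x_1 + x_2 \sim \mathrm{Poisson}(\lambda_1 + \lambda_2), \quad x_1 \mid (x_1 + x_2) \sim \mathrm{Binomial}\!\Big(x_1 + x_2,\ \tfrac{\lambda_1}{\lambda_1 + \lambda_2}\Big), \]

so the full Poisson likelihood factorizes, scale by scale, into a single coarse Poisson term and a tree of binomial splitting terms whose parameters act as multiscale coefficients; penalizing these with polynomial or platelet models yields estimators of the kind described above.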
This seminar presents a processing chain for extracting buildings
from aerial images. We first focus on the detection of rectangular
buildings, which are the most common type of construction. We then
extend our method to more complex buildings, which can be
decomposed into several rectangles. The rectangles obtained make it
possible to improve the 3D reconstruction of the Digital Elevation
Model (DEM).
Segmentation of the DEM and of the ortho-image allows the extraction
of above-ground structures. We compute a resemblance criterion
between each region and its best associated rectangle.
For complex buildings, we propose a region-splitting algorithm;
the splitting iteratively optimises our resemblance criterion.
The approach is illustrated on synthetic and real data.
The estimated rectangular structures are not correctly localised or
dimensioned. We present a deformable parametric model that improves
these characteristics. The final rectangle estimates, together with
their altitude extracted from the DEM, are used to obtain an accurate
3D scene.
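A minimal sketch of a resemblance criterion of the kind mentioned above, assuming an intersection-over-union between a binary region mask and its minimum-area enclosing rectangle (the exact criterion used in the processing chain is not specified here), using OpenCV:

    import cv2
    import numpy as np

    def rectangle_resemblance(mask):
        # Resemblance between a non-empty binary region mask and the
        # minimum-area rotated rectangle that encloses it (IoU in [0, 1]).
        ys, xs = np.nonzero(mask)
        points = np.column_stack([xs, ys]).astype(np.float32)
        rect = cv2.minAreaRect(points)              # ((cx, cy), (w, h), angle)
        box = cv2.boxPoints(rect).astype(np.int32)  # 4 corner points
        rect_mask = np.zeros_like(mask, dtype=np.uint8)
        cv2.fillPoly(rect_mask, [box], 1)
        inter = np.logical_and(mask, rect_mask).sum()
        union = np.logical_or(mask, rect_mask).sum()
        return inter / union if union else 0.0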