Electrical and Computer Engineering Dept., North Carolina State University (USA)
17 January | 10:30 | Room 003

Signal Processing with Alpha-Stable Distributions
ERCIM Fellow, Projet Ariana
31 January | 11:00 | Room 003

New York University (USA)
10 February | 10:30 | Room 006

(INT, ENIC, Télécom Paris, Télécom Bretagne)
21 February | 10:30 | Room 006

Imaging and Computer Vision Center, Electrical and Computer Engineering Department, Drexel University, Philadelphia
23 March | 10:30 | Room 006

Eurecom, Sophia Antipolis, France
27 March | 10:30 | Room 006

CMLA, Cachan, France
10 April | 10:30 | Room 003

CNR, Istituto per le Applicazioni del Calcolo, Rome, Italy
25 May | 14:00 | Room 003

Visiting Researcher/Lecturer, Dept. of Statistics, Trinity College, Dublin, Ireland
5 June | 10:30 | Room 003

University of Michigan, Ann Arbor, USA
23 June | 10:30 | Room 006

MTA-SZTAKI, Hungarian Academy of Sciences
11 July | 14:00 | Room 006

Signal Processing Laboratory, University of Cambridge, UK
4 September | 14:00 | Room 006

University of Venice, Italy
23 October | 10:30 | Room 006

LAM, Université de Reims Champagne-Ardenne
20 November | 10:30 | Room 006

Météo-France, CNRS, Université des Antilles-Guyane
11 December | 14:00 | Room 006
A discrete symmetric random walk is shown to be equivalent to a heat
equation evolution, and an extension to nonlinear evolutions, including the
Perona-Malik equation, is shown to be of central importance for image
analysis. Upon unraveling the limitations as well as the advantages of
such an equation, we are able to propose a new approach which is
demonstrated to outperform existing approaches, and to resolve the
longstanding problem of when to stop the evolution. Substantiating and
illustrating examples of image enhancement and segmentation are
provided.
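As background, a minimal Python sketch of the classical Perona-Malik scheme (not the new approach of the talk) may help fix ideas; the conductivity constant kappa, the time step dt, and the iteration count are illustrative assumptions, and choosing n_iter is precisely the stopping problem mentioned above.

    # Classical Perona-Malik anisotropic diffusion: a nonlinear heat
    # equation whose conductivity g falls off with gradient magnitude,
    # so smoothing is suppressed across edges. Periodic boundaries are
    # used for brevity (np.roll).
    import numpy as np

    def perona_malik(img, n_iter=50, kappa=20.0, dt=0.2):
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)  # conductivity
        for _ in range(n_iter):
            # forward differences to the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # explicit update; dt <= 0.25 keeps the scheme stable
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u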
The problem of segmenting regions of interest from an image (or multiple
images in the cases of stereo and motion) is an extremely important one in
computer vision, and has been much studied. A well-known difficulty is how
region information such as texture, colour and homogeneity can be combined
with boundary information such as intensity gradients. In the multiple image
case, there is also the question of whether computation of a dense disparity
or flow should precede boundary/region identification and correspondence or
follow it.
In this talk I will describe a new form of energy functional defined on
closed curves in a manifold, whose minimising argument defines a segmented
boundary/region. The energy functional can be used to segment regions of
interest from both single images and stereo pairs and motion sequences. In
the latter cases, a segmented region is found in all images along with the
boundary correspondences by using the product space of the image planes.
There is no need to compute a dense disparity or flow. The form of the
energy functional means that whenever we are dealing with information from a
single image (which includes the individual elements of a stereo pair or
image sequence), arbitrary region and boundary information can be unified by
transforming region information into equivalent boundary information; thus
we can combine the best features of region- and boundary-based approaches.
The energy is also extremely general, allowing the incorporation of a large
variety of image information. All choices for both single and multiple
images can be globally optimised using the same, polynomial-time algorithm,
by casting the problem as a minimum ratio cycle problem in the discretised
plane. There is also a second polynomial-time algorithm, applicable to a
smaller class of energies, that is extremely parallelizable. The energy is
scale-invariant in many cases, being an energy density on the boundary, thus
removing the uncontrolled bias towards small or large regions present in
many models.
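As background on the optimisation step, a hedged sketch of the minimum ratio cycle idea follows, using Lawler's parameter search with Bellman-Ford negative cycle detection; the construction of the graph and of the cost and weight terms from the image is the talk's contribution and is not reproduced here.

    # A cycle minimising sum(cost)/sum(weight) has ratio < lam iff the
    # graph with edge lengths cost - lam * weight contains a negative
    # cycle. Edges are tuples (u, v, cost, weight) with weight > 0.
    def has_negative_cycle(n, edges, lam):
        dist = [0.0] * n  # zero init emulates a virtual source to all nodes
        for _ in range(n):
            changed = False
            for u, v, c, w in edges:
                d = dist[u] + c - lam * w
                if d < dist[v] - 1e-12:
                    dist[v] = d
                    changed = True
            if not changed:
                return False
        return True  # still relaxing after n passes => negative cycle

    def min_ratio(n, edges, lo=0.0, hi=1.0, tol=1e-6):
        # binary search on the ratio; [lo, hi] must bracket the optimum
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if has_negative_cycle(n, edges, mid):
                hi = mid  # a cycle with ratio < mid exists
            else:
                lo = mid
        return lo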
Statistical signal and image processing have long been dominated by the
Gaussian assumption. This is not surprising: firstly, life is easy with the
Gaussian distribution, since in most cases it leads to linear equations;
secondly, for those after rigour, it is justified by the central limit
theorem. However, it has escaped the attention of many in the field that the
Gaussian distribution is not the only distribution satisfying the central
limit theorem.
In this talk, we will challenge the "normality" of the Gaussian
distribution by demonstrating its shortcomings in modelling impulsive data,
and by introducing an alternative family of distributions, namely the
alpha-stable family.
For this model to be of any interest to us, one should also
demonstrate the feasibility of developing computationally attractive
new optimal signal processing algorithms for alpha-stable distributions.
To this end, we present simple linear and nonlinear modelling techniques
and demonstrate an application in audio restoration.
We also present the first numerically stable
analytical representation for the alpha-stable pdf and demonstrate
how it can be used to design optimal receivers for radar applications.
Finally, we demonstrate preliminary results in texture modelling
and mention future directions in image processing and communications,
which are plentiful, interesting, and come with a promise of good fun.
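As a hedged illustration of the impulsiveness such models capture, the standard Chambers-Mallows-Stuck construction below draws symmetric alpha-stable samples (beta = 0, alpha != 1); this is textbook material, not one of the speaker's algorithms.

    # Symmetric alpha-stable sampling via Chambers-Mallows-Stuck;
    # compare the tails with Gaussian draws of the same scale.
    import numpy as np

    def sym_alpha_stable(alpha, size, rng=np.random.default_rng(0)):
        u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
        w = rng.exponential(1.0, size)                # unit exponential
        return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
                * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

    x = sym_alpha_stable(1.5, 10_000)
    print(np.percentile(np.abs(x), 99.9))  # heavy tail vs. a Gaussian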
Airborne or spaceborne synthetic aperture radar (SAR) is a powerful
observation tool, notably allowing high-resolution images of the Earth's
surface to be acquired in all weather, by day and by night. The speckle
phenomenon, which appears as very strong granulation in the detected image,
nevertheless makes the automatic interpretation of SAR data extremely
difficult. This presentation deals with the segmentation of SAR images, in
particular through the edge-based approach. Segmentation consists in
dividing the image into regions. It is a first step in image analysis,
facilitating the estimation of the parameters characterising the regions.
The edge detectors previously proposed for SAR imagery work on detected
data and assume that the speckle is uncorrelated. In practice, the speckle
is spatially correlated, which somewhat degrades the performance of these
operators. Starting from the maximum likelihood (ML) estimator of the radar
reflectivity, we develop the optimal generalised likelihood ratio operator,
which exploits the intrinsically complex nature of single-look SAR data in
order to avoid the performance loss due to speckle correlation. The spatial
aspects of edge detection, such as the size and shape of the analysis
window, the number of directions to examine, and the presence of multiple
edges, are also addressed.
We propose to use robust methods based on the watershed algorithm, in
particular the thresholding of basin dynamics, to extract closed,
skeletonised edges defining a segmentation of the image. The number of
false edges can be reduced in post-processing by merging adjacent regions
with similar properties. The ML estimator of the position of an edge in a
complex SAR image is established. Two edge relocation methods are proposed,
one based on Markov random fields and the Potts model, the other on active
contours.
The contribution of segmentation to subsequent processing is illustrated
for adaptive speckle filtering and for supervised contextual
classification.
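As background, a minimal sketch of the classical ratio-of-averages edge detector for SAR images follows; it is a standard baseline for the likelihood ratio operators discussed above, and the window geometry (two half-windows split at the centre column) is an illustrative assumption.

    # Ratio edge detector for SAR intensity images: the ratio of mean
    # intensities in two half-windows is near 1 in homogeneous speckle
    # and departs from 1 at radiometric edges.
    import numpy as np

    def ratio_edge_strength(img, half=3):
        h, w = img.shape
        out = np.ones((h, w))
        for i in range(half, h - half):
            for j in range(half, w - half):
                left = img[i - half:i + half + 1, j - half:j].mean()
                right = img[i - half:i + half + 1, j + 1:j + half + 1].mean()
                r = left / max(right, 1e-12)
                out[i, j] = min(r, 1.0 / r)
        return 1.0 - out  # high values indicate likely edges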
Aligning experimental data into a standard coordinate system (SCS) is of
great interest to neuroimaging science, and a necessary tool in
support of genomics efforts for gene expression. Multimodality imaging,
noisy data, and transformations are frequent difficulties. Geometry-based
alignments carry across modalities, yet must be carefully structured to
handle noisy or occluded data.
In this work, we introduce a non-iterative
geometry-based method to align 3D brain surfaces into a standard
coordinate system (SCS), based on a novel set of surface landmarks
(e.g., planar umbilical points, zero-torsion points, etc.), which are
intrinsic and are computed from the differential geometry of the surface.
This is in contrast to existing methods that depend on anatomical landmarks
requiring expert intervention to locate, a very hard task.
The landmarks are local, and are preserved under affine transformations. To
reduce the sensitivity of the landmarks to noise, we use a B-spline surface
representation that smooths the surface prior to the computation
of the landmarks. The alignment is driven by establishing correspondences
between the landmarks after a conformal sorting based on derived
absolute invariants (volumes of the parallelepipeds spanned by
quadruplets of landmark points). The method is tested for intra- and
inter-brain alignments while accommodating cubic nonlinear transformations.
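To make the invariants concrete, a small sketch follows: the signed volume of the parallelepiped spanned by a point quadruplet is affine-relative (scaled by the determinant of the map), so ratios of such volumes are absolute affine invariants; the landmark detection itself is not reproduced here.

    import numpy as np

    def quad_volume(p0, p1, p2, p3):
        """Signed volume of the parallelepiped spanned by p1-p0, p2-p0, p3-p0."""
        m = np.stack([p1 - p0, p2 - p0, p3 - p0])
        return np.linalg.det(m)

    def volume_ratio(quad_a, quad_b):
        # the same affine map scales both volumes by its determinant,
        # so this ratio is an absolute affine invariant
        return quad_volume(*quad_a) / quad_volume(*quad_b)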
The overall aim of this work is to provide a hierarchical framework of
methodologies for recognising objects represented as line patterns from
large structural libraries.
One of the novel aspects of our work is a new shape representation for
rapidly indexing and recognising line-patterns from large databases. The
basic idea is to exploit both geometric attributes and structural
information to compute a two-dimensional relational pairwise geometric
histogram. Shapes are indexed by searching for the line-pattern that
maximises the cross-correlation of the normalised histogram bin-contents.
This technique provides the first level of the hierarchy, which is used to
prune the database of many unwanted candidates.
The intermediate level of our hierarchical framework is based on a novel
similarity measure for object recognition from large libraries of
line-patterns. This operates at a more local image level than the histogram
based indexing layer. The measure is derived from a Bayesian consistency
criterion and resembles the Hausdorff distance. This consistency criterion
has been developed for locating correspondence matches between attributed
relational graphs using iterative relaxation operations. Our aim here is to
simplify the consistency measure so that it may be used in a non-iterative
manner without the need to compute explicit correspondence matches. This
considerably reduces the computational overheads and renders the consistency
measure suitable for large-scale object recognition.
A Bayesian graph matching algorithm for data-mining from large structural
databases operates as the final level of the hierarchy. The matching algorithm
uses both edge-consistency and node attribute similarity to determine the a
posteriori probability of a query graph for each of the candidate matches in
the reduced database generated by the lower levels of the hierarchy. The
node feature-vectors are constructed by computing normalised histograms of
pairwise geometric attributes. Attribute similarity is assessed by computing
the Bhattacharyya distance between the histograms. Recognition is realised
by selecting the candidate with the largest a posteriori probability.
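To make the attribute-similarity step concrete, a minimal sketch of the Bhattacharyya distance between two normalised histograms follows; the construction of the histograms from line-patterns is assumed to have happened upstream.

    import numpy as np

    def bhattacharyya_distance(p, q):
        """p, q: normalised histograms (non-negative, summing to 1)."""
        bc = np.sum(np.sqrt(p * q))      # Bhattacharyya coefficient
        return -np.log(max(bc, 1e-300))  # distance; 0 iff p == q

    p = np.array([0.2, 0.5, 0.3])
    q = np.array([0.1, 0.6, 0.3])
    print(bhattacharyya_distance(p, q))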
For each of the above methodologies, a thorough sensitivity study is
undertaken for a library of over 2500 line-patterns containing radar aerial
images and a number of other image types. The analysis reveals the
robustness of each method on its own as well as within the hierarchical
framework. This suggests that there is a degree of complementarity between
the approaches.
We describe a new approach to calibrating the parameters of the energy
functionals used in image analysis. Rather than relying on a prior
statistical model, as in a Bayesian framework, the principle is to generate
an arbitrary quantity of bad examples from a training base, and to adjust
the parameters so that the good examples form local minima of the energy.
We will also show how this approach can be extended to learn a functional
form of the parameters with respect to the observed data, which can be
applied, for example, in an image restoration or segmentation framework.
Reference: Calibrating parameters of cost functionals (to appear, Proc.
ECCV 2000).
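A hedged sketch of the calibration principle follows: draw bad perturbations of each training example and nudge the parameters whenever a bad example attains lower energy than the good one. The linear-in-parameters energy and the Gaussian perturbation scheme are illustrative assumptions, not the authors' algorithm.

    # Perceptron-style calibration of theta for an energy
    # E(x) = theta . features(x): good examples should have lower
    # energy than generated bad examples.
    import numpy as np

    def calibrate(features, good_xs, theta, rng, n_bad=20, lr=0.1, n_epochs=50):
        for _ in range(n_epochs):
            for x in good_xs:
                f_good = features(x)
                for _ in range(n_bad):
                    x_bad = x + rng.normal(0, 0.1, size=x.shape)  # bad example
                    f_bad = features(x_bad)
                    # require E(good) < E(bad); if violated, move theta
                    if theta @ f_good >= theta @ f_bad:
                        theta += lr * (f_bad - f_good)
        return theta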
We consider the weak plate functional, proposed
by Blake and Zisserman for visual reconstruction,
which depends on free discontinuities, free gradient
discontinuities and second order derivatives.
It is shown how this functional can be approximated
by elliptic functionals defined on Sobolev spaces.
The approximation takes place in a variational sense,
the De Giorgi Gamma-convergence, and extends to this
second order model an approximation theorem of the Mumford-Shah
functional obtained by Ambrosio and Tortorelli.
To illustrate the Gamma-convergent approximation, some numerical examples
on simple synthetic images are presented. This is joint work with Luigi
Ambrosio and Loris Faina.
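For orientation, hedged reminders of the functionals involved are written out below in our own notation, following Blake and Zisserman and Ambrosio and Tortorelli; the exact forms used in the talk may differ.

    % Weak plate (Blake-Zisserman) functional: free discontinuities K_0,
    % free gradient discontinuities K_1, second-order smoothness elsewhere.
    E(u, K_0, K_1) = \int_{\Omega \setminus (K_0 \cup K_1)} |\nabla^2 u|^2 \, dx
      + \alpha \, \mathcal{H}^1(K_0)
      + \beta \, \mathcal{H}^1(K_1 \setminus K_0)
      + \mu \int_{\Omega} (u - g)^2 \, dx

    % Ambrosio-Tortorelli-style elliptic approximation of the first-order
    % (Mumford-Shah) case, which the talk extends to second order: an edge
    % indicator v \in [0,1] replaces the discontinuity set K.
    AT_\varepsilon(u, v) = \int_\Omega v^2 |\nabla u|^2 \, dx
      + \alpha \int_\Omega \Big( \varepsilon |\nabla v|^2
      + \frac{(1 - v)^2}{4\varepsilon} \Big) dx
      + \mu \int_\Omega (u - g)^2 \, dx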
Markov random fields are used extensively in image segmentation. In
this talk I describe a class of models, the double Markov random
field, for images composed of several textures, and how to use the
model class for image segmentation. I show that many approaches to
Bayesian image segmentation are special cases of this model. These models
are then compared in a simulation study.
If time permits, I will also discuss how to extend these approaches to
the case where the number of texture classes is not known.
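As a toy illustration of the simplest special case, the sketch below segments an image with a Potts label prior and conditionally independent Gaussian pixel likelihoods per class, optimised by ICM; in the double Markov random field each texture is itself an MRF, which this sketch deliberately omits.

    import numpy as np

    def icm_segment(img, means, sigma=1.0, beta=1.0, n_iter=10):
        k = len(means)
        labels = np.abs(img[..., None] - np.array(means)).argmin(-1)
        h, w = img.shape
        for _ in range(n_iter):
            for i in range(h):
                for j in range(w):
                    best, best_e = labels[i, j], np.inf
                    for c in range(k):
                        # Gaussian data term for class c
                        e = (img[i, j] - means[c]) ** 2 / (2 * sigma ** 2)
                        # Potts prior: penalise disagreeing neighbours
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < h and 0 <= nj < w:
                                e += beta * (labels[ni, nj] != c)
                        if e < best_e:
                            best, best_e = c, e
                    labels[i, j] = best
        return labels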
One of the most challenging problems in automatic target recognition (ATR)
for remote sensing is reliable detection of poorly illuminated objects buried
in high clutter backgrounds. When the clutter statistics are unknown or
highly variable, the false alarm rate of classical linear or quadratic
detection algorithms, e.g. the adaptive matched filter/detector, cannot be
controlled and target detection becomes unreliable. In this talk we will
present methods for improving detection performance using the generalized
likelihood ratio (GLR) test and the maximal invariant (MI) test for cases
where clutter uncertainty can be described by an orbit induced by group
actions on parameter space. Our focus application will be the difficult
"deep hide" problem where the target straddles a boundary between two unknown
clutter types.
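As a hedged toy example of the invariance idea, consider detecting a known signature s in Gaussian clutter of unknown power: maximising the likelihood over the unknown amplitude and variance yields a statistic depending on the data only through the angle between x and s, so its false alarm rate is invariant to the clutter scale. The deep-hide setting of the talk is far richer than this.

    import numpy as np

    def glr_statistic(x, s):
        """Scale-invariant matched-filter statistic in [0, 1]."""
        return (s @ x) ** 2 / ((s @ s) * (x @ x))

    rng = np.random.default_rng(0)
    s = np.sin(np.linspace(0, 4 * np.pi, 64))  # known signature
    clutter = rng.normal(0, 5.0, 64)           # clutter of unknown power
    print(glr_statistic(clutter, s), glr_statistic(clutter + s, s))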
This lecture will address the problem of how to interpret image structure
when it is evaluated through massively parallel computation, considering a
finite neighborhood in each iteration. Applications may include MRF
segmentation, motion tracking, image compression, and visualization by
painterly rendering. Some solutions will be demonstrated for these
applications.
These processes are controlled in parallel and are themselves organized by
the structure which they are evolving through an iterative process. Several
results show that most image processing tasks and a broad class of image
analysis problems can be solved in parallel structures by self-organizing
methods. Here a parallel structure means that different processors run on
the same task at the same time. It is found that "parallelism" and
"self-organization" are usually coupled: if a process is implemented in a
parallel structure, it can be described using some type of self-organization
in the evolution of the solution.
In this talk, an introduction to numerical Bayesian methods will be
given and new methods for sequential applications will be introduced.
Some results for audio and image restoration will be presented.
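The abstract does not name specific methods; as a hedged illustration of one common sequential numerical Bayesian method, here is a minimal bootstrap particle filter for a scalar state-space model, with illustrative model choices throughout.

    import numpy as np

    def bootstrap_filter(ys, n_particles=500, rng=np.random.default_rng(0)):
        particles = rng.normal(0, 1, n_particles)  # prior draw
        estimates = []
        for y in ys:
            # propagate through the (assumed) AR(1) dynamics
            particles = 0.9 * particles + rng.normal(0, 0.5, n_particles)
            # weight by the (assumed Gaussian) likelihood of y
            w = np.exp(-0.5 * (y - particles) ** 2)
            w /= w.sum()
            estimates.append(w @ particles)  # posterior mean
            # resample to avoid weight degeneracy
            particles = rng.choice(particles, n_particles, p=w)
        return np.array(estimates)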
It is well known that the problem of matching two relational structures
can be posed as an equivalent problem of finding a maximal clique in a
(derived) association graph. However, it is not clear how to apply this
approach to computer vision problems where the graphs are hierarchically
organized, i.e., are trees, since maximal cliques are not constrained to
preserve the partial order. Here we provide a solution to the problem
of matching two attributed trees by constructing a (weighted) association
graph using the graph-theoretic concept of connectivity. We prove that
in the new formulation there is a one-to-one correspondence between maximal
weight cliques and maximal similarity subtree isomorphisms. This allows
us to cast the tree matching problem as an indefinite quadratic program
using a recent extension of the so-called Motzkin-Straus theorem. We then use
"replicator" equations, a class of dynamical systems developed in
evolutionary game theory, to solve it. Such continuous solutions to
discrete problems are attractive because they can motivate analog and
biological implementations. We illustrate the power of the approach by
matching articulated and deformed shapes described by shock trees. An
extension of this framework to deal with many-to-one matchings will also
be presented.
[joint work with K. Siddiqi (McGill) and S. W. Zucker (Yale)]
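To make the final step concrete, a small sketch of discrete-time replicator dynamics follows: for a weighted association graph with non-negative adjacency matrix A, the update drives x toward a local maximiser of x^T A x on the simplex, whose support, by the Motzkin-Straus connection, corresponds to a maximal weight clique; building A from the two attributed trees is the talk's contribution and is omitted here.

    import numpy as np

    def replicator(A, n_iter=1000, tol=1e-10):
        n = A.shape[0]
        x = np.full(n, 1.0 / n)        # start at the simplex barycentre
        for _ in range(n_iter):
            Ax = A @ x
            x_new = x * Ax / (x @ Ax)  # replicator update
            if np.abs(x_new - x).sum() < tol:
                break
            x = x_new
        return x  # its support approximates a maximal clique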
Blind source separation is an essential problem in signal processing.
Several physical sources emit signals simultaneously, and these are
received by sensors. Blind source separation techniques aim to recover the
signals emitted by each of the sources. They apply in situations where very
little is known about the mixing process and about the sources. Their vast
field of application extends from engineering (speech and radar signal
processing) to medicine (electroencephalography, electrocardiography, ...),
and why not remote sensing.
I will first present the mixing model and its assumptions, then the
principle of the "unmixing" methods and their limitations. The renewed
interest in this subject (since 1990) has led to a modernisation of
principal component analysis, in the form of independent component
analysis. I will describe these methods, and in particular various
algorithms based on local correlations or on higher-order statistics. I
will show some results obtained by applying them to spectral analysis in
nuclear magnetic resonance. I will then show the contribution of this
exploratory statistical tool to the physical description of the radio
source 3C120.
D. Nuzillard, S. Bourg, J.-M. Nuzillard: Model-free analysis of mixtures
by NMR using blind source separation, Journal of Magnetic Resonance 133,
358-363, 1998.
D. Nuzillard & A. Bijaoui: Blind source separation and analysis of
multispectral astronomical images, Astronomy and Astrophysics Supplement
Series 147, November 2000, in press.
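As a hedged end-to-end illustration of independent component analysis (FastICA, one of the higher-order statistics approaches mentioned), the sketch below mixes two synthetic sources with an unknown matrix and recovers them up to permutation and scale; it is generic ICA, not the speaker's NMR or astronomy pipelines.

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)
    sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two sources
    A = np.array([[1.0, 0.5], [0.4, 1.0]])                  # unknown mixing
    observed = sources @ A.T                                # sensor signals

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(observed)  # estimated sources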
General circulation models (GCMs), which are used to study the evolution
of the Earth's climate, require data from which the optical characteristics
of clouds (effective radius of the cloud droplets and optical thickness)
can be computed on a global scale. The imagers carried by meteorological
satellites make it possible to determine these parameters. Nevertheless,
the algorithms used for this determination are often intuitive and/or
empirical in nature, which restricts their domain of applicability. We will
present a method for determining the optical characteristics of clouds that
rests on a theoretically grounded analysis and that simultaneously uses the
images transmitted in the infrared by the GOES satellite and those
transmitted in the microwave domain by a satellite of the DMSP series. The
method is validated by comparison with the results of the observations
collected during the International Satellite Cloud Climatology Project.