
Seminars
The Ariana project seminars take place at
INRIA Sophia Antipolis (map);
the room, as well as the abstracts (in French
and/or English), is posted as soon as possible.
If you wish, you can consult the seminar schedules
of previous years:
2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, 2002, 2001, 2000, 1999, and 1998. Past seminars of the Ariana project:
Title 
Speaker 
Date/Location 
Abstract 
How will Global Warming affect Sophia Antipolis? 
James B. PAWLEY Professor, University of Wisconsin, USA 
14/12/2009 14h30 Euler violet 

Abstract (English):
Climate change is a unique societal challenge in many ways. Climate determines temperature and rainfall, the factors every ecologist knows constrain the types of ecosystem that can exist in any specific area. The same variables also determine whether agriculture can be productive. It is now clear that humans are changing the global climate by modifying the radiative properties of the planet. This has been done by introducing greenhouse gases into the atmosphere or by changing the planetary albedo. Failure to act vigorously and soon to reduce this interference may lead to climate change severe enough to compromise the human food supply, thereby placing the very survival of our civilization at risk.
This problem is complicated by the fact that almost every facet of modern industrial society depends on the consumption of huge amounts of energy, most of which is produced by burning fossil fuels, a process that generates greenhouse gases. If we are to escape the spectre of climate chaos, we will have to re-evaluate almost every aspect of modern society in order to reduce energy use, and we must do this on a time scale of a decade or at most two. Previous societal shifts, such as the invention of agriculture or the advent of carbon-based industrial society, took place over centuries. 

On the notion of prediction error for image and video compression 
Philippe SALEMBIER Professor, Polytechnic University of Catalonia, Spain 
07/12/2009 14h30 salle Coriolis 

Abstract (English):
In this talk, recent and ongoing research on the notion of prediction error in the context of image and video compression is presented. The prediction error is usually defined as the difference between the original signal and a certain prediction of this signal. As this notion is central to almost all compression techniques, we investigate whether the definition of the prediction error can be modified so that better rate-distortion performance can be achieved. Two cases will be presented.
The first one deals with still image compression. Starting from the idea of wavelet decomposition implemented via a lifting scheme, a new transform called Generalized Lifting will be defined. The key to the new transform is a redefinition of the prediction error calculation in the lifting scheme. We will stress that the Generalized Lifting depends on a statistical characterization of the signal to encode, and we will investigate several cases and scenarios. Experimental results will illustrate the interest of the approach.
In the second part of the talk, we will transpose these ideas to the problem of video compression. In this context, the notion of prediction error is also central, as it is the basis of the motion-compensated video codecs on which all current standards are based. We will report on recent results showing that these ideas may be promising for video compression. 
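As a toy illustration of the lifting idea this abstract builds on, the sketch below implements a plain integer predict/update lifting step (a simple Haar-style scheme, not the Generalized Lifting of the talk): the detail signal is exactly the prediction error, and the transform is perfectly invertible.

```python
# Plain integer lifting step, shown only to illustrate how the
# "prediction error" arises in a lifting scheme.
def lifting_forward(x):
    even, odd = x[0::2], x[1::2]
    # Predict: estimate each odd sample from its even neighbour;
    # the residual is the prediction error (detail coefficients).
    detail = [o - e for o, e in zip(odd, even)]
    # Update: smooth the even samples into approximation coefficients.
    approx = [e + d // 2 for e, d in zip(even, detail)]
    return approx, detail

def lifting_inverse(approx, detail):
    # Undo the update step, then the predict step: perfect reconstruction.
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

signal = [10, 12, 14, 13, 9, 8, 7, 11]
approx, detail = lifting_forward(signal)
assert lifting_inverse(approx, detail) == signal  # perfect reconstruction
```

A sparser detail signal (a smaller prediction error) is what makes the subsequent entropy coding cheaper; Generalized Lifting replaces the fixed predictor above with a signal-adapted one.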

Internet-inspired 3-dimensional modeling of the human habitat 
Franz LEBERL Professor, Graz University of Technology, Austria 
16/11/2009 14h30 Euler Violet 

Abstract (English):
The creation and upkeep of a global “Exabyte 3D World Model” may well be an important task for
novel visual computing at its intersection with map-making and the Internet. At its inception stood the
idea of a “Virtual Earth”, understood to be a 3D geometry model of the World, accessible through the
Internet and supporting location-aware Internet applications. The concept was introduced by
Microsoft’s Bill Gates in 2005, and considerable resources have been invested to implement it in the
form of photo-textured triangulated point clouds of the Bald Earth and the associated vertical objects.
Moving forward, this may well morph into a 3D model of the global “Human Habitat”. A first difference
is the focus on human-scale objects as they are experienced in urban streets, shops, and landmark
buildings. A second difference is a transition from point clouds (“eye candy”) to semantically
interpreted and searchable objects. A third difference is the changeover from collecting information by
industrial systems towards reliance on “crowd-sourcing”. What then are the source data for such
an ambitious undertaking? What are the computer vision methods? Why is this being investigated?
What is the grand vision? Who is doing this?
These questions are addressed in the talk, based on insights gained through interactions with
Microsoft’s Virtual Earth initiative (now Bing Maps), and through a continuous string of university
research projects since the early 1990s. 

Unsupervised change-detection methods for single-channel and multi-channel SAR images 
Gab MOSER Postdoctoral researcher, DIBE, University of Genoa, Italy 
05/11/2009 15h30 Kahn building, amphithéâtre Morgenstern 

Abstract (English):
In applications related to environmental monitoring and disaster management, synthetic aperture radar (SAR) presents great potential thanks to its insensitivity to atmospheric and Sun-illumination conditions. This is further reinforced by current SAR missions (i.e., COSMO-SkyMed and TerraSAR-X), which allow multitemporal very high-resolution images to be acquired with very short revisit times (down to around 12 hours). However, exploiting this potential requires accurate and automatic techniques to generate change maps from SAR images collected over the same geographic region at different times.
After recalling the basic ideas of unsupervised change detection with SAR data, three methods addressing this problem for single-channel and multi-channel (i.e., multi-polarization and/or multi-frequency) SAR are presented in the seminar. First, a non-contextual method based on the integration of minimum-error unsupervised thresholding with non-Gaussian SAR-specific probability density models and Mellin-transform-based statistics is described. Then, two contextual methods, based on the integration of Markov random fields with a SAR-specific modified Fisher transform and with multi-source data-fusion concepts, respectively, are presented. Experimental results with medium-resolution SAR and very high-resolution COSMO-SkyMed images related to different application scenarios are discussed. 
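For readers unfamiliar with the problem, the usual starting point is a log-ratio image thresholded into a change map. The sketch below uses a simple mean-plus-k-sigma rule on synthetic data (an illustrative stand-in, not the minimum-error thresholding or contextual models of the talk; all parameter values are hypothetical):

```python
import numpy as np

def change_map(img1, img2, k=2.0):
    # Log-ratio damps multiplicative speckle; epsilon guards empty pixels.
    score = np.abs(np.log((img2 + 1e-6) / (img1 + 1e-6)))
    # Ad hoc global threshold: mean + k standard deviations of the score.
    return score > score.mean() + k * score.std()

rng = np.random.default_rng(0)
before = rng.gamma(shape=4.0, scale=25.0, size=(64, 64))  # speckle-like intensities
after = before.copy()
after[20:30, 20:30] *= 8.0        # simulated change in backscatter
mask = change_map(before, after)  # True exactly on the changed block here
```

In real multitemporal SAR pairs both images carry independent speckle, which is why the talk's statistically grounded threshold selection matters.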

Advanced 4D microscopy to study trafficking and spatiotemporal organization of intracellular membranes at the single-cell level 
Jean SALAMERO Directeur de Recherche, Institut Curie, Paris 
12/10/2009 14h30 Salle Coriolis 

Abstract (English):
The study of membrane plasticity and the role of molecular “machines” in the control of the biogenesis of endocellular membranes has highlighted the crucial role of the “Rab” GTPase family as organizing centers of functional molecular platforms (1). Yet, to understand the regulation and coordination of these molecular assemblies, which are responsible for intracellular dynamic architectures, a more global vision is needed, together with the development and correlation of approaches at different spatial and temporal scales.
Our current objectives are:
1) Expand our research on individual known partners of Rab proteins in the endocytic recycling pathway (2,3). Focused biological models will be presented and will serve as guidelines (4,5).
2) Consider the mosaic of "Rab domains" and their progressive conversion depending on the dynamics of fluorescently labeled cargos (4).
3) In these in vivo biological contexts, develop and focus on dynamic studies and approaches (6,7,8) aimed at measuring and understanding the multispecific nature of these complexes.
4) Model and assess the functional coordination between the different molecular machineries involved in the biogenesis and stability of cellular membranes and their regulation.
Considering the "fickle" nature of such dynamic architectures, the current performance of image acquisition systems, and the analytical tools at our disposal, many technological challenges must be overcome. The dynamic aspects of the perspectives described above require conceptual developments, particularly in the field of microscopy imaging. Moreover, to extract maximum information from the same sample, the development of an adapted microscopy correlating different modalities is needed. Last but not least, accurate image descriptors, allowing automatic detection and classification of molecular behavior in space and time, are indispensable.
(1) Stenmark H. Rab GTPases as coordinators of vesicle traffic. Nat Rev Mol Cell Biol. 2009;10(8):513-25.
(2) Pasqualato S, Senic-Matuglia F, Renault L, Goud B, Salamero J, Cherfils J. The structural GDP/GTP cycle of Rab11 reveals a novel interface involved in the dynamics of recycling endosomes. J Biol Chem. 2004;279(12):11480-8.
(3) Miserey-Lenkei S, Waharte F, Boulet A, Cuif MH, Tenza D, El Marjou A, Raposo G, Salamero J, Héliot L, Goud B, Monier S. Rab6-interacting protein 1 links Rab6 and Rab11 functions. Traffic. 2007;(10):1385-403.
(4) Mc Dermott R, Ziylan U, Spehner D, Bausinger H, Lipsker D, Mommaas M, Cazenave JP, Raposo G, Goud B, de la Salle H, Salamero J, Hanau D. Birbeck granules are subdomains of the endosomal recycling compartment in human epidermal Langerhans cells, which form where Langerin accumulates. Mol Biol Cell. 2002;(1):317-35.
(5) Uzan-Gafsou S, Bausinger H, Proamer F, Monier S, Lipsker D, Cazenave JP, Goud B, de la Salle H, Hanau D, Salamero J. Rab11A controls the biogenesis of Birbeck granules by regulating Langerin recycling and stability. Mol Biol Cell. 2007;(8):3169-79.
(6) Racine V, Sachse M, Salamero J, Fraisier V, Trubuil A, Sibarita JB. Visualization and quantification of vesicle trafficking on a three-dimensional cytoskeleton network in living cells. J Microsc. 2007;225:214-28.
(7) Pécot T, Kervrann C, Bardin S, Goud B, Salamero J. Patch-based Markov models for event detection in fluorescence bioimaging. Med Image Comput Comput Assist Interv. 2008;11(Pt 2):95-103.
(8) Chessel A, Cinquin B, Bardin S, Salamero J, Kervrann C. Computational geometry for convexity-based geometrical scale-spaces and modal decompositions. Applications to light microscopy video imaging. SSVM 2009, Lecture Notes in Computer Science (LNCS) 5567, pp. 770-781.


Neighborhood-wise multiscale decision fusion for redundancy detection in image pairs 
Charles KERVRANN Senior Research Scientist, IRISA / INRA Jouy-en-Josas 
12/10/2009 16h00 Salle Coriolis 

Abstract (English):
To develop the unsupervised change detection algorithms required for investigations in video-microscopy and cell biology, new models able to capture the spatiotemporal regularities and geometries present in an image pair are needed. In contrast to the usual pixel-wise methods and Markov random field methods, we propose a patch-based formulation for modeling semi-local interactions and detecting local or regional changes. By introducing scores (dissimilarity measures) to compare patches and binary local decisions, we design collaborative decision rules that use the total number of detections made by individual neighboring pixels, for different patch sizes. In the first part of the talk, we will describe the patch-based representation for image pair analysis and present collaborative decision rules in neighborhoods. In the second part, we present the whole approach and a patch-space framework to fuse binary decisions with statistical tests, at different spatial scales. We revisit the usual L2 distance and present other dissimilarity measures that are more robust to gradual variations in appearance. Finally, we present the practical algorithm and analyze its properties. Experimental results in video-microscopy (TIRF imaging) demonstrate that the detection algorithm (with no optical flow computation) performs well at detecting meaningful changes and appearing/disappearing spots at the cell membrane. We also illustrate the approach on image pairs for other computer vision applications, including video-surveillance, and for a variety of illumination conditions and signal-to-noise ratios. 
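A stripped-down sketch of the patch-comparison-and-fusion idea: compare co-located patches with an L2 score at several patch sizes, then fuse the binary decisions by majority vote (ad hoc thresholds and a plain vote standing in for the talk's collaborative rules and statistical tests):

```python
import numpy as np

def patch_change_map(a, b, radii=(1, 2, 3), tau=20.0):
    # Compare co-located patches of several sizes; fuse decisions per pixel.
    h, w = a.shape
    votes = np.zeros((h, w), dtype=int)
    for r in radii:
        decided = np.zeros((h, w), dtype=bool)
        for i in range(r, h - r):
            for j in range(r, w - r):
                pa = a[i - r:i + r + 1, j - r:j + r + 1]
                pb = b[i - r:i + r + 1, j - r:j + r + 1]
                # Per-pixel mean squared difference over the patch
                decided[i, j] = np.mean((pa - pb) ** 2) > tau
        votes += decided
    return votes > len(radii) // 2  # majority fusion across patch sizes

a = np.zeros((32, 32))
b = a.copy()
b[10:18, 10:18] = 10.0  # an appearing spot in the second image
mask = patch_change_map(a, b)
```

The multi-scale vote is what gives robustness: a spurious single-scale detection is outvoted, while a genuine spot triggers detections at every patch size.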

A New Method of Approximate Bayesian Inference on Diffusion Process Parameters 
Simon WILSON Lecturer, School of Computer Science and Statistics, Trinity College Dublin, Ireland 
27/07/2009 14h30 salle Euler bleu 

Abstract (English):
Most diffusion processes are observed only at discrete time intervals, making both likelihood-based and Bayesian methods of inference non-trivial. To overcome this problem, Bayesian inference is centred on introducing m latent data points between every pair of observations. However, it has been shown that as m increases, one can make very precise inference about the diffusion coefficient of the process via the quadratic variation. This dependence results in slow mixing of naive MCMC schemes, which worsens linearly as the amount of data augmentation increases. Various approaches have been proposed to get around this problem. Some of them involve transforming the SDE, while most others present innovative MCMC schemes.
We propose a new method for approximate Bayesian inference on the diffusion process parameters. Our method is simple, computationally efficient, does not involve any transformations, and is not based on the MCMC approach. The principal features of this new method are the Gaussian approximation proposed by Durham and Gallant (2002) and a grid search to explore the parameter space. In this talk we first introduce our new method and then compare its performance with recently proposed MCMC-based schemes on several diffusion processes. 
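To make the "approximate transition density plus grid search" idea concrete, here is a sketch for an Ornstein-Uhlenbeck-type diffusion using the plain Euler Gaussian approximation (not the Durham-Gallant bridge construction itself; all parameter values are illustrative):

```python
import numpy as np

# Diffusion dX = -theta*X dt + sigma*dW observed at step dt; infer theta
# by maximizing a Gaussian (Euler) approximation of the transition density
# over a grid of candidate values.
rng = np.random.default_rng(1)
theta_true, sigma, dt, n = 1.5, 0.5, 0.05, 4000
x = np.empty(n)
x[0] = 1.0
for t in range(n - 1):  # simulate the process on a fine grid
    x[t + 1] = x[t] - theta_true * x[t] * dt + sigma * np.sqrt(dt) * rng.normal()

def euler_loglik(theta):
    # Euler scheme: X_{t+1} | X_t ~ N(X_t - theta*X_t*dt, sigma^2*dt);
    # constants not depending on theta are dropped.
    resid = x[1:] - (x[:-1] - theta * x[:-1] * dt)
    return -0.5 * np.sum(resid ** 2) / (sigma ** 2 * dt)

grid = np.linspace(0.1, 4.0, 200)
theta_hat = grid[np.argmax([euler_loglik(th) for th in grid])]  # near theta_true
```

With sparse observations the crude Euler density above becomes biased, which is exactly where the Durham-Gallant refinement discussed in the talk comes in.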

Mean-shift for shape inference 
Rozenn DAHYOT Lecturer, School of Computer Science and Statistics, Trinity College Dublin, Ireland 
06/07/2009 14h30 salle Euler bleu 

Abstract (English):
My talk will present some recent results on shape inference using kernel modelling and the associated gradient ascent technique, mean-shift, to find the relevant modes. This will be illustrated with a new framework for performing the Hough transform to infer lines and hyperplanes (i.e., robust
multiple regression). The approach will also be illustrated by some recent results on estimating the (3D) convex hull of an object using multiple
camera views.
References:
- Statistical Hough Transform, R. Dahyot, PAMI (2009), to appear.
- Mean-shift for Statistical Hough Transform, R. Dahyot, Technical report, Statistics Department, Trinity College Dublin, Ireland, 2009. 
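A minimal version of the mean-shift ascent used throughout the talk, here on 1D samples with a Gaussian kernel (the talk applies the same iteration in a Hough-style parameter space; bandwidth and data are illustrative):

```python
import numpy as np

def mean_shift(samples, start, bandwidth=0.5, tol=1e-6, max_iter=500):
    # Iterate a kernel-weighted average of the samples until the shift is
    # tiny; the fixed point is a mode of the kernel density estimate.
    x = float(start)
    for _ in range(max_iter):
        w = np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)  # Gaussian weights
        x_new = np.sum(w * samples) / np.sum(w)              # weighted mean
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 0.3, 300), rng.normal(5.0, 0.3, 100)])
mode = mean_shift(data, start=1.0)  # converges to the mode near 0
```

Starting points in different basins converge to different modes, which is how multiple lines or hyperplanes are recovered from one kernel density.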

View Invariance and Its Role in Human Pose and Action Recognition 
Hassan FOROOSH Associate Professor, Computational Imaging Laboratory, University of Central Florida, USA 
06/07/2009 16h00 salle Euler bleu 

Abstract (English):
Recognizing human body pose and action in video data is an important and challenging problem in computer vision, with emerging and demanding applications in surveillance and human-computer interaction. A major difficulty in this problem is the degree of variability in the appearance of the human body as a function of viewing angle and camera parameters. Three main approaches have been considered in the literature to tackle view and camera variations: (i) machine learning approaches, typically based on a large database of views from many possible viewing angles and cameras; (ii) use of multiple views to acquire 3D structure; (iii) investigation of geometric invariants. In this talk I will briefly review the pros and cons of each of these three approaches and propose new geometric invariants for the motion of an articulated body, e.g. a human body, viewed by a stationary camera. It is shown that these invariants provide remarkably good results under large variations of camera parameters and viewing directions. I will discuss the results in detail and propose some future directions. 

The SURE-LET Methodology: A Prior-Free Approach to Image Denoising 
Thierry BLU Associate Professor, Chinese University of Hong Kong 
29/06/2009 14h30 salle Euler bleu 

Abstract (English):
A novel methodology for restoring signals/images from noisy measurements will be presented. Contrary to the usual approaches (Bayesian, sparsity-based), there is no prior modeling of the noiseless signal. Instead, it is the reconstruction algorithm itself that is parametrized, or approximated (using a Linear Expansion of Thresholds: LET).
These parameters are then optimized by minimizing an estimate of the MSE between the (unknown) noiseless signal and the one processed by the algorithm. Surprisingly, it is possible to build such an estimate, Stein's Unbiased Risk Estimate (SURE), using the noisy signal only, and without making any hypothesis on the noiseless signal. The only hypothesis is on the statistics of the noise (additive, Gaussian).
Examples of image denoising are shown to validate the efficiency of this methodology. 
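The core mechanism can be sketched in a few lines for the special case of soft thresholding: SURE predicts the MSE from the noisy data alone, so the best threshold can be picked without ever seeing the clean signal. (This is the classical SURE of Donoho and Johnstone for a single threshold, not the full SURE-LET machinery; the 1D signal and parameters are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 2000, 1.0
clean = np.zeros(n)
clean[:100] = 8.0                      # sparse "signal" coefficients
noisy = clean + sigma * rng.normal(size=n)

def sure_soft(t):
    # Stein's Unbiased Risk Estimate of ||denoised - clean||^2 for the
    # soft-threshold estimator, valid for additive Gaussian noise.
    return (n * sigma**2
            + np.sum(np.minimum(np.abs(noisy), t) ** 2)
            - 2 * sigma**2 * np.sum(np.abs(noisy) <= t))

ts = np.linspace(0.1, 5.0, 100)
t_best = ts[np.argmin([sure_soft(t) for t in ts])]   # no clean signal needed
denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - t_best, 0.0)
mse = np.mean((denoised - clean) ** 2)  # SURE/n tracks this oracle quantity
```

SURE-LET generalizes this by writing the denoiser as a linear combination of several thresholding functions and solving for the combination weights in closed form.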

Bayesian Search for Shapes in Cluttered Image Primitives 
Anuj SRIVASTAVA Professor, Dept. of Statistics, Florida State University, USA 
08/06/2009 14h30 Galois building, salle Coriolis 

Abstract (English):
The problem of recognizing shapes in given images is important in many branches of science. A subproblem in this area is to study image primitives (points, edges, corners, etc.) and form useful shape hypotheses
by connecting suitable subsets of primitives. For example, one can look for shapes by selecting and connecting points in a point cloud.
We model these primitives as sampled contours that are corrupted by clutter and observation noise. Taking an analysisbysynthesis approach, we simulate
highprobability configurations of sampled contours using shape models learnt from training data to evaluate the given test data. To facilitate
simulations, we develop statistical models for sources of (nuisance) variability: (i) shape variations within classes, (ii) variability in
sampling continuous curves, (iii) pose and scale variability, (iv) observation noise, and (v) points introduced by clutter. The variability in
sampling closed curves into finite point sets is represented either by (a) positive diffeomorphisms of a unit circle or (b) a Poisson process. Using a
Monte Carlo approach, we simulate configurations from a joint prior on the shapesample space and compare them to the data using a likelihood
function. Average likelihoods of simulated configurations lead to estimates of posterior probabilities of different classes and, hence, Bayesian
classification. 

Non-Stationary Image Formation: A State-Space Approach with Astrophysical Applications 
Farzad KAMALABADI Associate Professor, University of Illinois at Urbana-Champaign, USA 
08/06/2009 16h00 Galois building, salle Coriolis 

Abstract (English):
The statistical inference of a hidden Markov random process is a problem encountered in numerous signal processing applications including dynamic tomography. In dynamic tomography, the goal is to form images of an object that changes in time from its projection measurements. This talk focuses on the case where the object's temporal evolution is significant and governed by a complex physical model.
The proposed method is based on a statespace formulation which provides a natural and general statistical framework for the systematic reconstruction of dynamic objects when faced with inevitable measurement and modeling
uncertainties. The image reconstruction method is based on a sequential Monte Carlo approach that scales to meet the computational demands of high-dimensional computed imaging problems such as dynamic tomography. Furthermore, a method is introduced for dynamic tomography that has the same computational complexity as filtered backprojection. In addition to a rigorous characterization of the convergence properties of the proposed method, the application of the high-dimensional inference technique is illustrated in the remote sensing problem of 3D reconstruction of the solar atmosphere. The talk will describe the entire image formation process, from the space sensors to the inference of the physical parameters. 
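To fix ideas on the sequential Monte Carlo machinery, here is a minimal bootstrap particle filter on a scalar linear-Gaussian toy model (the dynamic-tomography setting replaces the scalar state with an image and the identity observation with a projection operator; all values are illustrative):

```python
import numpy as np

# Hidden process x_t = a*x_{t-1} + w_t, observed as y_t = x_t + v_t.
rng = np.random.default_rng(6)
a, q, r, T, n_part = 0.9, 0.3, 0.5, 100, 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):  # simulate truth and measurements
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
    y[t] = x[t] + np.sqrt(r) * rng.normal()

particles = np.zeros(n_part)
estimates = np.zeros(T)
for t in range(1, T):
    # Propagate particles through the dynamics (the "state equation").
    particles = a * particles + np.sqrt(q) * rng.normal(size=n_part)
    # Weight by the observation likelihood, then normalize.
    w = np.exp(-0.5 * (y[t] - particles) ** 2 / r)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)          # posterior-mean estimate
    particles = particles[rng.choice(n_part, size=n_part, p=w)]  # resample
```

The filtered estimate beats the raw measurements because it fuses the observation with the dynamical model, which is precisely the leverage a physical evolution model provides in dynamic tomography.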

Recent Developments in Iterative Thresholding Algorithms 
Mario FIGUEIREDO Associate Professor, Instituto de Telecomunicações, Lisbon, Portugal 
11/05/2009 14h30 salle Coriolis 

Abstract (English):
Iterative shrinkage/thresholding (IST) algorithms are important elements of the computational toolbox used in signal processing problems where sparse signal representations are sought. Examples include compressed sensing and signal/image deconvolution.
IST algorithms are typically used to address unconstrained minimization formulations, where the objective function includes a quadratic data term
(corresponding to a linear observation model under Gaussian noise) and a non-quadratic regularizer (such as an l1 norm or a TV norm). In this talk,
after briefly reviewing the several ways in which IST algorithms can be derived, as well as several convergence results, I will present some
recent advances: (a) new ways to derive IST-like algorithms, (b) new accelerated versions of IST, and (c) new IST-type algorithms tailored to non-Gaussian observation noise models. 
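The basic IST iteration for the l1-regularized formulation described above can be written in a few lines (a textbook sketch with illustrative parameters, not the accelerated or non-Gaussian variants of the talk):

```python
import numpy as np

def ist(A, y, lam, n_iter=500):
    # Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by alternating a gradient
    # step on the quadratic term with soft thresholding (the prox of l1).
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 100)) / np.sqrt(60)   # underdetermined sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]         # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = ist(A, y, lam=0.05)                    # recovers the sparse support
```

The accelerated schemes in the talk (e.g. two-step and continuation variants) keep exactly this shrinkage structure but reuse previous iterates to speed up convergence.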

VHSR imagery potential for cleared shrubland detection in operational fire management 
Vincent THIERION PhD Student, Centre Environnement et Risques, École des Mines d'Alès 
27/04/2009 Coriolis 

Abstract (English):
The French Mediterranean region is subject to frequent forest fires during the summer period. The long-term experience of firemen in fire fighting has made it possible to limit major disasters and protect local populations.
This experience is based on continuous prevention, including:
o Forestry planning,
o Firebreak zone management,
o Vegetation dynamics surveys,
o Continuous monitoring by operational teams (patrols, watchtowers…)
The DFCI actions (Défense de la Forêt Contre les Incendies, forest fire prevention actions) aim at pursuing these objectives by implementing and maintaining cleared tracks inside forest zones. These tracks can represent a very large total length (about 8000 km) for one department (Var: 6032 km²). The cleared area on both sides of the tracks is about 20 m wide, and its vegetation needs to be kept as low as possible to allow maximum security for fighting vehicles. Due to difficult access and the large size of the areas to survey, new Earth observation technologies such as remote sensing are often needed to efficiently assess fuel potential.
Since 2003, the CNES (Centre National d'Études Spatiales) has been developing the ORFEO Accompaniment Program (Optical and Radar Federated Earth Observation), which was set up to prepare, accompany and promote the use and exploitation of images derived from the Pléiades and COSMO-SkyMed sensors. The main objective of this program is to assess the new capabilities and performance of the ORFEO systems thanks to simulated satellite images quite similar to Pléiades, such as QuickBird images.
Recently, Geographic Information System capabilities have enhanced fire brigade missions, especially forest fire management. As a new possibility, remote-sensing technology and high-resolution satellite image analysis seem to allow more accurate and efficient monitoring of forest vulnerability based on vegetation structure.
Within this framework, it seemed interesting and valuable to develop a study, according to operational requirements, to precisely assess the potential of very high spatial resolution (VHSR) images to spatialize the vegetation's vulnerability to fire and its annual evolution.
In order to reach these objectives, a pair of high-resolution QuickBird satellite images (2006 and 2008) of the Massif des Maures site (in southeastern France) has been provided by the ORFEO program.
Based on these products, a methodological approach has been developed to precisely detect shrubland clearing states in order to direct operational clearing actions.
This research pursues two main objectives:
o After characterizing the physiognomies of cleared areas, according to firemen's operational requirements and ecological structure, a set of methodologies has been developed to extract the defined ecological typology (pixel and object-oriented classifications).
o Based on the results of the first phase, an operational prototype is presently being designed and implemented using the Orfeo Toolbox (OTB), originally designed for simple image processing operations.
The final objective of this study is, after assessing the real potential of the ORFEO products for shrubland clearing detection, to develop a prototype dedicated to automatic clearing-level assessment for firemen's operations. 

Combinatorial and parallel programming points of view for Markovian energy minimization 
Jérôme DARBON Research Scientist, UCLA Mathematics Department, Los Angeles, USA 
20/03/2009 14h30 Salle Coriolis 

Abstract (English):
Many image processing problems can be formulated as the minimization of a Markovian energy. In this talk, a combinatorial point of view is considered. I focus on the minimization of the Total
Variation with convex data fidelity terms, both from a continuous and a discrete point of view. Two algorithms are presented: a) a purely combinatorial algorithm relying on parametric maximum-flow in a network, and b) a combinatorial approximation algorithm that allows an extremely efficient parallel implementation.
Several applications such as crystalline mean curvature flow, deconvolution and compressive sensing reconstruction are also presented. 
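For intuition about the model being minimized, here is a tiny 1D Total Variation denoising sketch solved by projected gradient ascent on the dual (a generic continuous method, not the parametric maximum-flow or parallel algorithms of the talk; parameters are illustrative):

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=3000):
    # Minimize 0.5*||x - y||^2 + lam*TV(x) via its dual: one bounded dual
    # variable per neighbouring pair, with |p_i| <= 1.
    p = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        # Primal iterate recovered from the dual: x = y - lam * D^T p.
        x = y - lam * (np.concatenate(([0.0], p)) - np.concatenate((p, [0.0])))
        # Ascent step then projection onto the box; 1/(4*lam) since ||D||^2 <= 4.
        p = np.clip(p + np.diff(x) / (4.0 * lam), -1.0, 1.0)
    return y - lam * (np.concatenate(([0.0], p)) - np.concatenate((p, [0.0])))

rng = np.random.default_rng(5)
truth = np.repeat([0.0, 4.0, 1.0], 50)           # piecewise-constant signal
noisy = truth + 0.5 * rng.normal(size=truth.size)
smooth = tv_denoise_1d(noisy, lam=2.0)           # flat plateaus, sharp jumps
```

The combinatorial algorithms of the talk reach the exact discrete minimizer of the same kind of energy via maximum-flow, typically far faster than such first-order iterations.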

Lidar remote sensing of forests: an overview 
Cédric VEGA Postdoc, UMR TETIS Cemagref-Cirad-ENGREF, Maison de la Télédétection en Languedoc-Roussillon, Montpellier 
16/03/2009 Coriolis 

Abstract (English):
Light detection and ranging (lidar) is an airborne technique used to produce dense and accurate measurements of the elevation of the Earth's surface using a combination of laser ranging, GPS and inertial navigation systems. In forest environments, lidar data allow the acquisition of both ground elevation and vegetation height data, owing to the capacity of laser pulses to penetrate even dense canopies.
The seminar will provide an overview of lidar technologies, emphasising forest applications. The first part of the talk will introduce lidar principles and illustrate some of the main point cloud classification methods. Then, techniques for computing and analysing digital terrain and canopy models will be introduced, focusing on methods for extracting forest parameters at both the tree and plot levels. 
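The standard terrain/surface/canopy computation mentioned above can be sketched as follows: rasterize ground returns into a digital terrain model (DTM), highest returns into a digital surface model (DSM), and subtract to get the canopy height model (CHM). (A toy rasterization on hypothetical points; production tools grid and interpolate far more carefully.)

```python
import numpy as np

def canopy_height_model(points, cell=1.0, nx=10, ny=10):
    # points: (x, y, z, is_ground) tuples; one min/max value kept per cell.
    dtm = np.full((ny, nx), np.nan)  # lowest ground return per cell
    dsm = np.full((ny, nx), np.nan)  # highest return of any kind per cell
    for x, y, z, is_ground in points:
        i, j = int(y // cell), int(x // cell)
        if is_ground:
            dtm[i, j] = z if np.isnan(dtm[i, j]) else min(dtm[i, j], z)
        dsm[i, j] = z if np.isnan(dsm[i, j]) else max(dsm[i, j], z)
    return dsm - dtm  # canopy height; NaN where no ground return exists

pts = [(2.5, 2.5, 102.0, True), (2.6, 2.4, 120.5, False),  # ground + crown hit
       (7.5, 7.5, 98.0, True)]                             # bare ground
chm = canopy_height_model(pts)  # 18.5 m canopy in one cell, 0 m in the other
```

Tree- and plot-level forest parameters (height percentiles, crown detection) are then derived from exactly this CHM raster.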

Marked point processes and pattern recognition 
Radu STOICA Assistant Professor, Université de Lille 
16/02/2009 CANCELED 

Abstract (English):
This talk starts with the presentation of some fundamental notions of marked point processes: definition, simulation and statistical inference.
Then, all these elements are incorporated into a statistical methodology aimed at pattern recognition. The presented "machinery" is then tested on several data sets such as digital images, environmental data and cosmological catalogs. The pertinence of the results is also discussed.
Finally, conclusions and perspectives are outlined. 

Ecological insights given by 2D spectral analysis 
Nicolas BARBIER Postdoc, Université Libre de Bruxelles, Belgium 
12/01/2009 14h30 INRIA Sophia Antipolis, salle Coriolis 

Abstract (English):
Spectral analysis allows the characterization of temporal (1D) or spatial (2D) patterns in terms of their scale (frequency) distribution. It is also possible, thanks to cross-spectral analysis, to carry out independent correlation analyses between two variables at different scales. These well-grounded approaches have rarely been applied to two-dimensional ecological datasets. In this contribution we illustrate the potential of the method to characterize, classify and compare the patterns of biological and physical variables.
Examples will be given from our research in both semi-arid and rainforest vegetation contexts, with a particular emphasis on the link between emergent properties at the ecosystem scale and plant allometries. 
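The basic 2D spectral characterization can be sketched as follows: take the FFT power spectrum of a synthetic banded "vegetation" pattern and radially average it, so the dominant spatial scale shows up as a spectral peak (a generic illustration, not the authors' pipeline):

```python
import numpy as np

n = 64
xx = np.tile(np.arange(n), (n, 1))
pattern = np.sin(2 * np.pi * xx / 8.0)  # stripes with an 8-pixel period
power = np.abs(np.fft.fft2(pattern - pattern.mean())) ** 2

# Radially average the power spectrum: mean power per integer wavenumber ring.
k = np.fft.fftfreq(n) * n               # integer wavenumbers along each axis
r = np.sqrt(k[None, :] ** 2 + k[:, None] ** 2).round().astype(int)
radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
peak_wavenumber = int(np.argmax(radial[1:]) + 1)  # skip the DC ring
# peak_wavenumber == n / 8, i.e. 8 cycles across the image: the banding scale
```

Cross-spectral analysis extends this by comparing the phases and magnitudes of two such spectra ring by ring, giving scale-resolved correlations between variables.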

