|
Seminars
The seminars of the Ariana project take place at
INRIA Sophia Antipolis (map);
the room and the abstracts (in French
and/or English) are posted as soon as possible.
If you wish, you can consult the seminar schedules
of previous years:
2024, 2023, 2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, 2002, 2001, 2000, 1999, and 1998. Past seminars of the Ariana project:
Title |
Speaker |
Date/Location |
Abstract |
Fully Bayesian Source Separation with Application to the Cosmic Microwave Background |
Simon Wilson, Senior Lecturer, Trinity College, Dublin, Ireland |
17/12/2007 |
|
Abstract (English):
Blind source separation refers to inferring the values of sources from observations that are linear combinations of them. Both the sources and the matrix of linear 'mixing' coefficients may be unknown. Here we describe an approach where the sources are assumed to be Gaussian mixtures, which may be independent or dependent. An MCMC procedure has been developed that implements a fully Bayesian treatment, i.e. it computes the posterior distribution of the sources, their Gaussian mixture parameters and the matrix of linear coefficients from the data.
The method is applied to recovery of the cosmic microwave background (CMB). The CMB is one of many sources of extraterrestrial microwave radiation and we observe a weighted sum of them from the Earth at different frequencies. Its accurate reconstruction is of great interest to astronomers and physicists since knowledge of its properties, and in particular its anisotropies, will place strong restrictions on current cosmological theories. From the perspective of a Bayesian solution, this application is interesting as there is considerable prior information about the linear coefficients and the sources. Results from the analysis of data from the WMAP satellite will be presented, where microwave radiation is observed at 5 frequencies and separated into sources, including the CMB. A discussion of the many outstanding issues in this problem is also presented. |
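As a toy illustration of the linear mixing model underlying blind source separation (not the fully Bayesian MCMC procedure of the talk), the sketch below draws one Gaussian-mixture source and one Gaussian source, mixes them with a known matrix, and recovers them by least squares; all dimensions and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sources: a two-component Gaussian mixture and a plain Gaussian,
# crude stand-ins for the CMB and a foreground component.
n = 1000
s1 = np.where(rng.random(n) < 0.5,
              rng.normal(-2.0, 0.5, n), rng.normal(2.0, 0.5, n))
s2 = rng.normal(0.0, 1.0, n)
S = np.vstack([s1, s2])                 # sources, shape (2, n)

A = np.array([[1.0, 0.6],               # mixing matrix: one row per channel
              [0.4, 1.0]])
X = A @ S + 0.05 * rng.normal(size=(2, n))   # noisy observed mixtures

# With A known, least squares already separates the sources; the fully
# Bayesian treatment additionally infers A and the mixture parameters.
S_hat = np.linalg.lstsq(A, X, rcond=None)[0]
print(np.max(np.abs(S_hat - S)))        # small residual from the noise term
```

In the talk's setting A is unknown and is sampled jointly with the sources and their mixture parameters from the posterior.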
|
Video Geometry without Shape |
Tamas Sziranyi, Professor, MTA SZTAKI, Hungary |
04/12/2007 14h30 Coriolis |
|
Abstract (English):
Retrieving geometrical information in videos without any a priori information about the image structure or possible shapes: registration through co-motion statistics and focus maps through Bayesian iterations. The structure and objects in video images are often not known, and existing approaches to camera registration or focus estimation in these situations require some a priori knowledge. Here I present two methods for finding structures in video images without any preliminary object definition: co-motion statistics and estimation of a relative focus map.
First, a new motion-based method is presented for automatic registration of images in multi-camera systems, to permit synthesis of wide-baseline composite views. Unlike existing static-image and motion-based methods, our approach does not need any a priori information about the scene, the appearance of objects in the scene, or their motion. We introduce an entropy-based preselection of motion histories and an iterative Bayesian assignment of corresponding image areas. Finally, correlated point histories and data-set optimization lead to the matching of the different views. Another application of co-motion methods is finding the vanishing-point position for planar reflected images or shadows, or the horizontal vanishing line, making use of motion statistics derived from a video sequence. I also present a new automatic solution for finding focused areas based on localized blind deconvolution. This approach makes it possible to determine the relative blur of image areas without a priori knowledge about the images or the shooting conditions. The method uses deconvolution-based feature extraction and a new residual-error-based classification for the discrimination of image areas. We also show the method's usability for applications in content-based indexing and video feature extraction. |
|
Image Interpretation for Remote Sensing Applications |
Véronique Prinet, Associate Professor, National Laboratory of Pattern Recognition, LIAMA, Beijing, China |
12/11/2007 14h30 Euler indigo |
|
Abstract (translated from French):
In this talk, I will first give an overview of the research activities under way in the RSIU group, namely motion analysis in low-resolution images, image indexing, etc. I will then present work on structural change analysis in high-resolution images (Urban 2007). Finally, I will conclude with prospects for new collaborations. |
|
BTF texture modeling and compression |
Michal Haindl, Professor, Department of Pattern Recognition, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic |
29/10/2007 14h30 Coriolis |
|
Abstract (English):
The current state-of-the-art representation for realistic real-world materials in virtual reality applications is the Bidirectional Texture Function (BTF), which describes rough texture appearance variations under varying illumination
and viewing conditions. Such a function consists of thousands of measurements (images) per material sample. The resulting BTF size rules out direct rendering in graphical applications, so some compression of these huge BTF data spaces is unavoidable. In this talk we present several of our results in this area, mainly a probabilistic model-based BTF algorithm that allows efficient, extreme compression with the possibility of fast direct implementation inside the graphics card.
The model offers a huge BTF compression ratio unattainable by any alternative sampling-based BTF synthesis method. Simultaneously, the model can be used to reconstruct missing parts of the BTF measurement space. |
|
Two examples of original approaches in astro-data processing: the problems of unsupervised clustering and sparse image reconstruction |
Olivier Michel, Full Professor, Laboratoire Universitaire d'Astrophysique de Nice |
01/10/2007 14h30 Euler Violet |
|
Abstract (English):
In this presentation, we will focus on two quite different problems motivated by astrophysical data processing.
In the first part, we will present some recent results for unsupervised clustering approaches, using graph-based distances.
These results rely mostly upon some properties of Minimal Spanning Trees and the Prim algorithm. Some motivations based upon the joint use of information theoretic divergence and the graph-based metric will be discussed and illustrated on data such as asteroid spectral reflectances and multi-spectral images from remote sensing. Issues regarding the similarities between the problem encountered in the unsupervised classification approaches and the problem of dimension reduction will be briefly addressed.
In the second part of this talk, some preliminary results obtained in the field of compressed sensing approaches for processing astrophysical data will be presented. We will focus on the potential of these methods for future high-resolution image reconstruction from sets of almost-random observations. Multiple bases interferometry and image reconstruction from sets of low-resolution acquisitions will serve as motivations, and provide good examples for highlighting the main assets and difficulties. |
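A minimal, hypothetical sketch of the MST-based clustering idea mentioned in the first part: build the minimal spanning tree with Prim's algorithm on a complete Euclidean graph, then cut the longest edges so that the surviving connected components become the clusters. The toy point set is an invented example.

```python
import heapq

def prim_mst(points):
    """Lazy Prim's algorithm on the complete Euclidean graph; returns MST edges."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    n = len(points)
    visited = {0}
    heap = [(dist(points[0], points[j]), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while len(visited) < n:
        w, i, j = heapq.heappop(heap)
        if j in visited:
            continue
        visited.add(j)
        edges.append((w, i, j))
        for k in range(n):
            if k not in visited:
                heapq.heappush(heap, (dist(points[j], points[k]), j, k))
    return edges

def mst_clusters(points, k):
    """Drop the k-1 heaviest MST edges; label points by connected component."""
    kept = sorted(prim_mst(points))[:len(points) - k]
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for _, i, j in kept:
        parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = mst_clusters(pts, 2)
print(labels)  # first three points share one label, the last three another
```

The talk goes further by combining such graph-based metrics with information-theoretic divergences; the sketch only shows the MST backbone.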
|
Multiscale Variance Stabilizing Transform for Noise Removal and Spot Detection in Fluorescence Confocal Microscopy |
Bo Zhang, Ph.D. Student, Lab. d'Analyses d'Images Quantitatives, Institut Pasteur, Paris, France |
24/09/2007 14h30 Coriolis |
|
Abstract (English):
Fluorescence confocal microscopy images are contaminated by photon and readout noise, and hence can be modeled by mixed Poisson-Gaussian (MPG) processes. In this work, we propose a variance stabilizing transform (VST) which makes it possible to convert a filtered MPG process into a near-Gaussian process with constant variance. This VST is then combined with the isotropic wavelet transform, leading to a multiscale VST (MS-VST). We demonstrate the usefulness of the MS-VST for noise removal and for spot detection in confocal microscopy. Experiments show that 1) compared with standard denoising methods adopting the simplified assumption of Gaussian or Poisson noise, the MPG model is more realistic and results in higher denoising performance and fewer false positives in detection; 2) the proposed detector provides effective spot extraction from nonuniform backgrounds; 3) for MPG data with low Poisson intensities, MS-VST-based denoising and detection outperform those using the generalized Anscombe transform. |
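For reference, the classical generalized Anscombe transform that the abstract compares against can be sketched in a few lines; the gain, readout-noise level, and intensities below are made-up test values, and the formula is the standard stabilizer for x = gain·Poisson + Gaussian noise.

```python
import numpy as np

def gat(x, gain=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transform: maps x = gain*Poisson(lam) + N(mu, sigma^2)
    to a signal whose noise is approximately Gaussian with unit variance."""
    arg = gain * x + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

# Empirical check: after the transform, the noise std is close to 1
# regardless of the underlying Poisson intensity.
rng = np.random.default_rng(1)
for lam in (10.0, 50.0, 200.0):
    x = 0.5 * rng.poisson(lam, 100_000) + rng.normal(0.0, 2.0, 100_000)
    print(lam, gat(x, gain=0.5, sigma=2.0).std())  # each std is close to 1
```

The MS-VST of the talk couples such a stabilizer with the isotropic wavelet transform, which is what keeps it effective at the low intensities where the plain transform above breaks down.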
|
Quantitative Imaging Biomarkers of Cardiovascular Disease |
Wiro Niessen, Professor, Erasmus MC, University Medical Center Rotterdam |
10/07/2007 14h30 Euler bleu |
|
Abstract (English):
Atherosclerosis, a disease of the vessel wall, is the main cause of morbidity and mortality in the western world. There is increasing evidence that the risk of clinical events, such as heart attacks and stroke, depends more on plaque composition, elastic wall properties, and even biochemical processes that take place in the plaque, than on luminal morphology. State-of-the-art imaging techniques, such as MRI, CT, and ultrasound, provide detailed information not only on the vessel lumen, but also on the vessel wall. Owing to the growing complexity and sheer size of cardiovascular imaging data, in combination with the large increase in the number of studies in clinical practice and biomedical research, there is a strong and increasing interest in robust, automated processing tools to aid in the analysis of these data.
In the last decade there has been considerable progress in image processing techniques to enhance vascular imaging data and to quantify the vessel lumen, and these techniques are currently being introduced into clinical practice. The development of techniques to quantify the atherosclerotic vessel wall is still at an early stage. In this talk we will describe the state of the art in the analysis of the vessel lumen and wall. In addition, we will describe image processing techniques that provide clinicians with diagnostic-quality image data during radiological and cardiological interventions, in order to assist the navigation of instruments in complex, minimally invasive interventions. |
|
Segmentation-Driven Image Fusion using Alpha-Stable Distributions |
Alin Achim, Lecturer, University of Bristol, UK |
09/07/2007 14h30 Coriolis |
|
Abstract (English):
We present a novel region-based image fusion framework based on multiscale image segmentation and statistical feature extraction. A dual-tree complex wavelet transform and a statistical region merging algorithm are used to produce a region map of the source images. The input images are partitioned into meaningful regions containing salient information via symmetric alpha-stable distributions. The region features are then modelled using bivariate alpha-stable distributions, and the statistical measure of similarity between corresponding regions of the source images is calculated as the Kullback-Leibler distance between the estimated stable models. Finally, a segmentation-driven approach is used to fuse the images, region by region, in the complex wavelet domain. A novel decision method is introduced by considering the local statistical properties within the regions, which significantly improves the reliability of the feature selection and fusion processes. Simulation results demonstrate that the bivariate alpha-stable model outperforms the univariate alpha-stable and generalized Gaussian densities by capturing not only the heavy-tailed behaviour of the subband marginal distribution, but also the strong statistical dependencies between wavelet coefficients at different scales. The experiments show that our algorithm achieves better performance in comparison with previously proposed pixel- and region-level fusion approaches in both subjective and objective evaluation tests.
|
|
Spatio-Temporal Tomographic Imaging of Dynamic Objects |
Farzad Kamalabadi, Associate Professor, University of Illinois at Urbana-Champaign, USA |
04/07/2007 14h30 Euler bleu |
|
Abstract (English):
In this talk I will address the reconstruction of a physically evolving unknown from tomographic measurements through a state estimation formulation. A motivation for such a formulation is spatio-temporal tomographic imaging of the solar corona, whereby a time series of white-light and extreme ultraviolet (EUV) images obtained at different solar rotations is used to estimate the global, 3D distribution of density and temperature in the Sun’s corona by computationally solving a tomographic inverse problem. While static tomography captures the large-scale structure of the corona, it does not allow treatment of the Sun’s temporal variations. Dynamic tomography, however, enables time-dependent reconstructions by explicitly modeling the temporal evolution.
The approach presented in this talk is the localized ensemble Kalman filter (LEnKF), a Monte Carlo state estimation procedure that is computationally tractable when the state dimension is large. I will describe the conditions under which the LEnKF is equivalent to the Gaussian particle filter. The performance of the LEnKF is evaluated in numerical examples and is shown to give state estimates almost equal in quality to those of the optimal Kalman filter, but at a 95% reduction in computation. I will discuss the implications for tomographic reconstruction of SOHO and STEREO measurements. |
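A bare-bones sketch of a stochastic (perturbed-observation) ensemble Kalman filter analysis step, without the localization that distinguishes the LEnKF of the talk; the state dimension, observation operator, and noise levels are invented for illustration.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
    n_ens = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / (n_ens - 1)                   # sample forecast covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                    # analysis ensemble

rng = np.random.default_rng(2)
truth = np.array([1.0, -1.0, 0.5])
bias = np.array([2.0, -2.0, 0.0])                 # deliberately wrong forecast
X = (truth + bias)[:, None] + rng.normal(0.0, 1.0, (3, 200))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # only two states observed
R = 0.01 * np.eye(2)
y = H @ truth + rng.multivariate_normal(np.zeros(2), R)
Xa = enkf_update(X, y, H, R, rng)
print(np.abs(Xa.mean(axis=1) - truth))  # observed components pulled near truth
```

Localization would additionally taper the sample covariance C to suppress spurious long-range correlations when the ensemble is small relative to the state dimension.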
|
Rotation-Invariant Matching of Local Features using Dual-Tree Complex Wavelets |
Nick Kingsbury, Reader, University of Cambridge, UK |
05/06/2007 14h30 Euler Bleu |
|
Abstract (English):
We describe a technique for using dual-tree complex wavelets to obtain rich feature descriptors of keypoints in images. The main aim has been to develop a method for retaining the full phase and amplitude information from the complex wavelet coefficients at each scale, while presenting the feature descriptors in a form that allows for arbitrary rotations between the candidate and reference image patches. In addition, we have modified our previously proposed approach so that it is more resilient to errors in keypoint location and scale. Our feature descriptors are potentially very useful for object detection and recognition in images. |
|
Splines, noise, fractals and optimal signal reconstruction |
Michaël Unser, Professor, EPFL, Switzerland |
21/05/2007 14h30 Coriolis |
|
Abstract (English):
We consider the generalized sampling problem with a non-ideal acquisition device. The task is to “optimally” reconstruct a continuously varying input signal from its discrete, noisy measurements in some integer-shift-invariant space.
We propose three formulations of the problem—variational/Tikhonov, minimax, and minimum mean square error estimation—and derive the corresponding solutions for a given reconstruction space. We prove that these solutions are also globally-optimal, provided that the reconstruction space is matched to the regularization operator (deterministic signal) or, alternatively, to the whitening operator of the process (stochastic modeling). Moreover, the three formulations lead to the same generalized smoothing spline reconstruction algorithm, but only if the reconstruction space is chosen optimally.
We then show that fractional splines and fractal processes (fBm) are solutions of the same type of differential equations, except that the context is different: deterministic versus stochastic. We use this link to provide a solid stochastic justification of spline-based reconstruction algorithms.
Finally, we propose a novel formulation of vector-splines based on similar principles, and demonstrate their application to flow field reconstruction from non-uniform, incomplete ultrasound Doppler data.
This is joint work with Yonina Eldar, Thierry Blu and Muthuvel Arigovindan. |
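A tiny discrete analogue of the variational/Tikhonov formulation above: penalize second differences of the reconstruction and solve the resulting normal equations in closed form. The test signal, noise level, and regularization weight are arbitrary illustrative choices, not the talk's continuous-domain setting.

```python
import numpy as np

def smooth_reconstruct(y, lam):
    """Minimize ||y - x||^2 + lam * ||D2 x||^2, with D2 the second-difference
    operator; the closed-form solution is x = (I + lam * D2^T D2)^{-1} y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)          # (n-2, n) second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
y = clean + 0.3 * rng.normal(size=t.size)
x = smooth_reconstruct(y, lam=50.0)
print(np.abs(x - clean).mean(), np.abs(y - clean).mean())  # estimate is closer
```

The talk's deeper point is that the same smoothing-spline-type estimator is also optimal in the minimax and MMSE senses, provided the reconstruction space is matched to the regularization (or whitening) operator.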
|
A Joint Spatial and Spectral SVM’s Classification of Remote Sensing Images |
Mathieu Fauvel, PhD Student, GIPSA-lab/Département Image et Signal, Saint-Martin-d'Hères |
23/04/2007 14h30 Coriolis |
|
Abstract (English):
The classification of remotely sensed images with very high spatial resolution will be discussed.
The presented method deals with the joint use of the spatial and spectral information provided by the remote sensing data. An adaptive neighborhood system definition will be proposed. Based on morphological area filtering, the spatial information associated with each pixel is modelled as the flat zone to which the pixel belongs, while the spectral information is the (possibly) multidimensional pixel vector. Using kernel methods, the spatial and spectral information are jointly used for classification through an SVM formulation. Experiments on hyperspectral and panchromatic images will be presented. Experimental results confirm both the suitability of SVMs for the classification of hyperspectral data and the usefulness of a joint spectral-spatial classification. |
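One way to sketch the joint use of spatial and spectral information with kernels: a convex combination of positive-definite kernels is again positive definite, so a spectral RBF kernel and a spatial RBF kernel can be blended and handed to any standard SVM solver. The feature dimensions and the weight mu below are illustrative assumptions, not the talk's exact construction.

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(spec_A, spec_B, spat_A, spat_B, mu=0.5, gamma=1.0):
    """Convex combination of a spatial and a spectral kernel; the sum of
    positive semi-definite kernels is still a valid kernel for an SVM."""
    return mu * rbf(spat_A, spat_B, gamma) + (1.0 - mu) * rbf(spec_A, spec_B, gamma)

# Random features standing in for per-pixel spectral vectors and spatial
# (flat-zone) summaries.
rng = np.random.default_rng(4)
spec = rng.normal(size=(5, 8))   # 5 pixels, 8 spectral bands
spat = rng.normal(size=(5, 3))   # 5 pixels, 3 spatial features
K = composite_kernel(spec, spec, spat, spat, mu=0.4)
print(K.shape, bool(np.allclose(K, K.T)))  # (5, 5) True: symmetric Gram matrix
```

The resulting Gram matrix can be passed to an SVM implementation that accepts precomputed kernels.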
|
A Multi-Layer MRF Model for Video Object Segmentation |
Zoltan Kato, Associate Professor, University of Szeged |
17/04/2007 16h00 Coriolis |
|
Abstract (English):
A novel video object segmentation method is proposed which aims at combining color and motion information. The model has a multi-layer structure:
each feature has its own layer, called a feature layer, where a classical Markov random field (MRF) image segmentation model is defined using only the corresponding feature. A special layer is assigned to the combined MRF model, called the combined layer, which interacts with each feature layer and provides the segmentation based on the combination of the different features. Unlike previous methods, our approach does not assume that motion boundaries are part of spatial ones. Therefore, a very important property of the proposed method is its ability to detect boundaries that are visible only in the motion feature, as well as those visible only in the color one. The method is validated on synthetic and real video sequences. |
|
Joint Registration of Geometric and Radiometric Deformations in Images |
Yossi Francos, Professor, Electrical and Computer Engineering Department, Ben-Gurion University, Israel |
17/04/2007 14h30 Coriolis |
|
Abstract (English):
We consider the modeling and solution of the problem of registration and recognition of an object, where the observation and template differ both geometrically and radiometrically. Given an observation of one of the known objects, subject to an unknown transformation, our goal is to estimate the deformation that transforms some prechosen representation of this object (template) into the current observation.
Even in the absence of radiometric changes, the direct approach to estimating the transformation is impractical, as it calls for applying each of the physically possible deformations
to the template, in search of the deformed template that matches the observation. We propose a method that employs a set of non-linear operators to replace the resulting high-dimensional search problem by an equivalent linear problem, expressed in terms of the unknown parameters of the transformation model. The proposed solution is unique and is applicable to any continuous coordinate transformation regardless of its magnitude. In the special case where the transformation is affine, the solution is shown to be exact.
In the case where the radiometric deformation is modeled by a memoryless non-linear input/output system applied to the amplitude of the signal, we show that the original high-dimensional non-convex search problem, which needs to be solved in order to register the observation to the template, is replaced by an equivalent problem expressed in terms of a sequence of two linear systems of equations. A solution to this sequence provides a unique and exact solution to the registration problem.
|
|
Object-based image analysis for forest mapping and inventory |
Frieke Van Coillie, Senior Researcher, Ghent University, Belgium |
26/03/2007 14h30 Coriolis |
|
Abstract (English):
The typically long planning horizon of forest management requires detailed forest inventory information, not only to support current operations, but also to provide a record of past activities and to predict the possible long-term outcomes of management decisions. Traditionally, forest inventories are based on field data. With the increased availability of digital imagery of high spatial detail (from airborne and satellite platforms), it is hypothesized that semi-automated and computer-assisted interpretation of such imagery may offer an objective and cost-effective alternative for acquiring forest information.
The use of VHR imagery puts particular demands on image analysis. Conventional methods usually operate on a pixel-by-pixel basis and do not utilize the spatial information present in the image. The fact that pixels are not isolated but knitted into an image full of spatial patterns has been ignored, since it could only be exploited by human interpreters. OBIA (Object-Based Image Analysis) has emerged as an alternative to these methods, and is based on the assumption that semi-automated object-based methods can emulate (or exceed) visual interpretation, making better use of the spatial information implicit within RS images and providing greater integration with vector-based GIS. As OBIA matures, new commercial and research opportunities will exist to tailor object-based solutions for specific fields, disciplines and user needs, for example forest mapping and inventory.
In this talk, three object-based concepts and methods will be discussed addressing three particular objectives:
- forest mapping
- stand delineation
- stand density estimation
All three methods operate on VHR imagery and combine object-based concepts with image analysis techniques like feature selection with genetic algorithms, neural network classification and wavelet texture analysis.
|
|
Rome wasn't built in one day, but one day its 3D model... |
Luc Van Gool, Professor, Katholieke Universiteit Leuven, Belgium and ETH Zurich, Switzerland |
26/02/2007 14h30 Coriolis |
|
Abstract (English):
Large-scale initiatives like Google Earth and the growing popularity of GPS-driven navigation systems have rekindled interest in the accelerated 3D modeling of large-scale environments such as cities. In archaeology, the emphasis has long been on the visualisation of high-profile monuments, but there are increasingly loud calls for more to be done to create an image of the everyday environment of people, i.e. of the more modest, but also vastly more numerous, normal dwellings like houses, workshops, etc.
In this talk, an overview is given of some recent 3D city modeling work at Katholieke Universiteit Leuven and ETH Zurich, which is intended to make massive modeling at reasonable cost possible.
This includes the fast image-based 3D modeling of modern cities, as well as the modeling of ancient cities that have largely disappeared. In the first case, stereo vision is the key ingredient, together with the mixing of object recognition techniques into the 3D modeling process - what we refer to as 'cognitive loops'. The stereo part works on-line, at video rate. The results are then upgraded on the basis of the recognition.
In the case of ancient cities, the remaining, visible information is often restricted to building footprints.
Based on a shape grammar that describes the architectural style of a selected period, buildings can then be reconstructed on a massive scale by computer, albeit with a much larger uncertainty about their precise shapes and textures. As a case in point, a 3D model of Pompeii has been created, consisting of more than 8000 houses. For the moment, the Holy Grail remains the integration of image analysis techniques with grammar-based modeling.
|
|
Concrete Application of Satellite Technologies in Humanitarian Action: The Tuaregs in Northern Mali |
Olivier Longué, Director, Acción Contra el Hambre, Madrid, Spain |
19/01/2007 14h30 Euler bleu |
|
Abstract (English):
• Local context & Justification of the initiative since 1994
• Presentation of the technologies mobilized in the project
• Management of livestock in a region of over 850,000 km²
• Environmental application in the calculation of biomass
• Early warning system in an extremely vulnerable context
• System limitations |
|
Graph Cuts in Vision: Algorithms and application to motion layer segmentation |
Olivier Juan, Postdoc, Vision Lab, University of Western Ontario, Canada |
03/01/2007 14h30 Coriolis |
|
Abstract (English):
I will explain how a max-flow formulation can optimally minimize submodular energy functions of binary variables. Then I will review existing algorithms for graph cuts in vision (Graph Cuts, Banded Graph Cuts, ...) and discuss their strengths and weaknesses. Finally, I will show an interesting application to motion layer segmentation: segmenting both visible and hidden layers. |
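To make the max-flow/min-cut connection concrete, here is a small hypothetical sketch: a binary energy with unary terms and a submodular (Potts) pairwise term is mapped onto a flow network in the standard way, and an Edmonds-Karp max-flow then equals the minimum energy found by brute force. The costs and neighbourhood are invented toy values.

```python
from collections import deque
from itertools import product

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a capacity dict {(u, v): c}."""
    res = dict(cap)
    adj = {}
    for u, v in cap:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        res.setdefault((v, u), 0)
    flow = 0
    while True:
        parent = {s: None}              # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                  # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for u, v in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug

# Energy: unary costs theta[i][label] plus beta per unequal neighbour pair.
theta = [(1, 4), (3, 1), (4, 2), (2, 3)]   # theta[i] = (cost of 0, cost of 1)
pairs = [(0, 1), (1, 2), (2, 3)]           # chain neighbourhood
beta = 2

def energy(x):
    return (sum(theta[i][x[i]] for i in range(4))
            + sum(beta for i, j in pairs if x[i] != x[j]))

# Standard construction for this submodular energy:
# cutting s->i assigns label 1 (cost theta_i(1)), cutting i->t label 0.
cap = {}
for i, (c0, c1) in enumerate(theta):
    cap[('s', i)] = c1
    cap[(i, 't')] = c0
for i, j in pairs:
    cap[(i, j)] = beta
    cap[(j, i)] = beta

best = min(energy(x) for x in product((0, 1), repeat=4))
print(max_flow(cap, 's', 't'), best)  # both equal the minimum energy (9 here)
```

Real vision problems use the same reduction at image scale, with specialized max-flow solvers replacing this textbook Edmonds-Karp.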
|
|