Seminars
The Ariana project seminars take place at
INRIA Sophia Antipolis (map);
the room and the abstracts (in French
and/or English) are posted as soon as possible.
If you wish, you can consult the seminar agendas
of previous years:
2024, 2023, 2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, 2002, 2001, 2000, 1999, and 1998. Past seminars of the Ariana project:
Title | Speaker | Date/Location | Abstract |
Greedy Algorithms for a Sparse Scalet Decomposition |
A. BIJAOUI Astronomer Observatoire de la Côte d'Azur, Nice |
15/12/2008 14h30 INRIA Sophia Antipolis salle Coriolis |
|
Abstract (English):
Many kinds of sparse decompositions have been developed over the last two decades, mainly in order to optimize signal or image compression. In this communication, another insight is given in order to obtain a sparse multiscale decomposition with B-spline scalets. The signal representation is considered as the sum of a sparse decomposition and a baseline. This baseline is such that no significant pattern can be detected from it at each scale, for the given threshold. It is considered as a spurious component which has to be removed. Consequently, the scalet identification is done from the wavelet transform. The coefficient amplitudes are then corrected taking into account the wavelet transform of the scaling functions. Four algorithms were developed. MSMPAT1 corresponds to the MultiScale Matching Pursuit with the à trous algorithm in one dimension. The degree of mutual coherence between the pyrels can be reduced by applying a decimation from one scale to the following one. This leads to a faster algorithm called MSMPPy1, for MultiScale Matching Pursuit with the Pyramidal algorithm in one dimension. 2-D versions of these algorithms were built for image analysis. MSMPAT2 and MSMPPy2 are the natural extensions of the 1-D algorithms to the two-dimensional case. For these four algorithms a signed decomposition (only positive or negative pyrels) can be performed. Experiments on astronomical images yield a gain of about two in sparsity compared to classical DWT thresholding. A fine denoising results, although wavy artifacts are not totally removed.
These algorithms were developed in the framework of the analysis of multiband astronomical images. The scheme of the analysis will be given. In order to obtain a complete pattern alphabet, new algorithms based on the oriented wavelet transform are under development. |
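As a rough illustration of the greedy decomposition idea described above, here is a minimal matching-pursuit sketch in Python; a generic random dictionary stands in for the B-spline scalets, and this is not the MSMPAT/MSMPPy algorithms themselves, only the "pick the most correlated atom, subtract, repeat until below threshold" loop.

# Minimal matching-pursuit sketch: greedily pick the dictionary atom most
# correlated with the residual, subtract its contribution, and stop when the
# residual correlations fall below a threshold. The residual plays the role
# of the "baseline" mentioned in the abstract.
import numpy as np

def matching_pursuit(signal, dictionary, threshold, max_atoms=50):
    """dictionary: (n_atoms, n_samples) array of unit-norm atoms."""
    residual = signal.astype(float)
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(max_atoms):
        correlations = dictionary @ residual          # inner products with all atoms
        k = np.argmax(np.abs(correlations))           # best-matching atom
        if np.abs(correlations[k]) < threshold:       # nothing significant left
            break
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * dictionary[k]
    return coeffs, residual

# Toy usage with a random unit-norm dictionary (placeholder for the scalets).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 3.0 * D[5] - 2.0 * D[40] + 0.05 * rng.standard_normal(256)
c, r = matching_pursuit(x, D, threshold=0.5)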
|
3D parametric estimation of stellar loci for quasi-stellar object identification from photometric data |
Denis Gingras Professor Department of Electrical Engineering and Computer Sciences, Université de Sherbrooke, Quebec |
20/10/2008 14h30 Coriolis |
|
Abstract (English):
Following a brief introduction to the Université de Sherbrooke and some of its research activities in imaging and image processing, I shall present some preliminary results of an ongoing work at the Syrte laboratory of the Observatoire de Paris, focusing on the detection and identification of quasars from panchromatic or multi-band astronomical CCD images. With the astrometric accuracy of astronomical data steadily improving, it is necessary to obtain the best possible reference standards to build the International Celestial Reference System (ICRS) in the visible range. This is nowadays mandatory for the localisation and identification of any astronomical object. Because of their very large distances, quasars constitute ideal quasi-inertial landmarks for the realization of such reference systems; however, a good many of them are obscured by the Milky Way and cannot be directly discriminated from stars in CCD images. Fortunately, quasars distinguish themselves in most cases from other celestial bodies by their spectral signature obtained from spectroscopic observations. However, considering the very high density of celestial objects present in today's astronomical charts, it is not possible to measure the spectral signature of every object detected in CCD images. The approach presented here therefore aims at performing a pre-filtering and identification of potential quasars by using only 3 or 4 spectral bands to build a 3D color-index space (differences of magnitudes) for the celestial objects extracted from CCD astronomical images. In this 3D space, we apply an iterative algorithm for the parametric estimation of the spindly stellar locus. Indeed, stars group together densely and, according to their age and evolutionary characteristics, form a spindly locus in the 3D color-index space. At every estimated locus point, we fit an ellipse that quantifies the distribution of stars in a plane perpendicular to the locus and fixes a distance beyond which celestial objects are considered potential quasars. The objects thus selected are then subjected to a finer spectrometric analysis for validation, before insertion into the ICRS. I shall conclude the talk by presenting some preliminary results from the algorithm and a comparison of performance with other approaches.
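As a simplified sketch of the selection idea, the snippet below (Python, synthetic data) fits a straight principal axis to the stellar cloud in a 3D color-index space and flags objects far from it; the actual method estimates a curved, spindly locus iteratively and fits an ellipse at every locus point, which is not reproduced here.

# Fit a straight line to the "stellar" cloud in 3D color-index space and flag
# objects whose perpendicular distance exceeds a threshold as quasar candidates.
import numpy as np

def quasar_candidates(color_indices, distance_threshold):
    """color_indices: (n_objects, 3) array of magnitude differences."""
    center = color_indices.mean(axis=0)
    X = color_indices - center
    # Principal axis of the point cloud = direction of the straightened locus.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    axis = vt[0]
    along = X @ axis
    perp = X - np.outer(along, axis)            # component perpendicular to the locus
    dist = np.linalg.norm(perp, axis=1)
    return dist > distance_threshold            # True = potential quasar

# Synthetic example: an elongated stellar cloud plus a few off-locus outliers.
rng = np.random.default_rng(1)
stars = np.outer(rng.uniform(-2, 2, 500), [1.0, 0.8, 0.5]) + 0.05 * rng.standard_normal((500, 3))
outliers = rng.uniform(-1.5, 1.5, (10, 3))
flags = quasar_candidates(np.vstack([stars, outliers]), distance_threshold=0.3)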
|
|
What is a pattern?---A theory of grounded computation |
Hiroshi Ishikawa Associate Professor Department of Information and Biological Sciences, Nagoya City University |
20/10/2008 16h00 Coriolis |
|
Abstract (English):
Computer Science traditionally has dealt only with bits and other symbolic entities. Theories are content with the notion that any other information can be encoded into bits; and in practice such data are indeed converted into bit-streams in an ad hoc way. However, the arbitrariness of encoding makes it unclear what such concepts as information and complexity mean in the case of non-symbolic information, because sometimes the information can be hidden in the encoding itself.
In this talk, we describe a uniform representation of general objects (represented a priori as subsets of a space) that captures their regularities with respect to the structure of the space, which is characterized by a set of maps. It can also represent any computation, giving a notion of computation directly in terms of objects in general spaces, without first encoding them into bits. Most importantly, everything represented is grounded, i.e., how the representation relates to the raw data is encoded as a part of the representation. Thus it does not hide any external structure-by-structure information, other than the maps explicitly given.
Using the representation, we can also define a measure of information so that the smaller the possible representation of the data is, the less information it contains. We can think of pattern discovery as a search for ever smaller representations of given raw data. Finally, the measure turns out to be equivalent to Kolmogorov complexity when defined relative to the structure of natural numbers. |
|
Discrete distance-based skeletons for digital shapes in 2D and 3D |
G. BORGEFORS Professor Swedish University of Agricultural Sciences, Uppsala, Sweden |
06/10/2008 14h30 |
|
Abstract (English):
The skeleton of a digital object is a (reversible) representation of the object of lower dimension. A 2D shape is represented by curves, while a 3D object is represented by surfaces and curves, or just curves. This simplified representation can be useful for many image analysis applications.
There are literally hundreds of skeletonization algorithms in the literature, with different properties. Some make strong assumptions about the properties of the objects: they must be "nice" according to some openly or tacitly assumed rules. Some demand that the digital object in the image first be converted to an object in continuous space, described by continuous equations. Some are simple but slow, some are complex and fast. Or slow.
I will present a general skeletonization scheme that makes no demands on the object shape and never leaves digital space. It is based on the key concepts "distance transforms", "centres of maximal discs" and "simple points". As long as you have a suitable distance transform, a way to detect its centres of maximal discs, and a way to identify simple points, the method can be used in any dimension and in any grid. The general method is simple but needs a number of iterations that depends on the thickness of the object. Of course there are complications, at least in higher dimensions, but these can be overcome in various ways. I will also talk about some ways to simplify the skeletons.
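A much-simplified illustration of the first two ingredients, assuming SciPy: a city-block distance transform and candidate centres of maximal discs detected as local maxima of the distance map. The iterative removal of simple points, and everything needed in higher dimensions, is omitted.

# City-block distance transform and (candidate) centres of maximal discs,
# detected as local maxima of the distance map over the 4-neighbourhood.
import numpy as np
from scipy import ndimage as ndi

def distance_and_cmb(binary_object):
    dist = ndi.distance_transform_cdt(binary_object, metric='taxicab')
    cross = ndi.generate_binary_structure(2, 1)          # 4-neighbourhood footprint
    local_max = (dist == ndi.grey_dilation(dist, footprint=cross)) & binary_object
    return dist, local_max

# Toy object: a filled rectangle with a notch.
obj = np.zeros((40, 60), dtype=bool)
obj[5:35, 5:55] = True
obj[15:25, 25:35] = False
dist, centres = distance_and_cmb(obj)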
|
|
Fusion of Multiple Classifiers for Remote Sensing Data |
J. A. BENEDIKTSSON Professor University of Iceland, Iceland |
22/09/2008 14h30 Coriolis |
|
Abstract (English):
In this talk the use of an ensemble of classifiers, or multiple classifiers, for the classification of remote sensing data will be explored. Furthermore, a strategy for classifying multisensor imagery and hyperspectral imagery will be presented. In this strategy each image “source” is individually classified by a support vector machine (SVM). In decision fusion, the outputs of this pre-classification are combined to derive the final class memberships. This fusion is performed by another SVM. The results are compared with well-known parametric and nonparametric classifier methods. The proposed SVM-based fusion approach outperforms all other methods and improves on the results of a single SVM trained on the whole multisensor/hyperspectral data set. Moreover, the results clearly show that the individual image sources provide different information and that a multiclassifier approach generally outperforms single-source classifications.
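A minimal sketch of the decision-fusion idea with scikit-learn, on synthetic placeholder features (the kernels, tuning and data handling of the actual study may differ): one SVM per image source, then a second SVM trained on the stacked per-source outputs.

# One SVM per "source", then a fusion SVM on the stacked decision values.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 200, 100, 3
y_train = rng.integers(0, n_classes, n_train)
y_test = rng.integers(0, n_classes, n_test)
# Two synthetic "sources" (e.g. optical and SAR features).
sources_train = [rng.standard_normal((n_train, 10)) + y_train[:, None],
                 rng.standard_normal((n_train, 6)) + 0.5 * y_train[:, None]]
sources_test = [rng.standard_normal((n_test, 10)) + y_test[:, None],
                rng.standard_normal((n_test, 6)) + 0.5 * y_test[:, None]]

# Step 1: pre-classification of each source by its own SVM.
base_svms = [SVC(kernel='rbf', gamma='scale').fit(X, y_train) for X in sources_train]
def stack(svms, sources):
    return np.hstack([svm.decision_function(X) for svm, X in zip(svms, sources)])

# Step 2: decision fusion by a second SVM on the stacked outputs.
fusion_svm = SVC(kernel='rbf', gamma='scale').fit(stack(base_svms, sources_train), y_train)
accuracy = fusion_svm.score(stack(base_svms, sources_test), y_test)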
|
|
A Monte-Carlo Approach to Dynamic Image Reconstruction |
Farzad Kamalabadi Associate Professor University of Illinois at Urbana-Champaign, USA |
17/09/2008 14h30 Coriolis |
|
Abstract (English):
The problem of estimating the parameters that describe a complex dynamic system from a collection of indirect, often incomplete, and imprecise measurements arises, for example, in atmospheric and space remote sensing, biomedical imaging, and economic forecasting. When the problem is formulated with a state-space model, state estimation methods may be used to systematically infer the system parameters from the data. The Kalman filter applies when the state-space model is linear and the prior information Gaussian, while the particle filter may be used in more complicated nonlinear and non-Gaussian scenarios. However, these standard statistical signal processing methods quickly become computationally prohibitive when the number of parameters and the data volume increase, and are thus inapplicable to large-scale computational imaging applications.
This talk focuses on our sequential Monte-Carlo methods for large-scale state estimation and their application to dynamic image reconstruction of time-varying scenes. We derive the convergence properties of our statistical approach and demonstrate the effectiveness of our methods in a numerical experiment where biased but near-optimal estimates are obtained using only a fraction of the computational effort of standard methods. In addition, we illustrate the use of independent component analysis and manifold learning, techniques used in computer vision and machine learning, for optimally inferring the parameters of the dynamic system in addition to incomplete or unknown parameters of the state-space model. |
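To make the sequential Monte-Carlo recursion concrete, here is a minimal bootstrap particle filter for a toy scalar state-space model; the model and noise levels are illustrative only, and the large-scale methods of the talk go well beyond this.

# Bootstrap particle filter for x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t
# (Gaussian noises): predict, weight by the likelihood, resample.
import numpy as np

def particle_filter(observations, n_particles=500, a=0.9, q=1.0, r=0.5, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)           # samples from the prior
    estimates = []
    for y in observations:
        # Predict: propagate particles through the state equation.
        particles = a * particles + np.sqrt(q) * rng.standard_normal(n_particles)
        # Weight: Gaussian likelihood of the observation for each particle.
        weights = np.exp(-0.5 * (y - particles) ** 2 / r)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))       # posterior-mean estimate
        # Resample: draw particles proportionally to their weights.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Simulate a short trajectory and filter it.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.standard_normal()
    ys.append(x + np.sqrt(0.5) * rng.standard_normal())
x_hat = particle_filter(ys)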
|
Data mining and statistics |
Oleg Seleznjev Professor Univ. of Umea, Sweden |
15/07/2008 14h30 Kahn (1 + 2) |
|
Abstract (English):
Data mining, also known as Knowledge Discovery in Databases, or KDD, is a new research and applications area. Data mining lies at the interface of computer science and statistics and aims at the discovery of useful information from large and complex data sets (or databases) by using semiautomatic tools. Several aspects have initiated this new research branch in computer science. The complexity and large volume of databases in many applied fields (e.g., scientific and retailing data, health care, financial and industrial data, geographical information systems, environmental assessment and planning, searching the WWW, etc.) are the most frequently used arguments. In fact, current technology makes it fairly easy to collect data, but data analysis tends to be slow and expensive. These problems changed qualitatively in the 1990s, which is when data mining emerged as a visible research and development area. Some problems arise that are equally important for both data miners and statisticians. Modern computing hardware and software have not only freed statisticians from the burden of routine calculations, but with such tools non-statisticians have started to believe that they can carry out analyses without real statistical input. On the other hand, there is a lack of applied methods for efficient analysis of large data sets. The present relationship between the data mining and statistics communities resembles confrontation rather than cooperation. Thus, the real question is whether data mining and statistics will be competitors or allies in the data analysis process. We discuss some statistical issues that are relevant to data mining and attempt to identify opportunities where close cooperation (not competition) between KDD and statistics is possible for further progress in data analysis. Several case studies from data mining applications and examples of data mining tools are discussed. Some concrete examples of climate data analysis are considered in more detail, with an emphasis on discovering dependencies. |
|
Application of image segmentation techniques to precision agriculture |
Pierre ROUDIER Researcher Infoterra/Cemagref Toulouse |
07/07/2008 14h30 Kahn (2 + 3) |
|
Abstract (English):
Precision agriculture is a wide set of information-based methods that aim at increasing the technical, economic and environmental efficiency of the farm. One of these methods, within-field management, is based on the existence of within-field variabilities. Those variabilities are taken into account for crop management by dividing the field into different zones that are managed specifically.
The mass production of such management-zone products poses several significant problems. The first difficulties are the identification of within-field zone boundaries (as no prototype can be defined a priori) and of their optimal number. The classical approach, based on classification techniques, shows some limitations, mostly because it does not take the spatial structure of within-field data into account. Secondly, the emergence of these new agricultural management products has underlined the need to characterize the proposed partitions in order to assess whether their in-field application is worthwhile.
This talk presents a segmentation-based methodology for within-field zoning management. Morphological segmentation techniques have been adapted to the precision agriculture context and interfaced with agronomic expert criteria to (i) delineate management zones, and (ii) qualify the performance of the resulting partition. Examples are given on remotely sensed data of continuous (wheat) or discontinuous (grape) cover crops. |
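As one illustration of morphological segmentation on a within-field map (not the authors' pipeline or agronomic criteria), a marker-based watershed on a synthetic, smoothed "yield" raster could look like this with scikit-image:

# Marker-based watershed on a smoothed synthetic yield map: flat areas seed
# the zones, the gradient ridge lines become the zone boundaries.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:100, 0:150]
field = 1.0 * (xx > 70) + 0.1 * rng.standard_normal((100, 150))   # two zones + noise
field = ndi.gaussian_filter(field, sigma=3)

gradient = sobel(field)                                            # high on zone boundaries
markers, _ = ndi.label(gradient < np.percentile(gradient, 20))     # flattest areas as seeds
zones = watershed(gradient, markers)                               # label image of zones
n_zones = zones.max()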
|
Missing Data Recovery by Tight-frame Algorithms with Flexible Wavelet Shrinkage |
Raymond Chan Professor The Chinese Univ. of Hong Kong |
07/07/2008 16h00 Kahn (2 + 3) |
|
Abstract (English):
The recovery of missing data from incomplete data is an essential part of any image processing procedure, whether the final image is used for visual interpretation or for automatic analysis.
In this talk, we first introduce our tight-frame-based iterative algorithm for missing data recovery. By borrowing ideas from anisotropic regularization and diffusion, we can further improve the algorithm to handle edges better. The algorithm falls within the framework of forward-backward splitting methods in convex analysis and its convergence can hence be established. We illustrate its effectiveness in a few main applications in image processing: inpainting, impulse noise removal, super-resolution image reconstruction, and video enhancement. |
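A simplified sketch of the recovery iteration, using an orthogonal wavelet from PyWavelets rather than a tight frame and assuming a known mask of missing pixels; it only shows the "shrink in the transform domain, then re-impose the known data" structure.

# Iterative wavelet-shrinkage inpainting: threshold the detail coefficients,
# reconstruct, then put the known pixels back, and repeat.
import numpy as np
import pywt

def inpaint(image, known_mask, wavelet='db4', level=3, thresh=0.05, n_iter=100):
    estimate = image.astype(float).copy()
    estimate[~known_mask] = image[known_mask].mean()      # crude initial fill
    for _ in range(n_iter):
        coeffs = pywt.wavedec2(estimate, wavelet, level=level)
        coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thresh, mode='soft')
                                      for d in detail) for detail in coeffs[1:]]
        estimate = pywt.waverec2(coeffs, wavelet)[:image.shape[0], :image.shape[1]]
        estimate[known_mask] = image[known_mask]           # re-impose known data
    return estimate

# Toy usage: drop 40% of the pixels of a smooth synthetic image.
rng = np.random.default_rng(0)
img = np.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
mask = rng.random(img.shape) > 0.4
corrupted = img.copy()
corrupted[~mask] = 0.0
restored = inpaint(corrupted, mask)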
|
Comparing Shapes of Curves and Surfaces |
Eric Klassen Professor FSU |
02/07/2008 15h30 Euler Bleu |
|
Abstract (English):
There are many occasions on which it is desirable to compare the shapes of two curves or of two surfaces. For example, the curves might be outlines of images taken from photographs, or backbones of proteins, or cross sections of internal organs or tumors. The surfaces might be human faces or surfaces of brains or other internal organs. To decide how different two of these curves (or surfaces) are, we need a measure of how much bending and/or stretching is required to transform one into the other. This measure should be independent of rigid motions, dilations, or reparametrization. In this talk, we present such a measure for curves in n-dimensional Euclidean space. Our method applies to both open and closed curves. To develop our method, we form a Riemannian manifold of all curves, in which we have divided out by group actions corresponding to rigid motions, dilations, and reparametrization. Using this manifold, we produce geodesics, calculate "average" shapes, and manipulate probability distributions on the set of shapes. We also discuss work in progress on the extension of these ideas from curves to surfaces. |
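For contrast, a deliberately simplified curve comparison (arc-length resampling plus Procrustes alignment over translation, scale and rotation) can be written in a few lines; unlike the elastic geodesic distance discussed in the talk, it does not optimize over reparametrization.

# Resample by arc length, remove translation and scale, find the best
# orthogonal alignment (Procrustes), and return the residual distance.
import numpy as np

def resample(curve, n=100):
    """curve: (m, d) array of points along an open curve."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, curve[:, k]) for k in range(curve.shape[1])])

def shape_distance(curve1, curve2, n=100):
    a, b = resample(curve1, n), resample(curve2, n)
    a -= a.mean(axis=0); b -= b.mean(axis=0)            # remove translation
    a /= np.linalg.norm(a); b /= np.linalg.norm(b)      # remove scale
    u, _, vt = np.linalg.svd(a.T @ b)                   # best alignment (Procrustes)
    return np.linalg.norm(a @ (u @ vt) - b)

# Toy usage: a circle arc versus a noisy, rotated copy of it.
t = np.linspace(0, np.pi, 80)
c1 = np.column_stack([np.cos(t), np.sin(t)])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
c2 = (c1 + 0.01 * np.random.default_rng(0).standard_normal(c1.shape)) @ rot.T
d = shape_distance(c1, c2)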
|
How can a computer learn to see? Machine learning for image categorization and computational pathology |
Joachim BUHMANN Professor ETH Zurich, Switzerland |
16/06/2008 14h30 Kahn (2 + 3) |
|
Abstract (English):
Vision, with its grand challenge of general scene understanding, requires hierarchically structured, modular representations of image content. Models for scene understanding also have to capture the statistical nature of images with their enormous variability and semantic richness. Compositionality as a design principle advocates a representation scheme of image content which detects local parts, like wheels for cars or eyes for faces, and composes these information pieces into combinations of parts in a recursive manner. Graphical models can express both the probabilistic nature of features and their spatial relations. State-of-the-art categorization results, both for still images and for video, are achieved with a significantly more succinct representation than employed by alternative approaches.
Equally complicated detection problems arise in medical imaging where, for example, renal cell carcinoma (RCC) tissue has to be graded on the basis of immunohistochemical staining to estimate the progression of cancer. We propose a completely automated image analysis pipeline to predict the survival of RCC patients based on the analysis of immunohistochemical staining of MIB-1 on tissue microarrays. A random forest classifier detects the cell nuclei of cancerous cells and predicts their staining. The application to a test set of 133 patients clearly demonstrates that our computational pathology analysis matches the prognostic performance of expert pathologists. |
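A minimal sketch of the random-forest step alone, with scikit-learn and synthetic per-nucleus feature vectors standing in for the real ones; the detection on tissue microarray images and the survival analysis are not reproduced.

# Two random forests: one classifies candidate nuclei as cancerous or not,
# the other predicts their staining, both from per-nucleus feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nuclei, n_features = 2000, 20
X = rng.standard_normal((n_nuclei, n_features))            # placeholder shape/texture/colour features
is_cancerous = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic ground truth
is_stained = (X[:, 2] > 0.3).astype(int)

X_tr, X_te, yc_tr, yc_te, ys_tr, ys_te = train_test_split(
    X, is_cancerous, is_stained, test_size=0.3, random_state=0)

nucleus_clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, yc_tr)
staining_clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, ys_tr)
print(nucleus_clf.score(X_te, yc_te), staining_clf.score(X_te, ys_te))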
|
Sequential Monte Carlo for Component Separation in Images |
Ercan KURUOGLU Senior Researcher ISTI-CNR Pisa, Italy |
26/05/2008 14h30 Coriolis |
|
Abstract (English):
Sequential Monte Carlo, or particle filtering, techniques have achieved important success since the 2000s in various signal processing applications involving non-Gaussian, non-stationary signals. In this seminar, we will give a review of the theoretical foundation of sequential Monte Carlo techniques, starting from the Bayesian filtering problem and sequential importance sampling. We will then demonstrate the use of sequential Monte Carlo in the source separation problem and summarise our efforts on extending it to two-dimensional signals, i.e. images.
The study and the development of the methodologies presented in this talk are motivated by the problem of component separation in astrophysical images. The Wilkinson Microwave Anisotropy Probe (WMAP) has recently provided us with its five-year results, and the Planck satellite mission is about to start, raising great anticipation of important results in cosmology. The main motivation of the source separation efforts on these data is the recovery of the Cosmic Microwave Background radiation, which originates from the Big Bang. Once constructed, this map will give us much-anticipated information on the past, present and future of our universe. Until our work, the non-stationarity of the non-Galactic sources and of the antenna noise had been largely neglected in the community. With particle filtering techniques, we provide a very flexible framework which can take care of the non-stationarities and can potentially also model the convolutional and nonlinear effects introduced in the measurement process. |
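For reference, the Bayesian filtering recursion that sequential importance sampling approximates is, for a state-space model with transition density p(x_t | x_{t-1}) and likelihood p(y_t | x_t):

p(x_t | y_{1:t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | y_{1:t-1}) dx_{t-1}    (prediction)
p(x_t | y_{1:t}) ∝ p(y_t | x_t) p(x_t | y_{1:t-1})                         (update)

Particle filters represent these densities by weighted samples when the integrals are intractable, which is the case for the non-Gaussian, non-stationary components considered here.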
|
Directional textures: From orientation estimation to orientation field characterization |
Christian GERMAIN Associate Professor ENITA Bordeaux |
07/04/2008 14h30 Coriolis |
|
Abstract (English):
Directionality has been identified as one of the three fundamental properties influencing texture recognition, along with complexity and periodicity. This talk focuses on directional texture analysis, i.e. on textures that show a spatial arrangement of oriented patterns. The characterization, classification or segmentation of directional textures must take the property of directionality into account. For these purposes, the computation and the characterization of orientation maps are essential tasks. We will therefore present several operators dedicated to local orientation estimation. We will then propose to describe the resulting orientation field with second-order statistics that take into account the specificities of angular data. The application of such approaches to remote sensing images and material microscopy data will conclude the talk. |
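As one classical example of a local orientation estimator (the smoothed structure tensor, not necessarily one of the operators presented in the talk), a short NumPy/SciPy sketch:

# Local orientation from the smoothed structure tensor: the dominant gradient
# direction is 0.5*atan2(2*Jxy, Jxx - Jyy); adding pi/2 gives the stripe direction.
import numpy as np
from scipy import ndimage as ndi

def local_orientation(image, sigma=3.0):
    img = image.astype(float)
    gx = ndi.sobel(img, axis=1)
    gy = ndi.sobel(img, axis=0)
    jxx = ndi.gaussian_filter(gx * gx, sigma)     # smoothed tensor components
    jyy = ndi.gaussian_filter(gy * gy, sigma)
    jxy = ndi.gaussian_filter(gx * gy, sigma)
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2

# Toy directional texture: stripes oriented at 30 degrees.
yy, xx = np.mgrid[0:128, 0:128]
texture = np.sin(0.3 * (np.cos(np.pi / 6) * xx + np.sin(np.pi / 6) * yy))
theta = local_orientation(texture)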
|
Shape-from-Shading: Some Results |
Jean-Denis DUROU Associate Professor IRIT Toulouse |
17/03/2008 14h30 Coriolis |
|
Abstract (English):
The problem of reversing the imaging process, which aims at computing the shape of the imaged scene from one photograph, is referred to as "3D reconstruction".
Many techniques have been designed to solve this problem. Amongst them, the photometric techniques use the relation between the greylevel information and the scene shape. Within the framework of computer vision, the use of this relation in order to compute the scene shape is called "shape-from-shading". In this talk, I will focus on some works relating to different aspects of shape-from-shading. |
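For reference, the classical Lambertian model that most shape-from-shading formulations start from relates the greylevel to the unit surface normal n and the unit light direction s by I(x,y) = ρ n(x,y)·s, with albedo ρ. For ρ = 1 and frontal lighting s = (0,0,1), writing the surface as z = u(x,y) gives the eikonal form |∇u(x,y)| = sqrt(1/I(x,y)^2 - 1), which many shape-from-shading schemes solve numerically.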
|
Quantification in 3-D fluorescence microscopy |
Alain DIETERLEN Professor Lab. MIPS, Université de Haute-Alsace, Mulhouse |
25/02/2008 14h30 Coriolis |
|
Abstract (English):
3-D optical fluorescence microscopy is now an efficient tool for the volumetric investigation of living biological samples. Developments in instrumentation have made it possible to beat the conventional Abbe limit. In any case, however, the recorded image can be described by the convolution of the original object with the Point Spread Function (PSF) of the acquisition system. Due to the finite resolution of the instrument, the original object is recorded with distortions and blurring, and contaminated by noise. As a result, relevant biological information cannot be extracted directly from raw data stacks.
If the goal is 3-D quantitative analysis, then characterizing the system is mandatory in order to assess the optimal performance of the instrument and to ensure the reproducibility of data acquisition. The PSF represents the properties of the image acquisition system; based on vectorial theories, we have developed a more accurate model of it. In addition, we have proposed the use of statistical tools and Zernike moments to describe a 3-D system PSF and to quantify the variation of this PSF as a function of the optical parameters. These first steps toward standardization help define an acquisition protocol that optimizes the exploitation of the microscope according to the biological sample under study.
Before morphological information is extracted and/or intensities are quantified, data restoration is mandatory. Reduction of out-of-focus light is an imperative step in 3-D microscopy; it is carried out computationally by a deconvolution process. But other phenomena occur during acquisition, such as fluorescence photodegradation, named "bleaching", which alters the information needed for restoration. We have therefore developed several tools to pre-process the data before applying deconvolution algorithms. For example, under certain assumptions, the decay of intensities can be estimated and used for a partial compensation.
A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using such software is the user's choice of the "best" regularization parameters. We have pointed out that automating the choice of the regularization level facilitates the use of the software; it also greatly improves the reliability of the measurements. Furthermore, pre-filtering the images increases the quality and the repeatability of quantitative measurements by stabilizing the deconvolution process. In the same way, pre-filtering the PSF stabilizes the deconvolution process. We have shown that Zernike polynomials can be used to reconstruct an experimental PSF, preserving the system characteristics and removing the noise contained in the PSF.
Currently, the variation of the refractive indices induced by the specimen under observation is not taken into account by the restoration process. This limitation led us to consider information coming from a cartography of the specimen's indices. To reach this objective, a tomographic diffractive microscope was built in our laboratory, which permits imaging of non-labelled transparent or semi-transparent samples. Based on a combination of microholography with tomographic illumination, our set-up creates 3-D images of the refractive index distribution within the sample. First combined 3-D microscope images from diffraction tomography and fluorescence show new, promising results. |
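As a baseline illustration only (the regularized methods, PSF modelling and pre-filtering of the talk are not reproduced), the classical Richardson-Lucy deconvolution iteration can be sketched in Python with a synthetic Gaussian PSF, here in 2-D for brevity:

# Richardson-Lucy iteration: multiply the estimate by the back-projected ratio
# between the observed data and the re-blurred estimate.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Synthetic example: blur a bright spot with a Gaussian PSF and restore it.
x = np.zeros((64, 64)); x[32, 32] = 1.0
g = np.exp(-0.5 * (np.arange(-7, 8)[:, None] ** 2 + np.arange(-7, 8)[None, :] ** 2) / 4.0)
psf = g / g.sum()
observed = fftconvolve(x, psf, mode='same')
restored = richardson_lucy(observed, psf)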
|
Transform coding of images: dealing with the problem of contours |
Gianni POGGI Professor University Federico II of Naples, Italy |
28/01/2008 14h30 Coriolis |
|
Abstract (English):
The last two decades have seen enormous advances in the field of image compression, and transforms have played a central role in this success story. A suitable transform concentrates the information content of the image in a small number of coefficients, so that one can use the available bits to encode just those coefficients.
Current transforms, however, deal poorly with object contours: to represent a contour faithfully, one needs a large number of transform coefficients. On the other hand, contours must be encoded faithfully since they are very important both for human observers and for automatic processing. This talk will briefly describe two recent approaches to this problem: object-based coding and directional transforms. |
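A minimal illustration of the "keep only the largest transform coefficients" idea, using a whole-image 2-D DCT on a synthetic image with a step edge; real codecs, and the object-based and directional approaches of the talk, are considerably more sophisticated. The reconstruction error concentrates as ringing near the contour, which is exactly the problem discussed above.

# Keep only the largest-magnitude DCT coefficients and reconstruct.
import numpy as np
from scipy.fft import dctn, idctn

def dct_approximation(image, keep_fraction=0.05):
    coeffs = dctn(image, norm='ortho')
    n_keep = max(1, int(keep_fraction * coeffs.size))
    threshold = np.sort(np.abs(coeffs), axis=None)[-n_keep]
    coeffs_sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    return idctn(coeffs_sparse, norm='ortho')

# Synthetic image: smooth background plus a sharp vertical contour.
yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(xx / 20.0) + (xx > 64).astype(float)
approx = dct_approximation(img, keep_fraction=0.05)
err = np.sqrt(np.mean((img - approx) ** 2))   # ringing concentrates near the edge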
|
|