Publications of type 'inproceedings'
Result of the query in the list of publications :
245 Conference articles
11 - A fast multiple birth and cut algorithm using belief propagation. A. Gamal Eldin and X. Descombes and Charpiat G. and J. Zerubia. In Proc. IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, September 2011. Keywords : Multiple Birth and Cut, multiple object extraction, Graph Cut, Belief Propagation.
@INPROCEEDINGS{MBC_ICIP11,
  author    = {Gamal Eldin, A. and Descombes, X. and Charpiat, G. and Zerubia, J.},
  title     = {A fast multiple birth and cut algorithm using belief propagation},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. IEEE International Conference on Image Processing (ICIP)},
  address   = {Brussels, Belgium},
  url       = {http://hal.inria.fr/inria-00592446/fr/},
  keyword   = {Multiple Birth and Cut, multiple object extraction, Graph Cut, Belief Propagation}
}
Abstract :
In this paper, we present a faster version of the newly proposed Multiple Birth and Cut (MBC) algorithm. MBC is an optimization method applied to the energy minimization of an object-based model defined by a marked point process. We show that, by proposing good candidates in the birth step of this algorithm, the speed of convergence is increased. The algorithm starts by generating a dense configuration in a special organization; the best candidates are then selected using the belief propagation algorithm. Next, this candidate configuration is combined with the current configuration using binary graph cuts, as presented in the original version of the MBC algorithm. We tested the performance of our algorithm on the particular problem of counting flamingos in a colony and show that it is much faster with the modified birth step.
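A minimal sketch of the kind of model involved, with generic data and pairwise interaction terms rather than the exact terms of the paper: a marked point process scores a configuration X of objects with an energy

U(X) = \sum_{x_i \in X} D(x_i) + \gamma \sum_{x_i \sim x_j} R(x_i, x_j)

where D(x_i) measures how well object x_i fits the image and R penalizes overlapping pairs. In the cut step, the current configuration and the newborn candidates are merged by solving a keep-or-kill binary labeling over all their objects with a graph cut; the contribution described above changes only how the candidates are proposed (dense generation followed by belief-propagation selection).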
12 - Formulation contrainte pour la déconvolution de bruit de Poisson. M. Carlavan and L. Blanc-Féraud. In Proc. GRETSI Symposium on Signal and Image Processing, Bordeaux, France, September 2011. Keywords : 3D confocal microscopy, constrained convex optimization, discrepancy principle, Poisson noise.
@INPROCEEDINGS{CarlavanGRETSI11,
  author    = {Carlavan, M. and Blanc-Féraud, L.},
  title     = {Formulation contrainte pour la déconvolution de bruit de Poisson},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. GRETSI Symposium on Signal and Image Processing},
  address   = {Bordeaux, France},
  url       = {http://hal.inria.fr/inria-00602015/fr/},
  keyword   = {3D confocal microscopy, constrained convex optimization, discrepancy principle, Poisson noise}
}
Résumé :
We consider the problem of restoring an image that is blurred and corrupted by Poisson noise. Many works have proposed to treat this problem as the minimization of a convex energy composed of a data-fidelity term and a regularization term chosen according to the prior available on the image to be restored. A recurring issue in this type of approach is the choice of the regularization parameter, which controls the trade-off between data fidelity and regularization. One approach is to choose this parameter by running several minimizations for several values of the parameter and keeping only the one that yields a restored image satisfying a given criterion (whether qualitative or quantitative). This technique is obviously very costly when the data are high-dimensional, as is the case in 3D microscopy for example. We propose here to formulate the restoration of an image blurred and corrupted by Poisson noise as a problem constrained on the antilog of the Poisson likelihood, and we propose an estimate of the bound based on the work of Bertero et al. on the discrepancy principle for estimating the regularization parameter in the presence of Poisson noise. We show results on synthetic and real images and compare with the unconstrained formulation that uses a Gaussian approximation of the Poisson noise to estimate the regularization parameter.
Abstract :
We focus here on the restoration of blurred and Poisson noisy images. Several methods solve this problem by minimizing a convex cost function composed of a data term and a regularizing term chosen from the prior that one has on the image. One of the recurrent problems of this approach is how to choose the regularizing parameter which controls the weight of the regularization term relative to the data term. One method consists in solving the minimization problem for several values of this parameter and keeping the value which gives an image satisfying a quality criterion (either qualitative or quantitative). This technique is obviously time consuming when one deals with high-dimensional data such as in 3D microscopy imaging. We propose to formulate the blurred and Poisson noisy image restoration problem as a constrained problem on the antilog of the Poisson likelihood and propose an estimation of the bound from the works of Bertero et al. on the discrepancy principle for the estimation of the regularizing parameter for Poisson noise. We show results on synthetic and real data and we compare these results to the ones obtained with the unconstrained formulation using the Gaussian approximation of the Poisson noise for the estimation of the regularizing parameter.
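A minimal sketch of such a constrained formulation, assuming a generic convex regularizer phi (for instance total variation), H the blurring operator and y the observed image:

\min_{x \ge 0} \ \varphi(x) \quad \text{s.t.} \quad D_{KL}(y, Hx) \le \tau, \qquad D_{KL}(y, Hx) = \sum_i \Big[ y_i \log\frac{y_i}{(Hx)_i} - y_i + (Hx)_i \Big],

where D_{KL} is the generalized Kullback-Leibler divergence, i.e. the antilog of the Poisson likelihood up to constants. Following the discrepancy principle of Bertero et al., the bound tau is set from the expected value of the divergence at the true image, roughly N/2 for an image of N pixels.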
13 - SAR image classification with non-stationary multinomial logistic mixture of amplitude and texture densities. K. Kayabol and A. Voisin and J. Zerubia. In Proc. IEEE International Conference on Image Processing (ICIP), pages 173-176, Brussels, Belgium, September 2011. Keywords : High resolution SAR images, Classification, Texture, Multinomial logistic, Classification EM algorithm.
@INPROCEEDINGS{inria-00592252,
  author    = {Kayabol, K. and Voisin, A. and Zerubia, J.},
  title     = {SAR image classification with non-stationary multinomial logistic mixture of amplitude and texture densities},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. IEEE International Conference on Image Processing (ICIP)},
  pages     = {173-176},
  address   = {Brussels, Belgium},
  url       = {http://hal.inria.fr/inria-00592252/en/},
  keyword   = {High resolution SAR images, Classification, Texture, Multinomial logistic, Classification EM algorithm}
}
Abstract :
We combine both amplitude and texture statistics of Synthetic Aperture Radar (SAR) images using a Products of Experts (PoE) approach for classification purposes. We use the Nakagami density to model the class amplitudes. To model the textures of the classes, we exploit a non-Gaussian Markov Random Field (MRF) texture model with t-distributed regression error. A non-stationary Multinomial Logistic (MnL) latent class label model is used as a mixture density to obtain spatially smooth class segments. We use the Classification Expectation-Maximization (CEM) algorithm to estimate the class parameters and classify the pixels. We obtained classification results for water, land and urban areas, in both supervised and semi-supervised cases, on TerraSAR-X data.
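For reference, a hedged sketch of the amplitude expert: the Nakagami density used for the amplitude s of class k is

p(s \mid \mu_k, \Omega_k) = \frac{2\,\mu_k^{\mu_k}}{\Gamma(\mu_k)\,\Omega_k^{\mu_k}}\; s^{2\mu_k - 1} \exp\!\Big(-\frac{\mu_k}{\Omega_k}\, s^2\Big), \qquad s \ge 0,

and the Products of Experts idea combines the amplitude and texture experts multiplicatively, p(\text{pixel} \mid k) \propto p_A(s \mid k)\, p_T(t \mid k), before the multinomial logistic label prior and the CEM iterations described above.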
14 - Classification bayésienne supervisée d’images RSO de zones urbaines à très haute résolution. A. Voisin and V. Krylov and J. Zerubia. In Proc. GRETSI Symposium on Signal and Image Processing, Bordeaux, September 2011. Keywords : SAR Images, Classification, Urban areas, Markov Fields, Hierarchical models.
@INPROCEEDINGS{VoisinGretsi2011,
  author    = {Voisin, A. and Krylov, V. and Zerubia, J.},
  title     = {Classification bayésienne supervisée d’images RSO de zones urbaines à très haute résolution},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. GRETSI Symposium on Signal and Image Processing},
  address   = {Bordeaux},
  url       = {http://hal.inria.fr/inria-00623003/fr/},
  keyword   = {SAR Images, Classification, Urban areas, Markov Fields, Hierarchical models}
}
Résumé :
This paper presents a supervised Bayesian classification model for very high resolution single-polarization Synthetic Aperture Radar (SAR) images containing urban areas, which are particularly affected by speckle noise. The model combines a statistical representation of SAR images via finite mixture models and copulas with a contextual model based on hierarchical Markov random fields.
Abstract :
This paper deals with the Bayesian classification of single-polarized very high resolution synthetic aperture radar (SAR) images that depict urban areas. The difficulty of such a classification lies in the significant effects of speckle noise. The model considered here takes into account both statistical modeling of images via finite mixture models and copulas, and contextual modeling thanks to hierarchical Markov random fields.
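A minimal sketch of the copula step, assuming two features per pixel (for instance an amplitude statistic x_1 and a texture statistic x_2) with marginal densities f_1, f_2 and cumulative distribution functions F_1, F_2: by Sklar's theorem the joint class-conditional density can be written

f(x_1, x_2) = c\big(F_1(x_1), F_2(x_2)\big)\; f_1(x_1)\, f_2(x_2),

so the finite mixtures fitted to each marginal are combined into a joint likelihood through a chosen copula density c, which is then plugged into the hierarchical Markov random field for contextual classification.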
15 - Restauration d'image dégradée par un flou spatialement variant. S. Ben Hadj and L. Blanc-Féraud. In Proc. GRETSI Symposium on Signal and Image Processing, Bordeaux, France, September 2011.
@INPROCEEDINGS{SaimaGretsi11,
  author    = {Ben Hadj, S. and Blanc-Féraud, L.},
  title     = {Restauration d'image dégradée par un flou spatialement variant},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. GRETSI Symposium on Signal and Image Processing},
  address   = {Bordeaux, France},
  url       = {http://hal.inria.fr/inria-00625519/fr/}
}
Résumé :
Most available image restoration techniques assume that the blur is spatially invariant. Nevertheless, various physical phenomena related to the properties of the optics mean that the degradation may differ from one region of the image to another. In this work, we consider a PSF model that is invariant within each zone, with smooth transitions between zones, in order to account for the variation of the blur across the image. For this model, we develop an adapted deconvolution method by minimizing a criterion with total variation regularization. We rely on a fast minimization method based on domain decomposition, recently developed by Fornasier et al. (2009). We thus obtain an algorithm in which the minimization of the criterion is performed in parallel over the different zones of the image, while taking into account the estimates in the zones neighboring the considered sub-images, so that the final solution is the minimum of the criterion with a spatially varying blur.
Abstract :
In most of the existing image restoration techniques, the blur is assumed to be spatially invariant. However, various physical phenomena related to the properties of the optics mean that the degradation may change across different areas of the image. In this work, we consider a piecewise-invariant PSF model with smooth transitions between areas in order to take into account the blur variation in the image. For this model, we develop a suitable deconvolution method by minimizing a criterion with a total variation regularization. We rely on a fast minimization method based on domain decomposition that was recently developed by Fornasier et al. (2009). We thus obtain an algorithm in which the criterion minimization is performed in parallel on the different areas of the image, taking into account the estimates in the areas neighboring the considered sub-images, so that the final solution is the minimum of the spatially varying blur criterion.
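To make the piecewise-invariant blur model concrete, here is a minimal NumPy/SciPy sketch (not the authors' code) of a forward operator built as a weighted sum of zone-wise convolutions; the weight maps are assumed smooth and to sum to one at every pixel, and the per-zone PSFs are assumed given:

import numpy as np
from scipy.signal import fftconvolve

def spatially_varying_blur(image, psfs, weights):
    # image   : 2D array
    # psfs    : list of 2D PSF kernels, one per zone (assumed normalized)
    # weights : list of 2D maps, same shape as image, smooth and summing
    #           to one at every pixel (partition of unity over the zones)
    blurred = np.zeros(image.shape, dtype=float)
    for psf, w in zip(psfs, weights):
        # Each zone contributes a stationary convolution, modulated by its
        # smooth weight map so the overall blur varies gradually in space.
        blurred += w * fftconvolve(image, psf, mode="same")
    return blurred

Whether the weights are applied after the convolution, as above, or to the image before it, changes the adjoint operator needed in the minimization; the paper's exact convention may differ.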
16 - Extraction et caractérisation de régions saines et pathologiques à partir de micro-tomographie RX du système vasculaire cérébral. X. Descombes and A. Gamal Eldin and F. Plouraboue and C. Fonta and S. Serduc and G. Le Duc and T. Weitkamp. In Proc. GRETSI Symposium on Signal and Image Processing, Bordeaux, France, September 2011.
@INPROCEEDINGS{XavierGRETSI11,
  author    = {Descombes, X. and Gamal Eldin, A. and Plouraboue, F. and Fonta, C. and Serduc, S. and Le Duc, G. and Weitkamp, T.},
  title     = {Extraction et caractérisation de régions saines et pathologiques à partir de micro-tomographie RX du système vasculaire cérébral},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. GRETSI Symposium on Signal and Image Processing},
  address   = {Bordeaux, France},
  url       = {http://hal.inria.fr/inria-00625525/fr/}
}
Abstract :
In this paper, we consider X-ray micro-tomography images representing the brain vascular network. We define the local vascular territories as the regions obtained by applying a watershed algorithm to the distance map. The resulting graph is then regularized by a Markov random field approach. The optimization is performed using a graph cut algorithm. We show that the resulting segmentation exhibits three classes corresponding to normal tissue, tumour and an intermediate region.
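A minimal sketch, using SciPy and scikit-image rather than the authors' implementation, of how local vascular territories can be obtained from a binary vessel mask with a distance map and a watershed (the Markov random field regularization and the graph cut optimization are not included):

from scipy import ndimage as ndi
from skimage.segmentation import watershed

def vascular_territories(vessel_mask):
    # vessel_mask : boolean 3D array, True on segmented vessel voxels.
    # Connected vessel segments seed the watershed.
    markers, _ = ndi.label(vessel_mask)
    # Distance of every voxel to the nearest vessel voxel.
    distance = ndi.distance_transform_edt(~vessel_mask)
    # Flooding the distance map from the markers assigns each voxel to
    # the catchment basin of its closest vessel segment, i.e. to a
    # local vascular territory.
    return watershed(distance, markers=markers)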
17 - Reconstruction 3D du bâti à partir d'une seule image par naissances et morts multiples. J.D. Durou and X. Descombes and P. Lukashevish and A. Kraushonak. In Proc. GRETSI Symposium on Signal and Image Processing, Bordeaux, France, September 2011.
@INPROCEEDINGS{DurouGretsi11,
  author    = {Durou, J.D. and Descombes, X. and Lukashevish, P. and Kraushonak, A.},
  title     = {Reconstruction 3D du bâti à partir d'une seule image par naissances et morts multiples},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. GRETSI Symposium on Signal and Image Processing},
  address   = {Bordeaux, France},
  url       = {http://hal.inria.fr/inria-00625527/fr/}
}
Résumé :
In this paper, we depart from the classical approach that treats 3D reconstruction as an inverse problem and solves it by matching two images of a stereoscopic pair. Instead, we show that it is simpler to solve the direct problem. To do so, we randomly propose building configurations and keep only the most relevant ones using a multiple birth-and-death algorithm. We show in particular that this approach does not require a prohibitive computation time, thanks to the computing power of OpenGL, which relies on the graphics card. The first results obtained show the relevance of the adopted approach. In particular, it makes it possible to resolve ambiguities for which inverting the problem would be almost impossible.
Abstract :
In this paper, contrary to the classical approach that addresses the 3D reconstruction problem as an inverse problem and solves it by matching two images from a stereoscopic pair, we show that we can solve the direct problem in a simpler way. To do so, we randomly propose configurations of buildings while keeping only the most relevant ones, using a multiple birth-and-death algorithm. Notably, we show that this approach does not imply a prohibitive computation time, thanks to OpenGL, which exploits the graphics card. The first results show that the proposed approach is relevant. In particular, it allows solving ambiguities for which inverting the problem is almost impossible.
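A hedged sketch of the multiple birth-and-death dynamics, in its standard form rather than the exact variant of the paper: at each iteration new building hypotheses are born with an intensity controlled by a step delta, and every object x of the current configuration X is then killed independently with probability

p_{\text{death}}(x) = \frac{\delta\, a(x)}{1 + \delta\, a(x)}, \qquad a(x) = \exp\!\Big(\frac{U(X) - U(X \setminus \{x\})}{T}\Big),

so that objects whose removal lowers the energy U are the most likely to die; delta and the temperature T decrease over the iterations (simulated annealing). Rendering each candidate configuration to evaluate U against the image is the part that OpenGL accelerates on the graphics card.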
18 - Generating compact meshes under planar constraints: an automatic approach for modeling buildings lidar. Y. Verdié and F. Lafarge and J. Zerubia. In Proc. IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, September 2011. Keywords : 3D-Modeling, shape analysis, Mesh processing.
@INPROCEEDINGS{VerdieICIP11,
  author    = {Verdié, Y. and Lafarge, F. and Zerubia, J.},
  title     = {Generating compact meshes under planar constraints: an automatic approach for modeling buildings lidar},
  year      = {2011},
  month     = {September},
  booktitle = {Proc. IEEE International Conference on Image Processing (ICIP)},
  address   = {Brussels, Belgium},
  url       = {http://hal.inria.fr/inria-00605623/fr/},
  keyword   = {3D-Modeling, shape analysis, Mesh processing}
}
Abstract :
We present an automatic approach for modeling buildings from aerial LiDAR data. The method produces accurate, watertight and compact meshes under planar constraints which are especially designed for urban scenes. The LiDAR point cloud is classified through a non-convex energy minimization problem in order to separate the points labeled as building. Roof structures are then extracted from this point subset and used to control the meshing procedure. Experiments highlight the potential of our method in terms of minimal rendering, accuracy and compactness.
19 - Morphological road segmentation in urban areas from high resolution satellite images. R. Gaetano and J. Zerubia and G. Scarpa and G. Poggi. In International Conference on Digital Signal Processing, Corfu, Greece, July 2011. Keywords : Segmentation, Classification, skeletonization, pattern recognition, shape analysis.
@INPROCEEDINGS{GaetanoDSP,
  author    = {Gaetano, R. and Zerubia, J. and Scarpa, G. and Poggi, G.},
  title     = {Morphological road segmentation in urban areas from high resolution satellite images},
  year      = {2011},
  month     = {July},
  booktitle = {International Conference on Digital Signal Processing},
  address   = {Corfu, Greece},
  url       = {http://hal.inria.fr/inria-00618222/fr/},
  keyword   = {Segmentation, Classification, skeletonization, pattern recognition, shape analysis}
}
Abstract :
High resolution satellite images provided by the latest generation of sensors have significantly increased the potential of almost all the image information mining (IIM) applications related to earth observation. This is especially true for the extraction of road information, a task of primary interest for many remote sensing applications, whose scope is more and more extended to complex urban scenarios thanks to the availability of highly detailed images. This context is particularly challenging due to such factors as the variability of road visual appearance and the occlusions from entities like trees, cars and shadows. On the other hand, the peculiar geometry and morphology of man-made structures, particularly relevant in urban areas, is enhanced in high resolution images, making this kind of information especially useful for road detection.

In this work, we provide new insight on the use of morphological image analysis for road extraction in complex urban scenarios, and propose a technique for road segmentation that relies only on this domain. The key point of the technique is the use of skeletons as powerful descriptors for road objects: the proposed method is based on an ad-hoc skeletonization procedure that enhances the linear structure of road segments, and extracts road objects by first detecting their skeletons and then associating each of them with a region of the image. Experimental results are presented on two different high resolution satellite images of urban areas.
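As an illustration only, and much simpler than the ad-hoc procedure described above, the skeleton-as-descriptor idea can be prototyped with scikit-image, assuming a binary map of road-like pixels is already available:

from scipy import ndimage as ndi
from skimage.morphology import skeletonize, remove_small_objects
from skimage.segmentation import watershed

def road_regions(road_candidates, min_skeleton_size=50):
    # road_candidates : boolean map of road-like pixels (assumed given).
    # Thin the candidate mask to a one-pixel-wide skeleton, keeping the
    # elongated, linear structure typical of road segments.
    skeleton = skeletonize(road_candidates)
    # Drop short skeleton fragments that are unlikely to be roads.
    skeleton = remove_small_objects(skeleton, min_size=min_skeleton_size)
    # Associate each remaining skeleton with an image region by growing
    # it back inside the candidate mask (watershed on the distance map,
    # seeded by the labeled skeletons).
    markers, _ = ndi.label(skeleton)
    distance = ndi.distance_transform_edt(road_candidates)
    return watershed(-distance, markers=markers, mask=road_candidates)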
20 - Regularizing parameter estimation for Poisson noisy image restoration. M. Carlavan and L. Blanc-Féraud. In International ICST Workshop on New Computational Methods for Inverse Problems, Paris, France, May 2011. Keywords : Parameter estimation, discrepancy principle, Poisson noise.
@INPROCEEDINGS{NCMIP11,
  author    = {Carlavan, M. and Blanc-Féraud, L.},
  title     = {Regularizing parameter estimation for Poisson noisy image restoration},
  year      = {2011},
  month     = {May},
  booktitle = {International ICST Workshop on New Computational Methods for Inverse Problems},
  address   = {Paris, France},
  url       = {http://hal.inria.fr/inria-00590906/fr/},
  keyword   = {Parameter estimation, discrepancy principle, Poisson noise}
}
Abstract :
Deblurring images corrupted by Poisson noise is a challenging process which has attracted much research in many applications such as astronomical or biological imaging. This problem, among others, is an ill-posed problem which can be regularized by adding knowledge on the solution. Several methods have therefore promoted an explicit prior on the image, coming along with a regularizing parameter to moderate the weight of this prior. Unfortunately, in the domain of Poisson deconvolution, only a few methods have been proposed to select this regularizing parameter, which is most of the time set manually such that it gives the best visual results. In this paper, we focus on the use of an l1-norm prior and present two methods to select the regularizing parameter. We show some comparisons on synthetic data using classical image fidelity measures.
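A hedged sketch of what a discrepancy-based selection rule of this kind computes, assuming an l1 prior \|Wx\|_1 in some transform W (wavelets or gradients, for instance): for each lambda one solves

x_\lambda = \arg\min_{x \ge 0}\; D_{KL}(y, Hx) + \lambda \|Wx\|_1,

and lambda is adjusted, for example by bisection, until the data discrepancy D_{KL}(y, Hx_\lambda) reaches its expected value under the Poisson model, roughly N/2 for an N-pixel image.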