Main.Works History


March 01, 2019, at 03:21 PM by 194.254.113.83 -
Added lines 20-21:

||%width=700% [[https://hal.archives-ouvertes.fr/hal-01834227/document|http://www-sop.inria.fr/members/Alexis.Joly/loss.png]]  ||
March 01, 2019, at 03:13 PM by 194.254.113.83 -
Changed lines 29-30 from:
[[https://hal.archives-ouvertes.fr/hal-01629149/document|[+'''[Trans. on Multimedia 2017]'''+]]]\\
to:
[[https://hal.archives-ouvertes.fr/hal-01629149/document|[+'''[Author version]'''+]]]
[[https://ieeexplore.ieee.org/document/7819540|[+'''[Transactions on Multimedia]'''+]]]\\
March 01, 2019, at 03:11 PM by 194.254.113.83 -
Changed lines 4-5 from:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]] 
[[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]\\
to:
[[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]\\
Changed lines 22-23 from:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Author version]'''+]]]\\
to:
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Author version]'''+]]]
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]\\
March 01, 2019, at 03:10 PM by 194.254.113.83 -
Changed lines 4-5 from:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]] [[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]\\
to:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]
[[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]\\
March 01, 2019, at 03:10 PM by 194.254.113.83 -
Changed lines 4-5 from:
[[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]
to:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]] [[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]\\
Changed lines 22-23 from:
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Author version]'''+]]]
to:
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Author version]'''+]]]\\
March 01, 2019, at 03:09 PM by 194.254.113.83 -
Added lines 4-5:
[[https://hal.archives-ouvertes.fr/hal-01834227/document|[+'''[Author version]'''+]]]
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]
Changed lines 22-24 from:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer version]'''+]]]\\

[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Hal version]'''+]]]
to:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer book]'''+]]]
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Author version]'''+]]]
March 01, 2019, at 03:08 PM by 194.254.113.83 -
Changed lines 5-11 from:
purposes. Given a set of species occurrences, the aim is to infer the species' spatial distribution over a given
territory. Because of the limited number of occurrences of specimens, this is usually achieved
through environmental niche modeling approaches, i.e. by predicting the distribution in
geographic space on the basis of a mathematical representation of the species' known distribution in
environmental space (the realized ecological niche). The environment is in most cases represented
by climate data (such as temperature and precipitation), but other variables such as soil type or
land cover can also be used. In this paper, we propose a deep learning approach to the problem in
to:
purposes. The environment is in most cases represented by climate data (such as temperature and precipitation), but other variables such as soil type or land cover can also be used. In this paper, we propose a deep learning approach to the problem in
March 01, 2019, at 03:05 PM by 194.254.113.83 -
Added lines 3-24:
!! A deep learning approach to Species Distribution Modelling
Species distribution models (SDM) are widely used for ecological research and conservation purposes. Given a set of species occurrences, the aim is to infer the species' spatial distribution over a given territory. Because of the limited number of occurrences of specimens, this is usually achieved through environmental niche modeling approaches, i.e. by predicting the distribution in geographic space on the basis of a mathematical representation of the species' known distribution in environmental space (the realized ecological niche). The environment is in most cases represented by climate data (such as temperature and precipitation), but other variables such as soil type or land cover can also be used. In this paper, we propose a deep learning approach to the problem in order to improve predictive effectiveness. Non-linear prediction models have been of interest for SDM for more than a decade, but our study is the first to bring empirical evidence that deep, convolutional and multilabel models may help to overcome the limitations of SDM. Indeed, the main challenge is that the realized ecological niche is often very different from the theoretical fundamental niche, due to the history of environmental perturbations, species propagation constraints and biotic interactions. Thus, the realized abundance in the environmental feature space can have a very irregular shape that is difficult to capture with classical models. Deep neural networks, on the other hand, have been shown to be able to learn complex non-linear transformations in a wide variety of domains. Moreover, spatial patterns in environmental variables often contain useful information about species distributions but are usually not considered in classical models. Our study shows empirically how convolutional neural networks efficiently exploit this information and improve prediction performance.
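
To make the kind of model described above more concrete, here is a minimal, hypothetical sketch (not the code used in the paper; the framework, layer sizes, patch size and species count are assumptions chosen for illustration only) of a convolutional network that maps a patch of environmental rasters around an occurrence point to multilabel species scores:

[@
# Hypothetical sketch of a convolutional SDM: environmental rasters in, species scores out.
import torch
import torch.nn as nn

class EnvPatchCNN(nn.Module):
    def __init__(self, n_env_layers, n_species):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_env_layers, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # collapse the spatial patch
        )
        self.classifier = nn.Linear(64, n_species)        # one logit per species (multilabel)

    def forward(self, x):                                 # x: (batch, n_env_layers, H, W)
        return self.classifier(self.features(x).flatten(1))

# Toy usage with fake data: 8 environmental layers, 64x64 patches, 100 species.
model = EnvPatchCNN(n_env_layers=8, n_species=100)
patches = torch.randn(4, 8, 64, 64)                       # e.g. climate + soil + land cover layers
presence = torch.randint(0, 2, (4, 100)).float()          # presence/absence labels
loss = nn.BCEWithLogitsLoss()(model(patches), presence)   # multilabel objective
loss.backward()
@]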

Changed line 28 from:
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Hal version]'''+]]]\\
to:
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Hal version]'''+]]]
March 01, 2019, at 03:01 PM by 194.254.113.83 -
Added line 5:
March 01, 2019, at 03:00 PM by 194.254.113.83 -
Changed lines 4-5 from:
[[https://www.springer.com/us/book/9783319764443|[+'''[Editor version (Springer)]'''+]]]\\
to:
[[https://www.springer.com/us/book/9783319764443|[+'''[Springer version]'''+]]]\\
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Hal version
]'''+]]]\\
March 01, 2019, at 03:00 PM by 194.254.113.83 -
Changed line 5 from:
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Hal version]'''+]]]\\
to:
March 01, 2019, at 03:00 PM by 194.254.113.83 -
Changed lines 4-5 from:
[[https://www.springer.com/us/book/9783319764443|[+'''[Editor version]'''+]]]\\
to:
[[https://www.springer.com/us/book/9783319764443|[+'''[Editor version (Springer)]'''+]]]\\
[[https://hal-lirmm.ccsd.cnrs.fr/lirmm-01959343/document|[+'''[Hal
version]'''+]]]\\
March 01, 2019, at 02:58 PM by 194.254.113.83 -
Deleted line 4:
[[|[+'''[HAL version]'''+]]]\\
March 01, 2019, at 02:58 PM by 194.254.113.83 -
Changed lines 4-5 from:
[[https://hal.inria.fr/hal-01182797/document|[+'''[HDRThesis2015]'''+]]] '''HDR habilitation (highest French academic qualification) - defended on 26/05/2015''' \\
to:
[[https://www.springer.com/us/book/9783319764443|[+'''[Editor version]'''+]]]\\
[[|[+
'''[HAL version]'''+]]]\\
March 01, 2019, at 02:57 PM by 194.254.113.83 -
Changed line 7 from:
||%width=150% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||This edited volume focuses on the latest and most impactful advancements of multimedia data globally available for environmental and earth biodiversity. The data reflects the status, behavior, change as well as human interests and concerns which are increasingly crucial for understanding environmental issues and phenomena. This volume addresses the need for the development of advanced methods, techniques and tools for collecting, managing, analyzing, understanding and modeling environmental & biodiversity data, including the automated or collaborative species identification, the species distribution modeling and their environment, such as the air quality or the bio-acoustic monitoring.  Researchers and practitioners in multimedia and environmental topics will find the chapters essential to their continued studies.
to:
||%width=100% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||This edited volume focuses on the latest and most impactful advancements of multimedia data globally available for environmental and earth biodiversity. The data reflects the status, behavior, change as well as human interests and concerns which are increasingly crucial for understanding environmental issues and phenomena. This volume addresses the need for the development of advanced methods, techniques and tools for collecting, managing, analyzing, understanding and modeling environmental & biodiversity data, including the automated or collaborative species identification, the species distribution modeling and their environment, such as the air quality or the bio-acoustic monitoring.  Researchers and practitioners in multimedia and environmental topics will find the chapters essential to their continued studies.
March 01, 2019, at 02:57 PM by 194.254.113.83 -
Changed line 7 from:
||%width=150% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=150% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||This edited volume focuses on the latest and most impactful advancements of multimedia data globally available for environmental and earth biodiversity. The data reflects the status, behavior, change as well as human interests and concerns which are increasingly crucial for understanding environmental issues and phenomena. This volume addresses the need for the development of advanced methods, techniques and tools for collecting, managing, analyzing, understanding and modeling environmental & biodiversity data, including the automated or collaborative species identification, the species distribution modeling and their environment, such as the air quality or the bio-acoustic monitoring.  Researchers and practitioners in multimedia and environmental topics will find the chapters essential to their continued studies.
March 01, 2019, at 02:56 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=150% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
March 01, 2019, at 02:56 PM by 194.254.113.83 -
Changed line 3 from:
!! Large-scale Content-based Visual Information Retrieval      
to:
!! Multimedia Tools and Applications for Environmental & Biodiversity Informatics   
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445.jpeg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=300% [[https://www.springer.com/us/book/9783319764443|https://images-na.ssl-images-amazon.com/images/I/519yVLqVRwL.jpg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
March 01, 2019, at 02:52 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0.jpeg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445.jpeg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
March 01, 2019, at 02:51 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0.jpeg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
March 01, 2019, at 02:51 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0.jpg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
March 01, 2019, at 02:51 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0.jpg]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
March 01, 2019, at 02:50 PM by 194.254.113.83 -
Added lines 2-7:

!! Large-scale Content-based Visual Information Retrieval     
[[https://hal.inria.fr/hal-01182797/document|[+'''[HDRThesis2015]'''+]]] '''HDR habilitation (highest French academic qualification) - defended on 26/05/2015''' \\

|| border=0
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://media.springernature.com/w306/springer-static/cover-hires/book/978-3-319-76445-0]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
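
For readers wondering what the "lowest layer" mentioned above does in practice, here is a minimal, hypothetical sketch of indexing and searching a set of high-dimensional feature vectors. It uses plain random-hyperplane hashing rather than the methods developed in the thesis (such as RMMH or a posteriori multi-probe), and every parameter value is illustrative only:

[@
# Hypothetical sketch: bucket high-dimensional feature vectors with random-hyperplane hashes,
# then answer a query by ranking only the candidates that fall in the same bucket.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 128, 16
planes = rng.standard_normal((n_bits, dim))     # random hyperplanes define the hash

def hash_code(v):
    """Binary code of a vector: sign of its projection on each hyperplane."""
    return (planes @ v > 0).astype(np.uint8).tobytes()

# Build the index: bucket -> list of vector ids.
database = rng.standard_normal((10_000, dim))
index = {}
for i, v in enumerate(database):
    index.setdefault(hash_code(v), []).append(i)

# Query: look only inside the query's bucket, then rank the candidates exactly.
query = database[42] + 0.01 * rng.standard_normal(dim)
candidates = index.get(hash_code(query), [])
ranked = sorted(candidates, key=lambda i: np.linalg.norm(database[i] - query))
print(ranked[:5])
@]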
March 01, 2019, at 02:48 PM by 194.254.113.83 -
Changed line 7 from:
||%width=600% [[https://hal.archives-ouvertes.fr/hal-01629149/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]  ||
to:
||%width=700% [[https://hal.archives-ouvertes.fr/hal-01629149/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]  ||
March 01, 2019, at 02:46 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]  ||
to:
||%width=600% [[https://hal.archives-ouvertes.fr/hal-01629149/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]  ||
March 01, 2019, at 02:46 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]  ||
March 01, 2019, at 02:45 PM by 194.254.113.83 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|]]
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
March 01, 2019, at 02:38 PM by 193.49.108.68 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://drive.google.com/file/d/1nCyzWyzWO2O17tTiJr2XZmEcRxNwR9JX/view?usp=sharing]]
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|]]
March 01, 2019, at 02:38 PM by 193.49.108.68 -
Changed line 7 from:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|https://drive.google.com/file/d/1nCyzWyzWO2O17tTiJr2XZmEcRxNwR9JX/view?usp=sharing]]
March 01, 2019, at 02:21 PM by 193.49.108.68 -
Added lines 6-7:

||%width=300% [[https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/tpg.png]]
March 01, 2019, at 02:18 PM by 193.49.108.68 -
Changed line 5 from:
In classical crowdsourcing frameworks, the labels correspond to well-known or easy-to-learn concepts, so that it is straightforward to train the annotators by giving a few examples with known answers. Neither is true when there are thousands of complex domain-specific labels. The originality of this work is to focus on annotations that usually require expert knowledge (such as plant species names, architectural styles, medical diagnostic tags, etc.). We consider that common knowledge is not sufficient to perform the task, but that anyone can be taught to recognize a small subset of domain-specific concepts. In such a context, it is best to take advantage of the various capabilities of each annotator through teaching (annotators can enhance their knowledge), assignment (annotators can be focused on tasks they have the knowledge to complete) and inference (different annotator propositions can be aggregated to enhance labeling quality). This work presents a set of theoretical contributions and data-driven algorithms that allow the crowdsourcing of thousands of specialized labels through the pro-active training of the annotators. The framework relies on deep learning, variational Bayesian inference and task assignment to adapt to the skills of each annotator, both in the questions asked and in the weights given to their answers. The underlying judgements are Bayesian, based on adaptive priors. To conduct live experiments, the whole framework has been implemented as a serious game available on the web ([[www.theplantgame.com|www.theplantgame.com]]).
to:
In classical crowdsourcing frameworks, the labels correspond to well-known or easy-to-learn concepts, so that it is straightforward to train the annotators by giving a few examples with known answers. Neither is true when there are thousands of complex domain-specific labels. The originality of this work is to focus on annotations that usually require expert knowledge (such as plant species names, architectural styles, medical diagnostic tags, etc.). We consider that common knowledge is not sufficient to perform the task, but that anyone can be taught to recognize a small subset of domain-specific concepts. In such a context, it is best to take advantage of the various capabilities of each annotator through teaching (annotators can enhance their knowledge), assignment (annotators can be focused on tasks they have the knowledge to complete) and inference (different annotator propositions can be aggregated to enhance labeling quality). This work presents a set of theoretical contributions and data-driven algorithms that allow the crowdsourcing of thousands of specialized labels through the pro-active training of the annotators. The framework relies on deep learning, variational Bayesian inference and task assignment to adapt to the skills of each annotator, both in the questions asked and in the weights given to their answers. The underlying judgements are Bayesian, based on adaptive priors. To conduct live experiments, the whole framework has been implemented as a serious game available on the web ([[http://www.theplantgame.com| ThePlantGame]]).
March 01, 2019, at 02:18 PM by 193.49.108.68 -
Changed line 5 from:
In classical crowdsourcing frameworks, the labels correspond to well-known or easy-to-learn concepts, so that it is straightforward to train the annotators by giving a few examples with known answers. Neither is true when there are thousands of complex domain-specific labels. The originality of this work is to focus on annotations that usually require expert knowledge (such as plant species names, architectural styles, medical diagnostic tags, etc.). We consider that common knowledge is not sufficient to perform the task, but that anyone can be taught to recognize a small subset of domain-specific concepts. In such a context, it is best to take advantage of the various capabilities of each annotator through teaching (annotators can enhance their knowledge), assignment (annotators can be focused on tasks they have the knowledge to complete) and inference (different annotator propositions can be aggregated to enhance labeling quality). This work presents a set of theoretical contributions and data-driven algorithms that allow the crowdsourcing of thousands of specialized labels through the pro-active training of the annotators. The framework relies on deep learning, variational Bayesian inference and task assignment to adapt to the skills of each annotator, both in the questions asked and in the weights given to their answers. The underlying judgements are Bayesian, based on adaptive priors. To conduct live experiments, the whole framework has been implemented as a serious game available on the web (www.theplantgame.com).
to:
In classical crowdsourcing frameworks, the labels correspond to well-known or easy-to-learn concepts, so that it is straightforward to train the annotators by giving a few examples with known answers. Neither is true when there are thousands of complex domain-specific labels. The originality of this work is to focus on annotations that usually require expert knowledge (such as plant species names, architectural styles, medical diagnostic tags, etc.). We consider that common knowledge is not sufficient to perform the task, but that anyone can be taught to recognize a small subset of domain-specific concepts. In such a context, it is best to take advantage of the various capabilities of each annotator through teaching (annotators can enhance their knowledge), assignment (annotators can be focused on tasks they have the knowledge to complete) and inference (different annotator propositions can be aggregated to enhance labeling quality). This work presents a set of theoretical contributions and data-driven algorithms that allow the crowdsourcing of thousands of specialized labels through the pro-active training of the annotators. The framework relies on deep learning, variational Bayesian inference and task assignment to adapt to the skills of each annotator, both in the questions asked and in the weights given to their answers. The underlying judgements are Bayesian, based on adaptive priors. To conduct live experiments, the whole framework has been implemented as a serious game available on the web ([[www.theplantgame.com|www.theplantgame.com]]).
March 01, 2019, at 02:17 PM by 193.49.108.68 -
Added line 4:
[[https://hal.archives-ouvertes.fr/hal-01629149/document|[+'''[Trans. on Multimedia 2017]'''+]]]\\
March 01, 2019, at 02:15 PM by 193.49.108.68 -
Added lines 2-5:

!! Crowdsourcing Thousands of Specialized Labels: a Bayesian active training approach
In classical crowdsourcing frameworks, the labels correspond to well-known or easy-to-learn concepts, so that it is straightforward to train the annotators by giving a few examples with known answers. Neither is true when there are thousands of complex domain-specific labels. The originality of this work is to focus on annotations that usually require expert knowledge (such as plant species names, architectural styles, medical diagnostic tags, etc.). We consider that common knowledge is not sufficient to perform the task, but that anyone can be taught to recognize a small subset of domain-specific concepts. In such a context, it is best to take advantage of the various capabilities of each annotator through teaching (annotators can enhance their knowledge), assignment (annotators can be focused on tasks they have the knowledge to complete) and inference (different annotator propositions can be aggregated to enhance labeling quality). This work presents a set of theoretical contributions and data-driven algorithms that allow the crowdsourcing of thousands of specialized labels through the pro-active training of the annotators. The framework relies on deep learning, variational Bayesian inference and task assignment to adapt to the skills of each annotator, both in the questions asked and in the weights given to their answers. The underlying judgements are Bayesian, based on adaptive priors. To conduct live experiments, the whole framework has been implemented as a serious game available on the web (www.theplantgame.com).
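
To make the idea of skill-adaptive aggregation more concrete, here is a minimal, hypothetical sketch. It is not the variational Bayesian framework of the paper: it is a simple EM-style loop that alternates between estimating item labels and estimating each annotator's accuracy under a Beta prior, and all function and parameter names are invented for the example:

[@
# Hypothetical sketch: aggregate annotator answers while estimating each annotator's skill.
import numpy as np

def aggregate(answers, n_labels, n_iters=20, prior=(2.0, 1.0)):
    """answers: list of (item_id, annotator_id, proposed_label) triples."""
    items = sorted({i for i, _, _ in answers})
    annotators = sorted({a for _, a, _ in answers})
    post = {i: np.ones(n_labels) / n_labels for i in items}      # label posterior per item
    skill = {a: prior[0] / sum(prior) for a in annotators}       # prior mean accuracy
    for _ in range(n_iters):
        # Update label posteriors given the current annotator skills.
        for i in items:
            logp = np.zeros(n_labels)
            for ii, a, y in answers:
                if ii != i:
                    continue
                lik = np.full(n_labels, (1.0 - skill[a]) / (n_labels - 1))
                lik[y] = skill[a]                                # annotator is right with prob. skill[a]
                logp += np.log(lik)
            p = np.exp(logp - logp.max())
            post[i] = p / p.sum()
        # Update annotator skills as the posterior mean of a Beta(prior) accuracy.
        for a in annotators:
            hits = sum(post[i][y] for i, aa, y in answers if aa == a)
            total = sum(1 for _, aa, _ in answers if aa == a)
            skill[a] = (prior[0] + hits) / (prior[0] + prior[1] + total)
    return {i: int(np.argmax(post[i])) for i in items}, skill

# Toy usage: 3 items, 2 annotators, 3 candidate labels.
votes = [(0, "a", 1), (0, "b", 1), (1, "a", 2), (1, "b", 0), (2, "a", 1)]
labels, skills = aggregate(votes, n_labels=3)
print(labels, skills)
@]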

February 10, 2016, at 09:29 AM by 193.49.108.68 -
Deleted lines 38-40:


[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]  [[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
February 10, 2016, at 09:28 AM by 193.49.108.68 -
Changed line 33 from:
[[http://dl.acm.org/citation.cfm?id=1992049 | [+'''[ICMR2011]'''+] ]]  [[https://hal.inria.fr/hal-00755696/document | [+'''[ACM-MM2012]'''+] ]]\\
to:
[[http://dl.acm.org/citation.cfm?id=1992049 | [+'''[ICMR2011]'''+] ]]  [[https://hal.inria.fr/hal-00755696/document | '''[+[ACM-MM2012]+]''' ]]\\
February 10, 2016, at 09:28 AM by 193.49.108.68 -
Changed line 33 from:
[[http://dl.acm.org/citation.cfm?id=1992049 | [+'''[ICMR2011]'''+] ]]  [[https://hal.inria.fr/hal-00755696/document | '''[ACM-MM2012]''' ]]\\
to:
[[http://dl.acm.org/citation.cfm?id=1992049 | [+'''[ICMR2011]'''+] ]]  [[https://hal.inria.fr/hal-00755696/document | [+'''[ACM-MM2012]'''+] ]]\\
February 10, 2016, at 09:27 AM by 193.49.108.68 -
Changed lines 33-37 from:
[[https://hal.inria.fr/hal-00755696/document | '''[ACM-MM2012]''' ]]  [[https://hal.inria.fr/hal-00755696/document| '''[ACM-MM2012]''' ]]\\


Our
objects mining and retrieval techniques were integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 and obtained a grant. The movie presented during this event is available here:
to:
[[http://dl.acm.org/citation.cfm?id=1992049 | [+'''[ICMR2011]'''+] ]]  [[https://hal.inria.fr/hal-00755696/document | '''[ACM-MM2012]''' ]]\\
A PhD student of mine ([[https://who.rocq.inria.fr/Mohamed.Trad/|Riadh Trad]]) worked on visual-based event retrieval and discovery in social data (Flickr images). He built a new event-record matching technique making use of both the visual content and the social context [[http://dl.acm.org/citation.cfm?id=1992049|pdf]].

Besides, our
objects mining and retrieval techniques were integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 and obtained a grant. The movie presented during this event is available here:
Changed line 40 from:
Besides, another PhD student of mine ([[https://who.rocq.inria.fr/Mohamed.Trad/|Riadh Trad]]) worked on visual-based event retrieval and discovery in social data (Flickr images). He built a new event-record matching technique making use of both the visual content and the social context [[http://dl.acm.org/citation.cfm?id=1992049|pdf]].
to:
February 10, 2016, at 09:25 AM by 193.49.108.68 -
Changed line 33 from:
[[https://hal.inria.fr/hal-00755696/document | '''ACM-MM2012''' ]]  [[https://hal.inria.fr/hal-00755696/document| '''ACM-MM2012''' ]]\\
to:
[[https://hal.inria.fr/hal-00755696/document | '''[ACM-MM2012]''' ]]  [[https://hal.inria.fr/hal-00755696/document| '''[ACM-MM2012]''' ]]\\
February 10, 2016, at 09:24 AM by 193.49.108.68 -
Changed line 33 from:
[[https://hal.inria.fr/hal-00755696/document | ACM-MM2012 ]]  [[https://hal.inria.fr/hal-00755696/document| ACM-MM2012 ]]\\
to:
[[https://hal.inria.fr/hal-00755696/document | '''ACM-MM2012''' ]]  [[https://hal.inria.fr/hal-00755696/document| '''ACM-MM2012''' ]]\\
February 10, 2016, at 09:24 AM by 193.49.108.68 -
Changed line 33 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''\[ACM-MM2012\]'''+]]]  [[https://hal.inria.fr/hal-00755696/document|[+'''\[ACM-MM2012\]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document | ACM-MM2012 ]]  [[https://hal.inria.fr/hal-00755696/document| ACM-MM2012 ]]\\
February 10, 2016, at 09:23 AM by 193.49.108.68 -
Changed lines 33-34 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]] || [[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''\[ACM-MM2012\]'''+]]]  [[https://hal.inria.fr/hal-00755696/document|[+'''\[ACM-MM2012\]'''+]]]\\

Changed line 41 from:
[[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]  [[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
February 10, 2016, at 09:21 AM by 193.49.108.68 -
Changed line 33 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]], [[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]] || [[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
February 10, 2016, at 09:20 AM by 193.49.108.68 -
Changed lines 33-34 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]   [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]], [[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
Changed line 40 from:
to:
[[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
February 10, 2016, at 09:20 AM by 193.49.108.68 -
Changed lines 33-34 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]  [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
Changed line 40 from:
[[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
to:
February 10, 2016, at 09:19 AM by 193.49.108.68 -
Changed line 34 from:
[[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
to:
Added line 40:
[[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
February 10, 2016, at 09:18 AM by 193.49.108.68 -
Changed lines 33-34 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]]\\
[[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
February 10, 2016, at 09:17 AM by 193.49.108.68 -
Changed line 33 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]
February 10, 2016, at 09:17 AM by 193.49.108.68 -
Changed line 33 from:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]\\
February 10, 2016, at 09:16 AM by 193.49.108.68 -
February 10, 2016, at 09:15 AM by 193.49.108.68 -
Changed lines 33-34 from:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|[+'''[ACM-MM2012]'''+]]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
Our objects mining and retrieval techniques were integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/gcp017-joly.pdf|pdf]] and obtained a grant. The movie presented during this event is available here:
to:
[[https://hal.inria.fr/hal-00755696/document|[+'''[ACM-MM2012]'''+]]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
Our objects mining and retrieval techniques were integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 and obtained a grant. The movie presented during this event is available here:
February 10, 2016, at 09:13 AM by 193.49.108.68 -
Changed lines 33-34 from:
Our objects mining technique was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/gcp017-joly.pdf|pdf]]. The movie presented during this event is available here:
to:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|[+'''[ACM-MM2012]'''+]]] [[http://dl.acm.org/citation.cfm?id=1992049|[+'''[ICMR2011]'''+]]]\\
Our objects mining and retrieval techniques were integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows
the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/gcp017-joly.pdf|pdf]] and obtained a grant. The movie presented during this event is available here:
February 10, 2016, at 09:11 AM by 193.49.108.68 -
February 10, 2016, at 09:10 AM by 193.49.108.68 -
Changed line 21 from:
%width=500% [[http://www.imageclef.org/2013/plant|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
%width=500% [[http://www.lifeclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
February 10, 2016, at 09:09 AM by 193.49.108.68 -
Changed lines 20-21 from:
The data collected through this workflow is used each year since 2011 in the [[http://www.imageclef.org/|ImageCLEF]] and [[http://www.lifeclef.org/|LifeCLEF]] evaluation campaigns that I am coordinating: %width=500% [[http://www.imageclef.org/2013/plant|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
The data collected through this workflow is used each year in the [[http://www.lifeclef.org/|LifeCLEF]] evaluation campaign, which I have been coordinating since 2011:
%width=500% [[http://www.imageclef.org/2013/plant|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
February 10, 2016, at 09:07 AM by 193.49.108.68 -
Changed lines 20-21 from:
The data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign where I am chair of the plant id task:
%width=500% [[http://www.imageclef.org/2013/plant|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
The data collected through this workflow is used each year since 2011 in the [[http://www.imageclef.org/|ImageCLEF]] and [[http://www.lifeclef.org/|LifeCLEF]] evaluation campaigns that I am coordinating: %width=500% [[http://www.imageclef.org/2013/plant|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
February 09, 2016, at 06:26 PM by 128.93.176.76 -
Changed line 16 from:
[[http://www.sciencedirect.com/science/article/pii/S157495411300071X/pdfft?md5=de3d18df0c81b471c2a1abc00d17a08e&pid=1-s2.0-S157495411300071X-main.pdf|[+'''[EcologicalInformatics2013]'''+]]]\\
to:
[[http://www.sciencedirect.com/science/article/pii/S157495411300071X/pdfft?md5=de3d18df0c81b471c2a1abc00d17a08e&pid=1-s2.0-S157495411300071X-main.pdf|[+'''[EcologicalInformatics2014]'''+]]]\\
February 09, 2016, at 06:25 PM by 128.93.176.76 -
Added line 40:
[[https://www.robots.ox.ac.uk/~vgg/rg/papers/bmvc2012__litayem__hash_based.pdf|[+'''[BMVC2012]'''+]]]\\
Deleted line 44:
February 09, 2016, at 06:24 PM by 128.93.176.76 -
Changed line 24 from:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|[+'''[ACM-MM2012]'''+]]]
to:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|[+'''[ACM-MM2012]'''+]]]\\
Added lines 43-44:
[[https://hal.inria.fr/hal-00642178/document|[+'''[CVPR2011]'''+]]]\\
February 09, 2016, at 06:22 PM by 128.93.176.76 -
Changed line 24 from:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|ACM-MM2012]]
to:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|[+'''[ACM-MM2012]'''+]]]
February 09, 2016, at 06:21 PM by 128.93.176.76 -
Changed line 24 from:
[[ACMMM2012|pdf]]
to:
[[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|ACM-MM2012]]
February 09, 2016, at 06:20 PM by 128.93.176.76 -
Changed line 24 from:
[[ACM-MM2012|pdf]]
to:
[[ACMMM2012|pdf]]
February 09, 2016, at 06:20 PM by 128.93.176.76 -
Deleted line 9:
Added line 24:
[[ACM-MM2012|pdf]]
February 09, 2016, at 06:19 PM by 128.93.176.76 -
Changed line 14 from:
%width=500% http://www-sop.inria.fr/members/Alexis.Joly/equation.png
to:
%width=400% http://www-sop.inria.fr/members/Alexis.Joly/equation.png
February 09, 2016, at 06:19 PM by 128.93.176.76 -
Changed line 14 from:
%width=300% http://www-sop.inria.fr/members/Alexis.Joly/equation.png
to:
%width=500% http://www-sop.inria.fr/members/Alexis.Joly/equation.png
February 09, 2016, at 06:19 PM by 128.93.176.76 -
Added line 14:
%width=300% http://www-sop.inria.fr/members/Alexis.Joly/equation.png
February 09, 2016, at 06:15 PM by 128.93.176.76 -
Deleted line 5:
Changed line 7 from:
||%width=300% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
to:
||%width=300% [[https://hal.inria.fr/hal-01182797/document|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
February 09, 2016, at 06:15 PM by 128.93.176.76 -
Changed lines 8-9 from:
||%width=300% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system.
The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
to:
||%width=300% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors. The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user.
February 09, 2016, at 06:14 PM by 128.93.176.76 -
Changed line 8 from:
||%width=400% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system.
to:
||%width=300% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system.
February 09, 2016, at 06:14 PM by 128.93.176.76 -
Changed lines 8-9 from:
||%width=200% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
to:
||%width=400% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system.
The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
February 09, 2016, at 06:13 PM by 128.93.176.76 -
Changed lines 8-12 from:
||%width=120% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||'''Plantnet [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iphone]] and [[https://play.google.com/store/apps/details?id=org.plantnet&hl=en|android]] app''': an image sharing and retrieval application for the identification of plants. It is developed in the context of the [[http://www.plantnet-project.org/|Pl@ntNet]] project by scientists from four French research organisations (INRIA, Cirad, INRA, IRD) and the members of [[http://www.tela-botanica.org/|Tela Botanica]] social network with the financial support of [[http://www.agropolis.fr/|Agropolis fondation]]. Among other features, this free app helps identifying plant species from photographs, through a visual search engine using several of my works (Large-scale matching, A posteriori multi-probe, RMMH). Pl@ntNet is now on [[https://www.facebook.com/pages/Plantnet/488732104545546|Facebook]] and [[https://twitter.com/PlantNetProject|Twitter]]

|| border=0
||%width=200%  [[http://www
.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index
, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.||
to:
||%width=200% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
February 09, 2016, at 06:13 PM by 128.93.176.76 -
Changed line 8 from:
||%width=120% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/PlantNet.png]]  ||'''Plantnet [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iphone]] and [[https://play.google.com/store/apps/details?id=org.plantnet&hl=en|android]] app''': an image sharing and retrieval application for the identification of plants. It is developed in the context of the [[http://www.plantnet-project.org/|Pl@ntNet]] project by scientists from four French research organisations (INRIA, Cirad, INRA, IRD) and the members of [[http://www.tela-botanica.org/|Tela Botanica]] social network with the financial support of [[http://www.agropolis.fr/|Agropolis fondation]]. Among other features, this free app helps identifying plant species from photographs, through a visual search engine using several of my works (Large-scale matching, A posteriori multi-probe, RMMH). Pl@ntNet is now on [[https://www.facebook.com/pages/Plantnet/488732104545546|Facebook]] and [[https://twitter.com/PlantNetProject|Twitter]]
to:
||%width=120% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]  ||'''Plantnet [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iphone]] and [[https://play.google.com/store/apps/details?id=org.plantnet&hl=en|android]] app''': an image sharing and retrieval application for the identification of plants. It is developed in the context of the [[http://www.plantnet-project.org/|Pl@ntNet]] project by scientists from four French research organisations (INRIA, Cirad, INRA, IRD) and the members of [[http://www.tela-botanica.org/|Tela Botanica]] social network with the financial support of [[http://www.agropolis.fr/|Agropolis fondation]]. Among other features, this free app helps identifying plant species from photographs, through a visual search engine using several of my works (Large-scale matching, A posteriori multi-probe, RMMH). Pl@ntNet is now on [[https://www.facebook.com/pages/Plantnet/488732104545546|Facebook]] and [[https://twitter.com/PlantNetProject|Twitter]]
February 09, 2016, at 06:12 PM by 128.93.176.76 -
Changed lines 8-9 from:
||%width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
||'''Plantnet [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iphone]] and [[https://play.google.com/store/apps/details?id=org.plantnet&hl=en|android]] app''': an image sharing and retrieval application for the identification of plants. It is developed in the context of the [[http://www.plantnet-project.org/|Pl@ntNet]] project by scientists from four French research organisations (INRIA, Cirad, INRA, IRD) and the members of [[http://www.tela-botanica.org/|Tela Botanica]] social network with the financial support of [[http://www.agropolis.fr/|Agropolis fondation]]. Among other features, this free app helps identifying plant species from photographs, through a visual search engine using several of my works (Large-scale matching, A posteriori multi-probe, RMMH). Pl@ntNet is now on [[https://www.facebook.com/pages/Plantnet/488732104545546|Facebook]] and [[https://twitter.com/PlantNetProject|Twitter]]
to:
||%width=120% [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|http://www-sop.inria.fr/members/Alexis.Joly/PlantNet.png]]  ||'''Plantnet [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iphone]] and [[https://play.google.com/store/apps/details?id=org.plantnet&hl=en|android]] app''': an image sharing and retrieval application for the identification of plants. It is developed in the context of the [[http://www.plantnet-project.org/|Pl@ntNet]] project by scientists from four French research organisations (INRIA, Cirad, INRA, IRD) and the members of [[http://www.tela-botanica.org/|Tela Botanica]] social network with the financial support of [[http://www.agropolis.fr/|Agropolis fondation]]. Among other features, this free app helps identifying plant species from photographs, through a visual search engine using several of my works (Large-scale matching, A posteriori multi-probe, RMMH). Pl@ntNet is now on [[https://www.facebook.com/pages/Plantnet/488732104545546|Facebook]] and [[https://twitter.com/PlantNetProject|Twitter]]
February 09, 2016, at 06:12 PM by 128.93.176.76 -
February 09, 2016, at 06:11 PM by 128.93.176.76 -
Added lines 5-9:


|| border=0
||%width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
||'''Plantnet [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iphone]] and [[https://play.google.com/store/apps/details?id=org.plantnet&hl=en|android]] app''': an image sharing and retrieval application for the identification of plants. It is developed in the context of the [[http://www.plantnet-project.org/|Pl@ntNet]] project by scientists from four French research organisations (INRIA, Cirad, INRA, IRD) and the members of [[http://www.tela-botanica.org/|Tela Botanica]] social network with the financial support of [[http://www.agropolis.fr/|Agropolis fondation]]. Among other features, this free app helps identifying plant species from photographs, through a visual search engine using several of my works (Large-scale matching, A posteriori multi-probe, RMMH). Pl@ntNet is now on [[https://www.facebook.com/pages/Plantnet/488732104545546|Facebook]] and [[https://twitter.com/PlantNetProject|Twitter]]
February 09, 2016, at 06:10 PM by 128.93.176.76 -
Changed lines 8-10 from:
||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.

 
to:
||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.||
February 09, 2016, at 06:10 PM by 128.93.176.76 -
Changed lines 5-7 from:
||%lfloat% %width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
to:

|| border=0
||
%width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
Added lines 9-11:

 

February 09, 2016, at 06:09 PM by 128.93.176.76 -
Changed lines 5-6 from:
%lfloat% %width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
to:
||%lfloat% %width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
||Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
February 09, 2016, at 06:08 PM by 128.93.176.76 -
Changed line 5 from:
%rfloat%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
to:
%lfloat% %width=200%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
February 09, 2016, at 06:07 PM by 128.93.176.76 -
February 09, 2016, at 06:06 PM by 128.93.176.76 -
Changed line 5 from:
%rfloat%  [[http://www.imageclef.org/2013/plant|archi.png]]
to:
%rfloat%  [[http://www.imageclef.org/2013/plant|http://www-sop.inria.fr/members/Alexis.Joly/archi.png]]
February 09, 2016, at 06:06 PM by 128.93.176.76 -
Changed line 5 from:
%rfloat width=500% [[http://www.imageclef.org/2013/plant|archi.png]]
to:
%rfloat%  [[http://www.imageclef.org/2013/plant|archi.png]]
February 09, 2016, at 05:59 PM by 128.93.176.76 -
Added line 5:
%rfloat width=500% [[http://www.imageclef.org/2013/plant|archi.png]]
February 09, 2016, at 05:57 PM by 128.93.176.76 -
Changed line 9 from:
[[http://dl.acm.org/citation.cfm?id=2749328&dl=ACM&coll=DL&CFID=317093993&CFTOKEN=43472768|[+'''[ICMR2015]'''+]]]
to:
[[http://dl.acm.org/citation.cfm?id=2749328&dl=ACM&coll=DL&CFID=317093993&CFTOKEN=43472768|[+'''[ICMR2015]'''+]]]\\
February 09, 2016, at 05:57 PM by 128.93.176.76 -
Changed line 10 from:
to:
This paper introduces a new image representation relying on the spatial pooling of geometrically consistent visual matches. To this end, we introduce a new match kernel based on the inverse rank of the shared nearest neighbors combined with local geometric constraints. To avoid overfitting and reduce processing costs, the dimensionality of the resulting over-complete representation is further reduced by hierarchically pooling the raw consistent matches according to their spatial position in the training images. The final image representation is obtained by concatenating the resulting feature vectors at several resolutions. Learning from these representations using a logistic regression classifier is shown to provide excellent fine-grained classification performance, outperforming the results reported in the literature on several classification tasks.
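As a rough illustration of the inverse-rank voting idea only - not the paper's actual formulation: nearest neighbors are found by brute force, and the geometric constraints and hierarchical spatial pooling are omitted; all names below are hypothetical:
[@
import numpy as np

def inverse_rank_scores(query_descs, db_descs, db_image_ids, k=5):
    # Each query descriptor votes for the database images owning its k nearest
    # database descriptors, weighted by the inverse of the neighbor rank.
    scores = {}
    for q in query_descs:
        dists = np.linalg.norm(db_descs - q, axis=1)
        for rank, idx in enumerate(np.argsort(dists)[:k], start=1):
            img = int(db_image_ids[idx])
            scores[img] = scores.get(img, 0.0) + 1.0 / rank
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db_descs = rng.normal(size=(100, 16)).astype(np.float32)
    db_image_ids = rng.integers(0, 5, size=100)   # descriptors belonging to 5 database images
    query_descs = db_descs[:10] + 0.01 * rng.normal(size=(10, 16)).astype(np.float32)
    print(inverse_rank_scores(query_descs, db_descs, db_image_ids))
@]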
February 09, 2016, at 05:56 PM by 128.93.176.76 -
Added lines 7-9:

!! Kernelizing Spatially Consistent Visual Matches
[[http://dl.acm.org/citation.cfm?id=2749328&dl=ACM&coll=DL&CFID=317093993&CFTOKEN=43472768|[+'''[ICMR2015]'''+]]]
February 09, 2016, at 05:53 PM by 128.93.176.76 -
Changed line 4 from:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' [[https://hal.inria.fr/hal-01182797/document|[+'''[HDRThesis2015]'''+]]]\\
to:
[[https://hal.inria.fr/hal-01182797/document|[+'''[HDRThesis2015]'''+]]] '''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' \\
February 09, 2016, at 05:53 PM by 128.93.176.76 -
February 09, 2016, at 05:52 PM by 128.93.176.76 -
Changed line 10 from:
[[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|[+'''[EcologicalInformatics2013]'''+]]]\\
to:
[[http://www.sciencedirect.com/science/article/pii/S157495411300071X/pdfft?md5=de3d18df0c81b471c2a1abc00d17a08e&pid=1-s2.0-S157495411300071X-main.pdf|[+'''[EcologicalInformatics2013]'''+]]]\\
February 09, 2016, at 05:51 PM by 128.93.176.76 -
Changed line 10 from:
[[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|[+'''[pdf]'''+]]]\\
to:
[[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|[+'''[EcologicalInformatics2013]'''+]]]\\
February 09, 2016, at 05:50 PM by 128.93.176.76 -
Changed line 4 from:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' [[https://hal.inria.fr/hal-01182797/document|[+'''[pdf]'''+]]]\\
to:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' [[https://hal.inria.fr/hal-01182797/document|[+'''[HDRThesis2015]'''+]]]\\
February 09, 2016, at 05:50 PM by 128.93.176.76 -
Changed line 10 from:
[[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|[+'''[pdf]'''+]]]
to:
[[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|[+'''[pdf]'''+]]]\\
February 09, 2016, at 05:49 PM by 128.93.176.76 -
Added line 10:
[[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|[+'''[pdf]'''+]]]
February 09, 2016, at 05:49 PM by 128.93.176.76 -
Changed line 4 from:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' [[https://hal.inria.fr/hal-01182797/document|[-'''[pdf]'''-]]]\\
to:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' [[https://hal.inria.fr/hal-01182797/document|[+'''[pdf]'''+]]]\\
February 09, 2016, at 05:49 PM by 128.93.176.76 -
Changed lines 3-4 from:
!! Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|[-'''[pdf]'''-]]]
'''
HDR habilitation (highest French academic qualification) - defended the 26/05/2015'''\\
to:
!! Large-scale Content-based Visual Information Retrieval     
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015''' [[https://hal.inria.fr/hal-01182797/document|[-'''[pdf]'''-]]]\\
February 09, 2016, at 05:48 PM by 128.93.176.76 -
Changed line 3 from:
!! HDR habilitation: Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|[-'''[pdf]'''-]]]
to:
!! Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|[-'''[pdf]'''-]]]
February 09, 2016, at 05:48 PM by 128.93.176.76 -
Changed line 5 from:
Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This thesis describes several of my works related to large-scale content-based information retrieval. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
to:
Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This [[https://hal.inria.fr/hal-01182797/document|thesis]] describes several of my works related to this domain. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
February 09, 2016, at 05:47 PM by 128.93.176.76 -
Changed line 5 from:
Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. Such methods have been intensively studied in the multimedia community to allow managing the massive amount of raw multimedia documents created every day (e.g. video will account to 84% of U.S. internet traffic by 2018). Recent years have consequently witnessed a consistent growth of content-aware and multi-modal search engines deployed on massive multimedia data. Popular multimedia search applications such as Google images, Youtube, Shazam, Tineye or MusicID clearly demonstrated that the first generation of large-scale audio-visual search technologies is now mature enough to be deployed on real-world big data. All these successful applications did greatly benefit from 15 years of research on multimedia analysis and efficient content-based indexing techniques. Yet the maturity reached by the first generation of content-based search engines does not preclude an intensive research activity in the field. There is actually still a lot of hard problems to be solved before we can retrieve any information in images or sounds as easily as we do in text documents. Content-based search methods actually have to reach a finer understanding of the contents as well as a higher semantic level. This requires modeling the raw signals by more and more complex and numerous features, so that the algorithms for analyzing, indexing and searching such features have to evolve accordingly. This thesis describes several of my works related to large-scale content-based information retrieval. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
to:
Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. This thesis describes several of my works related to large-scale content-based information retrieval. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
February 09, 2016, at 05:46 PM by 128.93.176.76 -
Changed line 3 from:
!! HDR habilitation: Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|[-[pdf]-]]]
to:
!! HDR habilitation: Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|[-'''[pdf]'''-]]]
February 09, 2016, at 05:45 PM by 128.93.176.76 -
Changed line 3 from:
!! HDR habilitation on Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|'''[pdf]''']]
to:
!! HDR habilitation: Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|[-[pdf]-]]]
February 09, 2016, at 05:44 PM by 128.93.176.76 -
Changed line 3 from:
!! Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|'''[pdf]''']]
to:
!! HDR habilitation on Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|'''[pdf]''']]
February 09, 2016, at 05:41 PM by 128.93.176.76 -
February 09, 2016, at 05:40 PM by 128.93.176.76 -
Changed line 3 from:
!! Large-scale Content-based Visual Information Retrieval [[https://hal.inria.fr/hal-01182797/document|['''pdf''']]]
to:
!! Large-scale Content-based Visual Information Retrieval      [[https://hal.inria.fr/hal-01182797/document|'''[pdf]''']]
February 09, 2016, at 05:39 PM by 128.93.176.76 -
Changed line 3 from:
!! Large-scale Content-based Visual Information Retrieval [[https://hal.inria.fr/hal-01182797/document|PDF]]
to:
!! Large-scale Content-based Visual Information Retrieval [[https://hal.inria.fr/hal-01182797/document|['''pdf''']]]
February 09, 2016, at 05:39 PM by 128.93.176.76 -
Changed line 3 from:
!! Large-scale Content-based Visual Information Retrieval
to:
!! Large-scale Content-based Visual Information Retrieval [[https://hal.inria.fr/hal-01182797/document|PDF]]
February 09, 2016, at 05:38 PM by 128.93.176.76 -
Changed line 4 from:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015'''
to:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015'''\\
February 09, 2016, at 05:38 PM by 128.93.176.76 -
Changed lines 4-5 from:
(HDR habilitation) highest French academic qualification - defended the 26/05/2015
to:
'''HDR habilitation (highest French academic qualification) - defended the 26/05/2015'''
Rather than restricting search to the use of metadata, content-based information retrieval methods attempt to index, search and browse digital objects by means of signatures or features describing their actual content. Such methods have been intensively studied in the multimedia community to allow managing the massive amount of raw multimedia documents created every day (e.g. video will account to 84% of U.S. internet traffic by 2018). Recent years have consequently witnessed a consistent growth of content-aware and multi-modal search engines deployed on massive multimedia data. Popular multimedia search applications such as Google images, Youtube, Shazam, Tineye or MusicID clearly demonstrated that the first generation of large-scale audio-visual search technologies is now mature enough to be deployed on real-world big data. All these successful applications did greatly benefit from 15 years of research on multimedia analysis and efficient content-based indexing techniques. Yet the maturity reached by the first generation of content-based search engines does not preclude an intensive research activity in the field. There is actually still a lot of hard problems to be solved before we can retrieve any information in images or sounds as easily as we do in text documents. Content-based search methods actually have to reach a finer understanding of the contents as well as a higher semantic level. This requires modeling the raw signals by more and more complex and numerous features, so that the algorithms for analyzing, indexing and searching such features have to evolve accordingly. This thesis describes several of my works related to large-scale content-based information retrieval. The different contributions are presented in a bottom-up fashion reflecting a typical three-tier software architecture of an end-to-end multimedia information retrieval system. The lowest layer is only concerned with managing, indexing and searching large sets of high-dimensional feature vectors, whatever their origin or role in the upper levels (visual or audio features, global or part-based descriptions, low or high semantic level, etc. ). The middle layer rather works at the document level and is in charge of analyzing, indexing and searching collections of documents. It typically extracts and embeds the low-level features, implements the querying mechanisms and post-processes the results returned by the lower layer. The upper layer works at the applicative level and is in charge of providing useful and interactive functionalities to the end-user. It typically implements the front-end of the search application, the crawler and the orchestration of the different indexing and search services.
February 09, 2016, at 05:37 PM by 128.93.176.76 -
Changed line 8 from:
! Interactive plant identification based on social images
to:
!! Interactive plant identification based on social images
Changed line 15 from:
! Scalable Mining of Small Visual Objects
to:
!! Scalable Mining of Small Visual Objects
Changed line 23 from:
! Visual based Event Mining
to:
!! Visual based Event Mining
Changed line 30 from:
! Hash-based SVM approximation
to:
!! Hash-based SVM approximation
Changed line 33 from:
! Random Maximum Margin Hashing
to:
!! Random Maximum Margin Hashing
Changed line 40 from:
! Logo retrieval with a contrario visual query expansion
to:
!! Logo retrieval with a contrario visual query expansion
Changed line 50 from:
! Interactive objects retrieval with efficient boosting
to:
!! Interactive objects retrieval with efficient boosting
Changed line 55 from:
! Multidimensional Hashing
to:
!! High-dimensional Hashing
Changed line 64 from:
! Content-based Video Copy Detection
to:
!! Content-based Video Copy Detection
Changed line 77 from:
! Visual Local Features
to:
!! Visual Local Features
Changed line 83 from:
* Dissociated dipoles [[http://portal.acm.org/citation.cfm?id=1282280.1282362| [DIPOLES07] ]]
to:
!! Dissociated dipoles [[http://portal.acm.org/citation.cfm?id=1282280.1282362| [DIPOLES07] ]]
Changed line 88 from:
* Density-based selection of local features
to:
!! Density-based selection of local features
February 09, 2016, at 05:36 PM by 128.93.176.76 -
Changed line 3 from:
! Large-scale Content-based Visual Information Retrieval
to:
!! Large-scale Content-based Visual Information Retrieval
February 09, 2016, at 05:35 PM by 128.93.176.76 -
Changed line 3 from:
!Large-scale Content-based Visual Information Retrieval
to:
! Large-scale Content-based Visual Information Retrieval
February 09, 2016, at 05:35 PM by 128.93.176.76 -
Changed lines 3-4 from:
!Large-scale Content-based Visual Information Retrieval (HDR habilitation)
highest French academic qualification - defended the 26/05/2015
to:
!Large-scale Content-based Visual Information Retrieval
(HDR habilitation) highest French academic qualification - defended the 26/05/2015
February 09, 2016, at 05:35 PM by 128.93.176.76 -
Changed lines 3-5 from:
!Large-scale Content-based Visual Information Retrieval (HDR habilitation (highest French academic qualification) - defended the 26/05/2015)
to:
!Large-scale Content-based Visual Information Retrieval (HDR habilitation)
highest
French academic qualification - defended the 26/05/2015
February 09, 2016, at 05:34 PM by 128.93.176.76 -
Added lines 2-4:

!Large-scale Content-based Visual Information Retrieval (HDR habilitation (highest French academic qualification) - defended the 26/05/2015)

February 09, 2016, at 03:53 PM by 193.49.108.68 -
Added lines 1-2:
(:notitle:)
July 23, 2013, at 05:19 PM by 128.93.176.4 -
Changed line 5 from:
The data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign where I co-organize the plant identification task:
to:
The data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign where I am chair of the plant id task:
July 23, 2013, at 05:19 PM by 128.93.176.4 -
Changed line 5 from:
The data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign.
to:
The data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign where I co-organize the plant identification task:
July 23, 2013, at 05:18 PM by 128.93.176.4 -
Changed lines 5-7 from:
The growing data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign.
%width=500% [[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
The data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign.
%width=500% [[http://www.imageclef.org/2013/plant|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
July 23, 2013, at 05:18 PM by 128.93.176.4 -
Changed lines 6-7 from:
%width=600% [[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
%width=500% [[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
July 23, 2013, at 05:17 PM by 128.93.176.4 -
Changed lines 6-7 from:
%width=800% [[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
%width=600% [[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
July 23, 2013, at 05:17 PM by 128.93.176.4 -
Changed lines 6-7 from:
[[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
to:
%width=800% [[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]
July 23, 2013, at 05:17 PM by 128.93.176.4 -
Changed lines 5-6 from:
The growing data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaigns.
to:
The growing data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaign.
[[http://www.imageclef.org/|http://www.imageclef.org/system/files/bannerImageCLEF2013PlantTaskMini.png]]

July 23, 2013, at 05:16 PM by 128.93.176.4 -
Changed line 2 from:
Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs ([[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|pdf]]) contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear) . At the time of writing, the whole framework covers about half of
to:
Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs ([[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|pdf]]) contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear). At the time of writing, the whole framework covers about half of
Added lines 5-6:
The growing data collected through this workflow is used each year since 2011 in [[http://www.imageclef.org/|ImageCLEF]] evaluation campaigns.
July 23, 2013, at 05:13 PM by 128.93.176.4 -
Changed line 2 from:
Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear) . At the time of writing, the whole framework covers about half of
to:
Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs ([[http://www-sop.inria.fr/members/Alexis.Joly/maed022s-goeau.pdf|pdf]]) contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear) . At the time of writing, the whole framework covers about half of
July 23, 2013, at 05:11 PM by 128.93.176.4 -
Changed lines 2-3 from:
Speeding up the collection and integration of raw botanical observation data is a crucial step towards the sustainable development of agriculture and the conservation of biodiversity. Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative
workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear) . At the time of writing, the whole framework covers about half of
to:
Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear) . At the time of writing, the whole framework covers about half of
July 23, 2013, at 05:11 PM by 128.93.176.4 -
Changed line 1 from:
! Interactive plant identification based on social image data
to:
! Interactive plant identification based on social images
July 23, 2013, at 05:10 PM by 128.93.176.4 -
Added lines 1-5:
! Interactive plant identification based on social image data
Speeding up the collection and integration of raw botanical observation data is a crucial step towards the sustainable development of agriculture and the conservation of biodiversity. Initiated in the context of a citizen science project, the main contribution of this work is an innovative collaborative
workflow focused on image-based plant identification as a means to enlist new contributors and facilitate access to botanical data. Since 2010, hundreds of thousands of geo-tagged and dated plant photographs were collected and revised by hundreds of novice, amateur and expert botanists of a specialized social network. An image-based identification tool - available as both a [[http://identify.plantnet-project.org/en/base/tree|web]] and an [[https://itunes.apple.com/en/app/plantnet/id600547573?mt=8|iPhone application]] - is synchronized with that growing data and allows any user to query or enrich the system with new observations. An important originality is that it works with up to five different organs contrary to previous approaches that mainly relied on the leaf. This allows querying the system at any period of the year and with complementary images composing a plant observation. Extensive experiments with the visual search engine show that it is already very helpful to determine a plant among hundreds or thousands of species (to appear) . At the time of writing, the whole framework covers about half of
the plant species living in France (3500 species), which already makes it the widest existing automated identification tool.

July 23, 2013, at 05:03 PM by 128.93.176.4 -
Changed lines 14-15 from:
Besides, another Phd student of mine ([[https://who.rocq.inria.fr/Mohamed.Trad/|Riadh Trad]]) did work on visual-based event retrieval and discovery in social data (Flickr images). He built a new event records matching technique making use of both the visual content and the social context [[xhttp://dl.acm.org/citation.cfm?id=1992049|pdf]].
to:
Besides, another Phd student of mine ([[https://who.rocq.inria.fr/Mohamed.Trad/|Riadh Trad]]) did work on visual-based event retrieval and discovery in social data (Flickr images). He built a new event records matching technique making use of both the visual content and the social context [[http://dl.acm.org/citation.cfm?id=1992049|pdf]].
July 23, 2013, at 05:01 PM by 128.93.176.4 -
Changed lines 2-4 from:
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications, including salient event detection, relevant pattern discovery in scientific data or simply web browsing through hyper-visual links. In this work [[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|pdf]], we formally revisited the problem of mining or discovering such objects, and introduced a new hashing strategy, working first at the visual level, and then at the geometric level. Experiments conducted both on the [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos/FlickrBelgaLogos.html|FlickrBelgaLogo]] dataset and on millions of images show the efficiency of our method.\\
Applying this technique to web images allows suggesting trustworthy hyper-visual links to the user and ultimately lets them browse the web in a radically new way, as illustrated in this video:
to:
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications, including salient event detection, relevant pattern discovery in scientific data or simply web browsing through hyper-visual links. In this work [[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|pdf]], we formally revisited the problem of mining or discovering such objects, and introduced a new hashing strategy, working first at the visual level, and then at the geometric level. Experiments conducted both on the [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos/FlickrBelgaLogos.html|FlickrBelgaLogo]] dataset and on millions of images show the efficiency of our method. Applying this technique to web images allows suggesting trustworthy hyper-visual links to the user and ultimately lets them browse the web in a radically new way, as illustrated in this video:
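A minimal, hypothetical sketch of the visual-level hashing stage under simplifying assumptions - random sign projections stand in for the published hash functions, and the geometric-level stage of the method is omitted; every name below is illustrative only:
[@
import numpy as np
from collections import defaultdict

def candidate_repeated_objects(descriptors, image_ids, n_bits=16, min_images=3, seed=0):
    # Visual-level stage: hash every local descriptor with random sign projections.
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(descriptors.shape[1], n_bits))
    codes = (descriptors @ proj) > 0
    # Keep hash buckets populated by several distinct images: these groups of
    # images are candidates for sharing a repeated visual object.
    buckets = defaultdict(set)
    for code, img in zip(codes, image_ids):
        buckets[code.tobytes()].add(int(img))
    return [imgs for imgs in buckets.values() if len(imgs) >= min_images]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=(1, 32))
    descs = np.vstack([base + 0.01 * rng.normal(size=(6, 32)),  # a repeated "logo"
                       rng.normal(size=(50, 32))])              # background clutter
    imgs = np.concatenate([np.arange(6), rng.integers(6, 20, size=50)])
    print(candidate_repeated_objects(descs, imgs))
@]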
July 23, 2013, at 05:01 PM by 128.93.176.4 -
Changed lines 15-16 from:
Besides, another Phd student of mine (https://who.rocq.inria.fr/Mohamed.Trad/) did work on visual-based event retrieval and discovery in social data (Flickr images). He built a new event records matching technique making use of both the visual content and the social context [[xhttp://dl.acm.org/citation.cfm?id=1992049|pdf]].
to:
Besides, another Phd student of mine ([[https://who.rocq.inria.fr/Mohamed.Trad/|Riadh Trad]]) did work on visual-based event retrieval and discovery in social data (Flickr images). He built a new event records matching technique making use of both the visual content and the social context [[xhttp://dl.acm.org/citation.cfm?id=1992049|pdf]].
July 23, 2013, at 05:01 PM by 128.93.176.4 -
Changed lines 11-13 from:

This method
was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most widely circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is that it relies on transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/gcp017-joly.pdf|pdf]]. The movie presented during this event is available here:
to:
Our object mining technique was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most widely circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is that it relies on transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/gcp017-joly.pdf|pdf]]. The movie presented during this event is available here:
Added lines 15-16:
Besides, another Phd student of mine (https://who.rocq.inria.fr/Mohamed.Trad/) did work on visual-based event retrieval and discovery in social data (Flickr images). He built a new event records matching technique making use of both the visual content and the social context [[xhttp://dl.acm.org/citation.cfm?id=1992049|pdf]].
July 23, 2013, at 04:55 PM by 128.93.176.4 -
Added line 11:
Changed lines 14-15 from:
[[http://www.otmedia.fr/?p=217|http://www.otmedia.fr/wp-content/uploads/2012/11/OTMediaPres001-300x166.jpg]]
to:
%width=200% [[http://www.otmedia.fr/?p=217|http://www.otmedia.fr/wp-content/uploads/2012/11/OTMediaPres001-300x166.jpg]]
July 23, 2013, at 04:53 PM by 128.93.176.4 -
Changed lines 5-6 from:
%width=200% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
to:
%width=200% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
 
July 23, 2013, at 04:53 PM by 128.93.176.4 -
Changed lines 4-5 from:
%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
to:

%width=200% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
July 23, 2013, at 04:52 PM by 128.93.176.4 -
Changed lines 4-5 from:

||
%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
to:
%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
July 23, 2013, at 04:51 PM by 128.93.176.4 -
Changed lines 3-6 from:
|| border=0
||%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]  ||  ||[+'''Small objects query suggestion in a large web-image collection''' +], developed within the %height=30%[[http://www.otmedia.fr/|[+'''OTMedia'''+]]] project, accepted demo at ACM MM 2013, based on the following publications of my PhD students: [[http://link.springer.com/article/10.1007/s11042-012-1340-5|MTAP-2013]], [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/letessier2012Scalable.pdf|ACM-MM-2012]]

to:
Applying this technique to web images allows to suggest trustful hyper-visual links to the user and finally allows him to browse the web in a radically new way as illustrated in this video:

||
%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]
This new search paradigm is published in 
[[http://link.springer.com/article/10.1007/s11042-012-1340-5|MTAP-2013]] and will be demonstrated at ACM MM 2013. 

July 23, 2013, at 04:46 PM by 128.93.176.4 -
Added lines 3-7:
|| border=0
||%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]  ||  ||[+'''Small objects query suggestion in a large web-image collection''' +], developed within the %height=30%[[http://www.otmedia.fr/|[+'''OTMedia'''+]]] project, accepted demo at ACM MM 2013, based on the following publications of my PhD students: [[http://link.springer.com/article/10.1007/s11042-012-1340-5|MTAP-2013]], [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/letessier2012Scalable.pdf|ACM-MM-2012]]


! Visual based Event Mining
July 23, 2013, at 04:45 PM by 128.93.176.4 -
Deleted lines 2-4:
|| border=0
||%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]  ||  ||[+'''Small objects query suggestion in a large web-image collection''' +], developed within the %height=30%[[http://www.otmedia.fr/|[+'''OTMedia'''+]]] project, accepted demo at ACM MM 2013, based on the following publications of my PhD students: [[http://link.springer.com/article/10.1007/s11042-012-1340-5|MTAP-2013]], [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/letessier2012Scalable.pdf|ACM-MM-2012]]

July 23, 2013, at 04:44 PM by 128.93.176.4 -
Added lines 3-5:
|| border=0
||%width=250% [[https://www.youtube.com/watch?v=M3X2WSNKcAQ|http://www-sop.inria.fr/members/Alexis.Joly/bieres.png]]  ||  ||[+'''Small objects query suggestion in a large web-image collection''' +], developed within the %height=30%[[http://www.otmedia.fr/|[+'''OTMedia'''+]]] project, accepted demo at ACM MM 2013, based on the following publications of my PhD students: [[http://link.springer.com/article/10.1007/s11042-012-1340-5|MTAP-2013]], [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/letessier2012Scalable.pdf|ACM-MM-2012]]

November 22, 2012, at 04:33 PM by 128.93.176.29 -
Changed lines 8-9 from:
We addressed the problem of speeding-up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing [[http://www-sop.inria.fr/members/Alexis.Joly/bmvc_final.pdf|pdf].  Whereas the mainstream work in the field is focused on training classifiers on huge amount of data, less efforts are spent on the counterpart scalability issue: how to apply big trained models efficiently on huge non annotated collections ? In this work, we propose building space-and-time-efficient hash-based classifiers that are applied in a first stage in order to approximate the exact results and filter the hypothesis space. Experiments performed with millions of one-against-one classifiers show that the proposed hash-based classifier can be more than two orders of magnitude faster than the exact classifier with minor losses in quality.
to:
We addressed the problem of speeding up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing [[http://www-sop.inria.fr/members/Alexis.Joly/bmvc_final.pdf|pdf]]. Whereas the mainstream work in the field focuses on training classifiers on huge amounts of data, less effort is spent on the counterpart scalability issue: how to apply big trained models efficiently to huge non-annotated collections? In this work, we propose building space-and-time-efficient hash-based classifiers that are applied in a first stage in order to approximate the exact results and filter the hypothesis space. Experiments performed with millions of one-against-one classifiers show that the proposed hash-based classifier can be more than two orders of magnitude faster than the exact classifier with minor losses in quality.
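To make the two-stage principle concrete, here is a small illustrative Python/numpy sketch, not the implementation evaluated in the paper (the function names, the `keep` parameter and the cosine approximation are all made up for illustration): the hash codes of the query and of every classifier weight vector are compared first, and only the best-ranked classifiers are evaluated exactly.

[@
import numpy as np

def sign_codes(vectors, proj):
    # sign-of-random-projection codes; the Hamming agreement between two codes
    # gives a rough estimate of the angle between the corresponding vectors
    return np.asarray(vectors, dtype=float) @ proj > 0

def filtered_svm_scores(x, weights, biases, proj, keep=10):
    # First stage: rank all classifiers with cheap hash-code comparisons.
    # Second stage: compute the exact linear scores only for the survivors.
    x_code = sign_codes(x[None, :], proj)[0]
    w_codes = sign_codes(weights, proj)
    agreement = (w_codes == x_code).mean(axis=1)
    approx_cos = np.cos(np.pi * (1.0 - agreement))      # crude cosine(w, x) estimate
    candidates = np.argsort(-approx_cos)[:keep]
    exact = weights[candidates] @ x + biases[candidates]
    return dict(zip(candidates.tolist(), exact.tolist()))
@]

Note that this toy first stage ignores the bias and the vector norms, which is one of several simplifications compared with the filtering scheme of the paper.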
November 22, 2012, at 04:32 PM by 128.93.176.29 -
Changed lines 7-9 from:
! High-dimensional data hashing
High dimensional data hashing is essential for scaling up and distributing data analysis applications involving feature-rich objects, such as text documents, images or multi-modal entities (scientific observations, events, etc.).  We recently investigated the use of high dimensional hashing methods for efficiently approximating K-NN graphs [[http://dl.acm.org/citation.cfm?id=2324847|pdf]], particularly in distributed environments. We highlighted the importance of balancing issues on the performance of such approaches and show why the baseline approach using Locality Sensitive Hashing does not perform well. Our new KNN-join method is based on RMMH, a hash function family based on randomly trained classifiers that we introduced in 2011. We show that the resulting hash tables are much more balanced and that the number of resulting collisions can be greatly reduced without degrading quality. We further improve the load balancing of our distributed approach by designing a parallelized local join algorithm, implemented within the MapReduce framework. In another work [[http://www-sop.inria.fr/members/Alexis.Joly/bmvc_final.pdf|pdf], we addressed the problem of speeding-up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing.  Whereas the mainstream work in the field is focused on training classifiers on huge amount of data, less efforts are spent on the counterpart scalability issue: how to apply big trained models efficiently on huge non annotated collections ? In this work, we propose building efficient hash-based classifiers that are applied in a first stage in order to approximate the exact results and filter the hypothesis space. Experiments performed with millions of one-against-one classifiers show that the proposed hash-based classifier can be more than two orders of magnitude faster than the exact classifier with minor losses in quality.
to:
! Hash-based SVM approximation
We addressed the problem of speeding-up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing [[http://www-sop.inria.fr/members/Alexis.Joly/bmvc_final.pdf|pdf].  Whereas the mainstream work in the field is focused on training classifiers on huge amount of data, less efforts are spent on the counterpart scalability issue: how to apply big trained models efficiently on huge non annotated collections ? In this work, we propose building space-and-time-efficient hash-based classifiers that are applied in a first stage in order to approximate the exact results and filter the hypothesis space. Experiments performed with millions of one-against-one classifiers show that the proposed hash-based classifier can be more than two orders of magnitude faster than the exact classifier with minor losses in quality.
Changed lines 11-12 from:
RMMH is a new hashing function aimed at embedding high dimensional feature spaces in compact and indexable hash codes. Several data dependent hash functions have been proposed recently to closely fit data distribution and provide better selectivity than usual random projections such as LSH. However, improvements occur only for relatively small hash code sizes up to 64 or 128 bits. As discussed in the paper, this is mainly due to the lack of independence between the produced hash functions. RMMH attempts to solve this issue in any kernel space. Rather than boosting the collision probability of close points, our method focus on data scattering. By training purely random splits of the data, regardless the closeness of the training samples, it is indeed possible to generate consistently more independent hash functions. On the other side, the use of large margin classifiers allows to maintain good generalization performances. Experiments show that our new Random Maximum Margin Hashing scheme (RMMH) outperforms four state-of-the-art hashing methods, notably in kernel spaces.
to:
RMMH is a new hashing function aimed at embedding high dimensional feature spaces in compact and indexable hash codes. Several data-dependent hash functions have been proposed recently to closely fit the data distribution and provide better selectivity than usual random projections such as LSH. However, improvements occur only for relatively small hash code sizes, up to 64 or 128 bits. As discussed in the paper, this is mainly due to the lack of independence between the produced hash functions. RMMH attempts to solve this issue in any kernel space. Rather than boosting the collision probability of close points, our method focuses on data scattering. By training purely random splits of the data, regardless of the closeness of the training samples, it is indeed possible to generate consistently more independent hash functions. On the other hand, the use of large margin classifiers makes it possible to maintain good generalization performance. Experiments show that our new Random Maximum Margin Hashing scheme (RMMH) outperforms four state-of-the-art hashing methods, notably in kernel spaces. 
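A minimal sketch of the linear-kernel case is given below, assuming scikit-learn's LinearSVC as the max-margin trainer (the kernelized variant and the actual parameter choices of the paper are not reproduced here; `n_bits`, `m` and the function names are illustrative only): each hash bit is obtained by fitting a large-margin hyperplane on a purely random, balanced labelling of a small sample.

[@
import numpy as np
from sklearn.svm import LinearSVC

def train_rmmh(data, n_bits=32, m=32, seed=0):
    # data: (n_points, dim) numpy array with n_points >= m
    rng = np.random.default_rng(seed)
    hyperplanes = []
    for _ in range(n_bits):
        idx = rng.choice(len(data), size=m, replace=False)
        labels = np.array([1] * (m // 2) + [-1] * (m - m // 2))
        rng.shuffle(labels)                     # random balanced split, regardless of point closeness
        svm = LinearSVC(C=1.0).fit(data[idx], labels)
        hyperplanes.append((svm.coef_[0].copy(), float(svm.intercept_[0])))
    return hyperplanes

def rmmh_code(x, hyperplanes):
    # one bit per trained max-margin hyperplane
    return tuple(int(w @ x + b > 0) for w, b in hyperplanes)
@]

Each bit thus comes from an independent, balanced but otherwise arbitrary labelling of a small sample, which is what makes the resulting hash functions close to independent while the margin keeps them well generalizing.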
Added lines 15-16:
We recently investigated the use of RMMH for efficiently approximating K-NN graphs [[http://dl.acm.org/citation.cfm?id=2324847|pdf]], particularly in distributed environments. We highlighted the importance of balancing issues on the performance of such approaches and showed why the baseline approach using Locality Sensitive Hashing does not perform well.
November 22, 2012, at 04:28 PM by 128.93.176.29 -
Changed lines 7-8 from:

to:
! High-dimensional data hashing
High dimensional data hashing is essential for scaling up and distributing data analysis applications involving feature-rich objects, such as text documents, images or multi-modal entities (scientific observations, events, etc.).  We recently investigated the use of high dimensional hashing methods for efficiently approximating K-NN graphs [[http://dl.acm.org/citation.cfm?id=2324847|pdf]], particularly in distributed environments. We highlighted the importance of balancing issues on the performance of such approaches and show why the baseline approach using Locality Sensitive Hashing does not perform well. Our new KNN-join method is based on RMMH, a hash function family based on randomly trained classifiers that we introduced in 2011. We show that the resulting hash tables are much more balanced and that the number of resulting collisions can be greatly reduced without degrading quality. We further improve the load balancing of our distributed approach by designing a parallelized local join algorithm, implemented within the MapReduce framework. In another work [[http://www-sop.inria.fr/members/Alexis.Joly/bmvc_final.pdf|pdf], we addressed the problem of speeding-up the prediction phase of linear Support Vector Machines via Locality Sensitive Hashing.  Whereas the mainstream work in the field is focused on training classifiers on huge amount of data, less efforts are spent on the counterpart scalability issue: how to apply big trained models efficiently on huge non annotated collections ? In this work, we propose building efficient hash-based classifiers that are applied in a first stage in order to approximate the exact results and filter the hypothesis space. Experiments performed with millions of one-against-one classifiers show that the proposed hash-based classifier can be more than two orders of magnitude faster than the exact classifier with minor losses in quality.
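The principle of the hash-based KNN-join can be sketched in a few lines of Python (a single hash table, brute-force local joins, no load-balancing tricks; the `hash_fn` argument is a placeholder that could be any of the hashing schemes discussed above, and the parameters are toy values, not those of the paper):

[@
import numpy as np
from collections import defaultdict

def approx_knn_graph(data, hash_fn, k=5):
    # 'Map' step: bucket the points by their hash code.
    buckets = defaultdict(list)
    for i, x in enumerate(data):
        buckets[hash_fn(x)].append(i)
    # 'Reduce' step: exact local join inside each bucket, then merge the per-point lists.
    neighbours = defaultdict(dict)
    for ids in buckets.values():
        pts = data[ids]
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        for a, i in enumerate(ids):
            for b, j in enumerate(ids):
                if i != j:
                    neighbours[i][j] = min(d[a, b], neighbours[i].get(j, np.inf))
    return {i: sorted(nbrs, key=nbrs.get)[:k] for i, nbrs in neighbours.items()}
@]

In practice several hash tables are used and the per-bucket joins are distributed, which is precisely where the balancing of the hash function becomes critical.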

November 21, 2012, at 05:34 PM by 128.93.176.29 -
Changed line 1 from:
! Scalable Mining of Visual Objects
to:
! Scalable Mining of Small Visual Objects
November 21, 2012, at 05:33 PM by 128.93.176.29 -
Changed line 2 from:
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications including: salient events detection, relevant patterns discovery in scientific data or simply web browsing through hyper-visual links. Whereas efficient methods now exist for searching rigid objects in large collections, discovering them from scratch is still challenging in terms of scalability, particularly when the targeted objects are rather small. In this work [[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|pdf]], we formally revisited the problem of mining or discovering such objects, and then generalized two kinds of existing methods for probing candidate object seeds: weighted adaptive sampling and hashing based methods. We then introduce a new hashing strategy, working first at the visual level, and then at the geometric level. Experiments conducted on millions of images show that our method outperforms state-of-the-art.\\
to:
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications, including salient event detection, relevant pattern discovery in scientific data, or simply web browsing through hyper-visual links. In this work [[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|pdf]], we formally revisited the problem of mining or discovering such objects, and introduced a new hashing strategy, working first at the visual level and then at the geometric level. Experiments conducted both on the [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos/FlickrBelgaLogos.html|FlickrBelgaLogos]] dataset and on millions of images show the efficiency of our method.\\
November 21, 2012, at 05:30 PM by 128.93.176.29 -
Changed line 1 from:
! Scalable Mining of Small Visual Objects
to:
! Scalable Mining of Visual Objects
November 21, 2012, at 05:30 PM by 128.93.176.29 -
Changed lines 3-4 from:
This method was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 \cite{}. The movie presented during this event is available here:
to:
This method was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 [[http://www-sop.inria.fr/members/Pierre.Letessier/pdf/gcp017-joly.pdf|pdf]]. The movie presented during this event is available here:
November 21, 2012, at 05:28 PM by 128.93.176.29 -
Changed lines 4-6 from:


to:
[[http://www.otmedia.fr/?p=217|http://www.otmedia.fr/wp-content/uploads/2012/11/OTMediaPres001-300x166.jpg]]


November 21, 2012, at 05:27 PM by 128.93.176.29 -
Changed lines 2-4 from:
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications including: salient events detection, relevant patterns discovery in scientific data or simply web browsing through hyper-visual links. Whereas efficient methods now exist for searching rigid objects in large collections, discovering them from scratch is still challenging in terms of scalability, particularly when the targeted objects are rather small. In this work \cite{letessier:hal-00739735}, we formally revisited the problem of mining or discovering such objects, and then generalized two kinds of existing methods for probing candidate object seeds: weighted adaptive sampling and hashing based methods. We then introduce a new hashing strategy, working first at the visual level, and then at the geometric level. Experiments conducted on millions of images show that our method outperforms state-of-the-art.\\
This method was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 \cite{}. The movie presented during this event is available at \url{http://www.otmedia.fr/?p=217}.
to:
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications including: salient events detection, relevant patterns discovery in scientific data or simply web browsing through hyper-visual links. Whereas efficient methods now exist for searching rigid objects in large collections, discovering them from scratch is still challenging in terms of scalability, particularly when the targeted objects are rather small. In this work [[http://www-sop.inria.fr/members/Alexis.Joly/fp021-letessier.pdf|pdf]], we formally revisited the problem of mining or discovering such objects, and then generalized two kinds of existing methods for probing candidate object seeds: weighted adaptive sampling and hashing based methods. We then introduce a new hashing strategy, working first at the visual level, and then at the geometric level. Experiments conducted on millions of images show that our method outperforms state-of-the-art.\\
This method was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 \cite{}. The movie presented during this event is available here:


November 21, 2012, at 05:25 PM by 128.93.176.29 -
Added lines 1-4:
! Scalable Mining of Small Visual Objects
Automatically linking multimedia documents that contain one or several instances of the same visual object has many applications including: salient events detection, relevant patterns discovery in scientific data or simply web browsing through hyper-visual links. Whereas efficient methods now exist for searching rigid objects in large collections, discovering them from scratch is still challenging in terms of scalability, particularly when the targeted objects are rather small. In this work \cite{letessier:hal-00739735}, we formally revisited the problem of mining or discovering such objects, and then generalized two kinds of existing methods for probing candidate object seeds: weighted adaptive sampling and hashing based methods. We then introduce a new hashing strategy, working first at the visual level, and then at the geometric level. Experiments conducted on millions of images show that our method outperforms state-of-the-art.\\
This method was integrated within a visual-based media event detection system in the scope of a French project called the transmedia observatory. It allows the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). The main originality of the detection is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient trans-media events. This work was presented at ACM Multimedia Grand Challenge 2012 \cite{}. The movie presented during this event is available at \url{http://www.otmedia.fr/?p=217}.

October 23, 2012, at 03:21 PM by 193.49.107.56 -
Changed lines 41-42 from:
|| [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/imedia-visualcopyretrieval.mov|http://www-rocq.inria.fr/~ajoly/quicktime.jpeg]]  || [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/visualcopyretrieval_final_384kbit%2016-9INRIA.rm|http://www-sop.inria.fr/members/Alexis.Joly/realplayer.jpeg]]  || [[http://www.dailymotion.com/relevance/search/imedia%2Bcopy/video/x4e0gp_the-visual-copy-retrieval_tech|http://www-sop.inria.fr/members/Alexis.Joly/dailymotion.jpeg]]||
to:
|| [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/imedia-visualcopyretrieval.mov|http://www-sop.inria.fr/members/Alexis.Joly/quicktime.jpeg]]  || [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/visualcopyretrieval_final_384kbit%2016-9INRIA.rm|http://www-sop.inria.fr/members/Alexis.Joly/realplayer.jpeg]]  || [[http://www.dailymotion.com/relevance/search/imedia%2Bcopy/video/x4e0gp_the-visual-copy-retrieval_tech|http://www-sop.inria.fr/members/Alexis.Joly/dailymotion.jpeg]]||
October 23, 2012, at 03:20 PM by 193.49.107.56 -
Changed line 22 from:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-sop.inria.fr/members/Alexis.Joly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is somehow inspired by our previous works on space-filling curve based hashing (see [[http://www-sop.inria.fr/members/Alexis.Joly/ajolyMIR05vf.pdf|this one]] or [[ajoly_TMA07.pdf|this one]] for example) and improves upon
to:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-sop.inria.fr/members/Alexis.Joly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is somehow inspired by our previous works on space-filling curve based hashing (see [[http://www-sop.inria.fr/members/Alexis.Joly/ajolyMIR05vf.pdf|this one]] or [[http://www-sop.inria.fr/members/Alexis.Joly/ajoly_TMA07.pdf|this one]] for example) and improves upon
Changed lines 35-36 from:
I am currently less involved on research issues about CBCD but still strongly implied in benchmarking (more info [[http://www-rocq.inria.fr/~ajoly/index.php?n=Main.ContentBasedCopyDetectionBenchmarking |here]]).
to:
I am currently less involved on research issues about CBCD but still strongly implied in benchmarking (more info [[http://www-sop.inria.fr/members/Alexis.Joly/index.php?n=Main.ContentBasedCopyDetectionBenchmarking |here]]).
Changed lines 41-42 from:
|| [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/imedia-visualcopyretrieval.mov|http://www-rocq.inria.fr/~ajoly/quicktime.jpeg]]  || [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/visualcopyretrieval_final_384kbit%2016-9INRIA.rm|http://www-sop.inria.fr/members/Alexis.Joly/realplayer.jpeg]]  || [[http://www.dailymotion.com/relevance/search/imedia%2Bcopy/video/x4e0gp_the-visual-copy-retrieval_tech|http://www-rocq.inria.fr/~ajoly/dailymotion.jpeg]]||
to:
|| [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/imedia-visualcopyretrieval.mov|http://www-rocq.inria.fr/~ajoly/quicktime.jpeg]]  || [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/visualcopyretrieval_final_384kbit%2016-9INRIA.rm|http://www-sop.inria.fr/members/Alexis.Joly/realplayer.jpeg]]  || [[http://www.dailymotion.com/relevance/search/imedia%2Bcopy/video/x4e0gp_the-visual-copy-retrieval_tech|http://www-sop.inria.fr/members/Alexis.Joly/dailymotion.jpeg]]||
Changed lines 47-48 from:
%center% http://www-rocq.inria.fr/~ajoly/watch.jpg
to:
%center% http://www-sop.inria.fr/members/Alexis.Joly/watch.jpg
Changed lines 52-53 from:
%center% http://www-rocq.inria.fr/~ajoly/dipoles.jpg
to:
%center% http://www-sop.inria.fr/members/Alexis.Joly/dipoles.jpg
Changed line 58 from:
%center% %width=260% http://www-rocq.inria.fr/~ajoly/images/femme_harris648.jpg http://www-rocq.inria.fr/~ajoly/images/femme_rares648.jpg
to:
%center% %width=260% http://www-sop.inria.fr/members/Alexis.Joly/images/femme_harris648.jpg http://www-sop.inria.fr/members/Alexis.Joly/images/femme_rares648.jpg
October 23, 2012, at 03:19 PM by 193.49.107.56 -
Changed line 22 from:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is somehow inspired by our previous works on space-filling curve based hashing (see [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf|this one]] or [[http://www-rocq.inria.fr/~ajoly/ajoly_TMA07.pdf|this one]] for example) and improves upon
to:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-sop.inria.fr/members/Alexis.Joly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is somehow inspired by our previous works on space-filling curve based hashing (see [[http://www-sop.inria.fr/members/Alexis.Joly/ajolyMIR05vf.pdf|this one]] or [[ajoly_TMA07.pdf|this one]] for example) and improves upon
Changed lines 31-32 from:
The last research paper somehow summarizing my work on this topic: [[http://www-rocq.inria.fr/~ajoly/ajolyTMA07.pdf| [TMA07] ]]
to:
The last research paper somehow summarizing my work on this topic: [[http://www-sop.inria.fr/members/Alexis.Joly/ajolyTMA07.pdf| [TMA07] ]]
Changed lines 41-42 from:
|| [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/imedia-visualcopyretrieval.mov|http://www-rocq.inria.fr/~ajoly/quicktime.jpeg]]  || [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/visualcopyretrieval_final_384kbit%2016-9INRIA.rm|http://www-rocq.inria.fr/~ajoly/realplayer.jpeg]]  || [[http://www.dailymotion.com/relevance/search/imedia%2Bcopy/video/x4e0gp_the-visual-copy-retrieval_tech|http://www-rocq.inria.fr/~ajoly/dailymotion.jpeg]]||
to:
|| [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/imedia-visualcopyretrieval.mov|http://www-rocq.inria.fr/~ajoly/quicktime.jpeg]]  || [[ftp://ftp.inria.fr/scratch/multimedia/MUSCLE/visualcopyretrieval_final_384kbit%2016-9INRIA.rm|http://www-sop.inria.fr/members/Alexis.Joly/realplayer.jpeg]]  || [[http://www.dailymotion.com/relevance/search/imedia%2Bcopy/video/x4e0gp_the-visual-copy-retrieval_tech|http://www-rocq.inria.fr/~ajoly/dailymotion.jpeg]]||
October 23, 2012, at 03:18 PM by 193.49.107.56 -
Changed lines 7-8 from:
I did work on logo retrieval within [[http://vitalas.ercim.org/|VITALAS]] European project as an application of large scale local features matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, called [[http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos]], which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
to:
I worked on logo retrieval within the [[http://vitalas.ercim.org/|VITALAS]] European project as an application of large-scale local feature matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos/BelgaLogos.html|BelgaLogos]], in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
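The a contrario principle behind the thresholding can be illustrated with a toy binomial background model (the statistical model of the paper is richer; the function names, `p0` and `eps` below are illustrative assumptions only): a retrieved image is added to the expanded query only if its number of matched features makes the expected number of false alarms (NFA) fall below a threshold.

[@
from math import comb

def binom_tail(k, n, p):
    # P[X >= k] for X ~ Binomial(n, p)
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def a_contrario_expansion(match_counts, n_query_features, p0, n_tested, eps=1.0):
    # Keep an image for query expansion only if its k matched features are unlikely
    # under the background model: NFA = n_tested * P[X >= k] <= eps.
    expanded = []
    for image_id, k in match_counts.items():
        if n_tested * binom_tail(k, n_query_features, p0) <= eps:
            expanded.append(image_id)
    return expanded
@]

For instance, with 200 query features and a background match probability of 0.02, an image with a few dozen matched features would be kept while one with only four matches (roughly the chance level) would be rejected.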
Changed lines 18-20 from:
%width=280% http://www-rocq.inria.fr/~ajoly/boosting.jpg http://www-rocq.inria.fr/~ajoly/boosting1.jpg

to:
%width=280% http://www-sop.inria.fr/members/Alexis.Joly/boosting.jpg http://www-sop.inria.fr/members/Alexis.Joly/boosting1.jpg

October 23, 2012, at 03:17 PM by 193.49.107.56 -
Changed lines 13-15 from:
Visit [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos.html|BelgaLogos]] home page to get the evaluation dataset

to:
Visit [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos/BelgaLogos.html|BelgaLogos]] home page to get the evaluation dataset

October 23, 2012, at 03:16 PM by 193.49.107.56 -
Changed lines 13-15 from:
Visit [[http://www-sop.inria.fr/members/Alexis.Joly/belga-logo.html|BelgaLogos]] home page to get the evaluation dataset

to:
Visit [[http://www-sop.inria.fr/members/Alexis.Joly/BelgaLogos.html|BelgaLogos]] home page to get the evaluation dataset

October 23, 2012, at 03:15 PM by 193.49.107.56 -
Changed lines 11-15 from:
%width=280% [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]

Visit [[http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos]] home page to get the evaluation dataset

to:
%width=280% [[http://www-sop.inria.fr/members/Alexis.Joly/cocacola.swf | http://www-sop.inria.fr/members/Alexis.Joly/vitalas-coca.jpg ]]

Visit [[http://www-sop.inria.fr/members/Alexis.Joly/belga-logo.html|BelgaLogos]] home page to get the evaluation dataset

October 03, 2011, at 12:28 PM by 128.93.176.20 -
Changed lines 4-5 from:
%width=1700%[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
to:
%width=500%[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
October 03, 2011, at 12:28 PM by 128.93.176.20 -
Changed lines 4-5 from:
%width=170%[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
to:
%width=1700%[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
October 03, 2011, at 12:28 PM by 128.93.176.20 -
Changed lines 4-5 from:
[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
to:
%width=170%[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
October 03, 2011, at 12:27 PM by 128.93.176.20 -
Added lines 4-5:
[[http://deuxalex.free.fr/rmmh-cvpr-certified-IEEE-eXpress.pdf|http://deuxalex.free.fr/rmmh-fig.jpg]]
October 03, 2011, at 11:17 AM by 128.93.176.20 -
Added lines 1-3:
! Random Maximum Margin Hashing
RMMH is a new hashing function aimed at embedding high dimensional feature spaces in compact and indexable hash codes. Several data dependent hash functions have been proposed recently to closely fit data distribution and provide better selectivity than usual random projections such as LSH. However, improvements occur only for relatively small hash code sizes up to 64 or 128 bits. As discussed in the paper, this is mainly due to the lack of independence between the produced hash functions. RMMH attempts to solve this issue in any kernel space. Rather than boosting the collision probability of close points, our method focus on data scattering. By training purely random splits of the data, regardless the closeness of the training samples, it is indeed possible to generate consistently more independent hash functions. On the other side, the use of large margin classifiers allows to maintain good generalization performances. Experiments show that our new Random Maximum Margin Hashing scheme (RMMH) outperforms four state-of-the-art hashing methods, notably in kernel spaces.

August 10, 2010, at 02:26 PM by 128.93.24.22 -
Added line 24:
August 10, 2010, at 02:25 PM by 128.93.24.22 -
Changed lines 21-23 from:
%width=280% http://www-rocq.inria.fr/~ajoly/hashing.jpg

to:
%center% %width=280% http://www-rocq.inria.fr/~ajoly/hashing.jpg

August 10, 2010, at 02:25 PM by 128.93.24.22 -
Changed lines 21-23 from:


to:
%width=280% http://www-rocq.inria.fr/~ajoly/hashing.jpg

August 10, 2010, at 02:24 PM by 128.93.24.22 -
Changed line 17 from:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is inspired by our previous works on hashing (see [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf|this one]] or [[http://www-rocq.inria.fr/~ajoly/ajoly_TMA07.pdf|this one]] for example) and improves upon
to:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is somehow inspired by our previous works on space-filling curve based hashing (see [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf|this one]] or [[http://www-rocq.inria.fr/~ajoly/ajoly_TMA07.pdf|this one]] for example) and improves upon
August 10, 2010, at 02:22 PM by 128.93.24.22 -
Changed lines 20-21 from:

to:
We hope to provide an open-source version in the next few months...


August 10, 2010, at 02:21 PM by 128.93.24.22 -
Changed line 17 from:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. Multi-probe LSH methods are built on the well-known LSH technique, but they intelligently probe multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous works on hashing (see [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf|this one]] or [[http://www-rocq.inria.fr/~ajoly/ajoly_TMA07.pdf|this one]] for example) and improves upon
to:
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. It is built on the well-known LSH technique, but in order to reduce memory usage, it intelligently probes multiple buckets that are likely to contain query results in a hash table. This method is inspired by our previous works on hashing (see [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf|this one]] or [[http://www-rocq.inria.fr/~ajoly/ajoly_TMA07.pdf|this one]] for example) and improves upon
Changed lines 19-31 from:
LSH. Whereas these methods are based on likelihood criteria
that a given bucket contains query results, we define a more
reliable a posteriori model taking account some prior about
the queries and the searched objects. This prior knowledge
allows a better quality control of the search and a more
accurate selection of the most probable buckets. We implemented a nearest neighbors search based on this paradigm
and performed experiments on different real visual features
datasets. We show that our a posteriori scheme outperforms
other multi-probe LSH while offering a better quality control. Comparisons to the basic LSH technique show that our
method allows consistent improvements both in space and
time efficiency
.

to:
LSH.

August 10, 2010, at 02:18 PM by 128.93.24.22 -
Changed lines 16-17 from:
! Multi-probe locality sensitive hashing
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. Multi-probe LSH is built on the well-known LSH technique, but it intelligently probes multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous work on probabilistic similarity search structures and improves upon
to:
! Multidimensional Hashing
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. Multi-probe LSH methods are built on the well-known LSH technique, but they intelligently probe multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous works on hashing (see [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf|this one]] or [[http://www-rocq.inria.fr/~ajoly/ajoly_TMA07.pdf|this one]] for example) and improves upon
August 10, 2010, at 02:11 PM by 128.93.24.22 -
Changed lines 56-57 from:
! Density-based selection of local features [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf| [MIR05] ]]
'''Keywords''': image retrieval, local features, discriminant, density estimation\\
to:
* Density-based selection of local features
Changed lines 58-59 from:
Local features are well-suited to content-based image retrieval because of their locality, their local uniqueness and their high information content [[#Miko05 | [4] ]]. However, as they are selected only according to the local information content in the image, there is no guaranty that they will be distinctive in a large set of images. A local feature corresponding to a high saliency in the image can be highly redundant in some specific databases, such as the TV news database stored at NII in which textual characters are extremely frequent. To overcome this issue, we propose [[#jolymir05| [5] ]] to select relevant local features directly according to their discrimination power in a specific set of images. By computing the density of the local features in a source database with a new fast non parametric density estimation technique, it is indeed possible to select quickly the most ''rare'' local features in a large set of images. Figure illustrates the difference between the 20 most salient points of an image and the 20 most rare points according to their density in a large image database. Currently, we are also looking at selecting local features according to their density in a single image or in a class of images, as done for textual features with TF/IDF techniques.
to:
As common local features are selected only according to the local information content in the image, there is no guarantee that they will be distinctive in a large set of images. To overcome this issue, I introduced a new method [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf| [MIR05] ]] to select relevant local features directly according to their discrimination power in a specific set of images. By computing the density of the local features in a source database with an efficient non-parametric density estimation technique, it is indeed possible to quickly select the most ''rare'' local features in a large set of images.
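As a purely didactic stand-in for the fast density estimation technique, the same selection can be emulated with a brute-force k-NN density ranking (numpy, toy parameters; the function name and defaults are made up for illustration): features whose k-th nearest database descriptor is far away lie in low-density regions and are therefore the rarest.

[@
import numpy as np

def select_rare_features(query_descs, db_descs, n_keep=20, k=10):
    # Brute-force k-NN density proxy: a large distance to the k-th nearest
    # database descriptor means a low local density, i.e. a 'rare' feature.
    q = np.asarray(query_descs, dtype=float)
    db = np.asarray(db_descs, dtype=float)
    dists = np.linalg.norm(q[:, None, :] - db[None, :, :], axis=2)
    kth = np.sort(dists, axis=1)[:, k - 1]
    return np.argsort(-kth)[:n_keep]            # indices of the rarest query features
@]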
Changed lines 61-67 from:
%center% '''left: 20 most salient points - right: 20 most rare points'''

[[#Dork03]][1] "Selection of Scale-Invariant Parts for Object Class Recognition", G. Dorko, C. Schmid, IEEE Int. Conf. on Computer Vision, vol. 1, pp. 634--640, 2003.\\
[[#low04]][2] "Distinctive image features from scale-invariant keypoints", D. Lowe, Int. Journal of Computer Vision, vol. 60, no. 2, pp. 91--110, 2004.\\
[[#joly05]][[http://www-rocq.inria.fr/~ajoly/ajolyICIP05.pdf| [3] ]] "Content-based video copy detection in large databases: A local fingerprints  statistical similarity search approach", A. Joly, C. Frélicot and O. Buisson, in Proceedings of the Int. Conf. on Image Processing, 2005.\\
[[#Miko05]][4] K. Mikolajczyk, C. Schmid. "A performance evaluation of local descriptors," cvpr, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 1615--1630, 2005.\\
[[#jolymir05]][[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf| [5] ]] "Discriminant Local Features Selection using Efficient Density Estimation in a Large Database", A. Joly and O. Buisson, ACM Int. Workshop on Multimedia Information Retrieval, invited paper, 2005.
to:
%center% '''left: 20 most salient points - right: 20 most rare points'''
August 10, 2010, at 02:06 PM by 128.93.24.22 -
Changed lines 16-17 from:
! Multi-probe locality sensitive hashing [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08] ]]
We developed, jointly with Olivier Buisson at INA, a new similarity search structure dedicated to high dimensional features. Multi-probe LSH is built on the well-known LSH technique, but it intelligently probes multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous work on probabilistic similarity search structures and improves upon
to:
! Multi-probe locality sensitive hashing
We developed, jointly with Olivier Buisson at INA, a new similarity search structure [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08AMP-LSH] ]] dedicated to high dimensional features. Multi-probe LSH is built on the well-known LSH technique, but it intelligently probes multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous work on probabilistic similarity search structures and improves upon
August 10, 2010, at 02:05 PM by 128.93.24.22 -
Changed lines 13-14 from:
%width=280% http://www-rocq.inria.fr/~ajoly/boosting.jpg
to:
%width=280% http://www-rocq.inria.fr/~ajoly/boosting.jpg http://www-rocq.inria.fr/~ajoly/boosting1.jpg

August 10, 2010, at 02:04 PM by 128.93.24.22 -
Changed lines 13-14 from:
http://www-rocq.inria.fr/~ajoly/boosting.jpg
to:
%width=280% http://www-rocq.inria.fr/~ajoly/boosting.jpg
August 10, 2010, at 02:04 PM by 128.93.24.22 -
Changed lines 13-14 from:
to:
http://www-rocq.inria.fr/~ajoly/boosting.jpg
August 10, 2010, at 02:01 PM by 128.93.24.22 -
Changed lines 11-13 from:
! Interactive objects retrieval with efficient boosting [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM09] ]]
I developed jointly with my PhD student
[[http://www-roc.inria.fr/imedia/minicv-litayem.html|Saloua Litayem]]
to:
! Interactive objects retrieval with efficient boosting
We developed jointly with my PhD student [[http://www-roc.inria.fr/imedia/minicv-litayem.html|Saloua Litayem]] an efficient boosting method [[http://portal.acm.org/ft_gateway.cfm?id=1631352&type=pdf&coll=GUIDE&dl=GUIDE&CFID=100068686&CFTOKEN=80220619 | [ACM09boosting] ]] to apply trained local-feature-based classifiers in sublinear prediction time. This technique allows online relevance feedback or active learning on image regions.
August 10, 2010, at 01:58 PM by 128.93.24.22 -
Changed lines 4-5 from:
Take a look the the flash demo:
to:
Take a look at the flash demo:
Changed lines 8-13 from:



NEW!!
Visit [[http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos]] home page

to:
Visit [[http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos]] home page to get the evaluation dataset

August 10, 2010, at 01:56 PM by 128.93.24.22 -
Changed lines 4-9 from:
Take a look the the video demo:
%width=180% [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



to:
Take a look the the flash demo:
%width=280% [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



August 10, 2010, at 01:56 PM by 128.93.24.22 -
Changed lines 5-9 from:
%width=120% [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



to:
%width=180% [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



August 10, 2010, at 01:51 PM by 128.93.24.22 -
Changed lines 4-8 from:
Look at the video demo [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



to:
Take a look the the video demo:
%width=120%
[[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



August 10, 2010, at 01:51 PM by 128.93.24.22 -
Changed lines 3-6 from:
 


to:

Look at the video demo [[http://www-rocq.inria.fr/~ajoly/cocacola.swf | http://www-rocq.inria.fr/~ajoly/vitalas-coca.jpg ]]



August 10, 2010, at 01:49 PM by 128.93.24.22 -
Changed line 2 from:
I did work on logo retrieval within [[http://vitalas.ercim.org/|VITALAS]] European project as an application of large scale local features matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, called BelgaLogos, which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
to:
I did work on logo retrieval within [[http://vitalas.ercim.org/|VITALAS]] European project as an application of large scale local features matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, called [[http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos]], which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
August 10, 2010, at 01:48 PM by 128.93.24.22 -
Changed line 2 from:
I did work on logo retrieval within VITALAS project as an application of large scale local features matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, called BelgaLogos, which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
to:
I did work on logo retrieval within [[http://vitalas.ercim.org/|VITALAS]] European project as an application of large scale local features matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, called BelgaLogos, which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
August 10, 2010, at 01:47 PM by 128.93.24.22 -
Changed lines 1-4 from:
! Logo retrieval with a contrario visual query expansion [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09] ]]


to:
! Logo retrieval with a contrario visual query expansion
I did work on logo retrieval within VITALAS project as an application of large scale local features matching. I introduced a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09logo] ]]. I also created a new challenging dataset, called BelgaLogos, which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios.
 



Changed lines 11-12 from:

to:
I developed jointly with my PhD student [[http://www-roc.inria.fr/imedia/minicv-litayem.html|Saloua Litayem]]
August 10, 2010, at 01:42 PM by 128.93.24.22 -
Changed lines 1-5 from:

! Logo retrieval with a contrario visual query expansion [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM09] ]]


to:
! Logo retrieval with a contrario visual query expansion [[http://portal.acm.org/ft_gateway.cfm?id=1631361&type=pdf&coll=GUIDE&dl=GUIDE&CFID=97284134&CFTOKEN=72972342 | [ACM09] ]]


August 06, 2010, at 04:15 PM by 128.93.24.22 -
Changed lines 1-2 from:
%rfloat% http://www-rocq.inria.fr/~ajoly/arbre.jpeg
to:
Changed lines 7-9 from:
NEW!! Visit [http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos] home page

to:
NEW!! Visit [[http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos]] home page

Changed lines 7-9 from:
NEW!! Go to BelgaLogos home page

to:
NEW!! Visit [http://www-roc.inria.fr/imedia/belga-logo.html|BelgaLogos] home page

Added lines 1-2:
%rfloat% http://www-rocq.inria.fr/~ajoly/arbre.jpeg
Added line 6:
Added lines 32-33:
My [+[[Main.MyPhdResearchActivitiesAtINA|PhD at INA]]+] is hopefully also still of interest ;-)
Changed lines 41-43 from:
            

to:
Added lines 53-65:
! Density-based selection of local features [[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf| [MIR05] ]]
'''Keywords''': image retrieval, local features, discriminant, density estimation\\
This work started in collaboration with the NII (National Institute of Japan) within the scope of my visit in Tokyo (july 2005).
Local features are well-suited to content-based image retrieval because of their locality, their local uniqueness and their high information content [[#Miko05 | [4] ]]. However, as they are selected only according to the local information content in the image, there is no guaranty that they will be distinctive in a large set of images. A local feature corresponding to a high saliency in the image can be highly redundant in some specific databases, such as the TV news database stored at NII in which textual characters are extremely frequent. To overcome this issue, we propose [[#jolymir05| [5] ]] to select relevant local features directly according to their discrimination power in a specific set of images. By computing the density of the local features in a source database with a new fast non parametric density estimation technique, it is indeed possible to select quickly the most ''rare'' local features in a large set of images. Figure illustrates the difference between the 20 most salient points of an image and the 20 most rare points according to their density in a large image database. Currently, we are also looking at selecting local features according to their density in a single image or in a class of images, as done for textual features with TF/IDF techniques.

%center% %width=260% http://www-rocq.inria.fr/~ajoly/images/femme_harris648.jpg http://www-rocq.inria.fr/~ajoly/images/femme_rares648.jpg
%center% '''left: 20 most salient points - right: 20 most rare points'''

[[#Dork03]][1] "Selection of Scale-Invariant Parts for Object Class Recognition", G. Dorko, C. Schmid, IEEE Int. Conf. on Computer Vision, vol. 1, pp. 634--640, 2003.\\
[[#low04]][2] "Distinctive image features from scale-invariant keypoints", D. Lowe, Int. Journal of Computer Vision, vol. 60, no. 2, pp. 91--110, 2004.\\
[[#joly05]][[http://www-rocq.inria.fr/~ajoly/ajolyICIP05.pdf| [3] ]] "Content-based video copy detection in large databases: A local fingerprints  statistical similarity search approach", A. Joly, C. Frélicot and O. Buisson, in Proceedings of the Int. Conf. on Image Processing, 2005.\\
[[#Miko05]][4] K. Mikolajczyk, C. Schmid. "A performance evaluation of local descriptors," cvpr, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 1615--1630, 2005.\\
[[#jolymir05]][[http://www-rocq.inria.fr/~ajoly/ajolyMIR05vf.pdf| [5] ]] "Discriminant Local Features Selection using Efficient Density Estimation in a Large Database", A. Joly and O. Buisson, ACM Int. Workshop on Multimedia Information Retrieval, invited paper, 2005.
Added lines 1-9:
! Logo retrieval with a contrario visual query expansion [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM09] ]]


NEW!! Go to BelgaLogos home page


! Interactive objects retrieval with efficient boosting [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM09] ]]

Deleted lines 49-52:


! Geometric consistency of local descriptors
Enhancing the performance of local features by using their geometric distribution or their relative positions is still a challenge. We have shown that in the copy detection scenario, the robust estimation of a global geometric transformation model after the search is widely profitable to improve the discrimination of the detection. However, for other scenarios, using the geometry remains a challenging task: Including the geometric distribution in the descriptor itself often leads to a lake of robustness during the search of similar local features whereas post-processing techniques are generally highly time consuming and thus limited to very small data sets. Moreover, in most of them, the geometric consistency is limited to rigid transformation models which do not allow to enforce the matching when two geometric distributions are dependent but not linearely linked. We are currently investigating the use of non parametric geometric consistency measurements such as mutual information and robust correlation ratio and we plane to combine them with some robust local geometric properties that could be included in the descriptor itself in order to limit the number of matches during the second step.
Added lines 1-16:
! Multi-probe locality sensitive hashing [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08] ]]
We developed, jointly with Olivier Buisson at INA, a new similarity search structure dedicated to high dimensional features. Multi-probe LSH is built on the well-known LSH technique, but it intelligently probes multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous work on probabilistic similarity search structures and improves upon
recent theoretical work on multi-probe and query adaptive
LSH. Whereas these methods are based on likelihood criteria
that a given bucket contains query results, we define a more
reliable a posteriori model taking account some prior about
the queries and the searched objects. This prior knowledge
allows a better quality control of the search and a more
accurate selection of the most probable buckets. We implemented a nearest neighbors search based on this paradigm
and performed experiments on different real visual features
datasets. We show that our a posteriori scheme outperforms
other multi-probe LSH while offering a better quality control. Comparisons to the basic LSH technique show that our
method allows consistent improvements both in space and
time efficiency.
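A toy Python sketch of the a posteriori probing idea follows; it assumes scipy and numpy, a simple Gaussian prior on the neighbour's projections, and an exhaustive enumeration of the perturbed buckets that is only tractable for a handful of hash dimensions. It is not the actual model of the paper, and all parameter names (`width`, `sigma`, `n_probes`, `radius`) are illustrative.

[@
import heapq, itertools
import numpy as np
from scipy.stats import norm

def posterior_bucket_order(q, proj, width, sigma, n_probes=8, radius=1):
    # Rank candidate buckets by the probability that the searched neighbour falls
    # in them, under a Gaussian prior on its projections centred on the query's.
    y = proj.T @ q                              # one projected value per hash dimension
    base = np.floor(y / width).astype(int)      # the query's own bucket coordinates
    per_dim = []
    for d in range(len(y)):
        row = []
        for shift in range(-radius, radius + 1):
            lo = (base[d] + shift) * width
            p = norm.cdf(lo + width, loc=y[d], scale=sigma) - norm.cdf(lo, loc=y[d], scale=sigma)
            row.append((shift, p))
        per_dim.append(row)
    scored = []                                 # exhaustive enumeration: fine for few hash dimensions
    for combo in itertools.product(*per_dim):
        score = float(np.prod([p for _, p in combo]))
        bucket = tuple(int(base[d] + s) for d, (s, _) in enumerate(combo))
        scored.append((score, bucket))
    return [bucket for _, bucket in heapq.nlargest(n_probes, scored)]
@]

The buckets returned first are the ones most likely to contain the true neighbour given the prior, which is the quality-control knob that plain likelihood-based multi-probe schemes lack.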

Changed lines 41-55 from:
! Multi-probe locality sensitive hashing [[http://www-rocq.inria.fr/~ajoly/pmh-revised.pdf | [ACM08] ]]
We developed, jointly with Olivier Buisson at INA, a new similarity search structure dedicated to high dimensional features. Multi-probe LSH is built on the well-known LSH technique, but it intelligently probes multiple buckets that are likely to contain query results in a hash table. Our method is inspired by our previous work on probabilistic similarity search structures and improves upon
recent theoretical work on multi-probe and query adaptive
LSH. Whereas these methods are based on likelihood criteria
that a given bucket contains query results, we define a more
reliable a posteriori model taking account some prior about
the queries and the searched objects. This prior knowledge
allows a better quality control of the search and a more
accurate selection of the most probable buckets. We implemented a nearest neighbors search based on this paradigm
and performed experiments on different real visual features
datasets. We show that our a posteriori scheme outperforms
other multi-probe LSH while offering a better quality control. Comparisons to the basic LSH technique show that our
method allows consistent improvements both in space and
time efficiency.

to:


