Interactive Search by Visual Content

Nozha Boujemaa

INRIA Rocquencourt, Projet IMEDIA

Contact email: Nozha.Boujemaa@inria.fr

Abstract


We present the image retrieval research activities of the IMEDIA group, which are integrated in our search engine IKONA, based on a client/server architecture. In generic image databases, where image content is heterogeneous and no ground truth is available or obvious, visual appearance is described by a combination of general image signatures such as color, texture and shape. We have developed compact and efficient image signatures that capture the spatial organization of color. Examples of such databases include stock photography and the World Wide Web. The user is assumed to be an average user, not an expert. To deal with generic databases, IKONA includes a relevance feedback technique that enables users to refine their query by specifying, over time, a set of relevant and a set of non-relevant images. Relevance feedback methods use the information the user supplies to the system in an attempt to "guess" the user's intentions, making it easier to find what he or she wants. It is an interactive way to reduce the semantic gap in such low-level search by content. In specific image databases, we exploit the available ground truth and tune the models or parameter ranges accordingly, maximizing system efficiency. We have developed specific signatures for face detection and recognition and for fingerprint identification. Region-based queries are being integrated into IKONA. In this mode, the user selects a part of an image and the system searches for images (or parts of images) that are visually similar to the selected part. This interaction allows the user to indicate to the system which part of the image, or which particular object, is of interest. Since the query is focused, the system response is better matched to the user's target, because the signature of the image background is not taken into account. We have developed both segmentation-based methods and point-of-interest methods to achieve such partial queries.
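To illustrate what a signature capturing the spatial organization of color might look like, here is a minimal sketch: the image is cut into a grid of cells and a normalized color histogram is computed per cell, so that the same colors in a different layout yield a different signature. This is an illustrative toy, not IMEDIA's actual descriptor; the grid size and bin counts are arbitrary choices.

```python
import numpy as np

def spatial_color_signature(image, grid=(2, 2), bins=4):
    """Toy spatial-color signature (illustrative sketch only):
    per-cell RGB histograms, normalized and concatenated."""
    h, w, _ = image.shape
    gh, gw = grid
    parts = []
    for i in range(gh):
        for j in range(gw):
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogramdd(cell.reshape(-1, 3).astype(float),
                                     bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            # Normalize each cell histogram by its pixel count.
            parts.append(hist.ravel() / cell[..., 0].size)
    return np.concatenate(parts)
```

Two signatures can then be compared with any standard histogram distance (L1, L2, chi-square); because the cells are ordered, the comparison is implicitly sensitive to where each color appears.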
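The relevance-feedback loop described above can be sketched with a classic Rocchio-style update, in which the query signature is pulled toward the mean of the images the user marked relevant and pushed away from the mean of the non-relevant ones. This is a generic textbook formulation given for illustration, not IKONA's own relevance-feedback algorithm; the weights alpha, beta, gamma are conventional defaults.

```python
import numpy as np

def rocchio_update(query, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.25):
    """One relevance-feedback iteration (Rocchio-style sketch):
    move the query signature toward relevant examples and away
    from non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant) > 0:
        q = q + beta * np.mean(relevant, axis=0)
    if len(non_relevant) > 0:
        q = q - gamma * np.mean(non_relevant, axis=0)
    return q
```

Iterating this update over several rounds of user feedback progressively refines the query, which is the mechanism by which such systems narrow the semantic gap without any textual annotation.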
While text indexing is ubiquitous, it is often limited, tedious and subjective for describing image content. Visual content signatures are objective but carry no semantics. Combining text and image features for indexing and retrieval is therefore a very promising area of interest for the IMEDIA team. As a first step, we work on keyword propagation based on visual similarity. For example, if an image database has been partially annotated with keywords, IKONA can suggest a number of keywords for a non-annotated image, together with their relevance. Further research on keyword propagation, semantic concept search and hybrid text-image retrieval is being carried out. We will present applications to a generic photo gallery, a specific face database, and criminal investigation department applications.
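One simple way to realize keyword propagation of this kind is nearest-neighbour voting: an unannotated image inherits keywords from its visually closest annotated neighbours, with each suggestion weighted by proximity. The sketch below assumes Euclidean distance between signature vectors and hypothetical names throughout; it is not the method actually used in IKONA.

```python
import numpy as np
from collections import Counter

def propagate_keywords(signature, annotated, k=3):
    """Suggest (keyword, relevance) pairs for an unannotated image
    by distance-weighted voting among its k visually nearest
    annotated neighbours.  `annotated` is a list of
    (signature, keyword_list) pairs; names are hypothetical."""
    scored = [(np.linalg.norm(np.asarray(signature) - np.asarray(s)), kws)
              for s, kws in annotated]
    scored.sort(key=lambda t: t[0])
    votes = Counter()
    for dist, kws in scored[:k]:
        for kw in kws:
            votes[kw] += 1.0 / (1.0 + dist)  # closer neighbours weigh more
    total = sum(votes.values())
    return [(kw, v / total) for kw, v in votes.most_common()]
```

The returned relevance scores sum to one, so they can be presented directly to the user as ranked suggestions, matching the behaviour described above where IKONA proposes keywords and their relevance for a non-annotated image.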