Cognitive Vision: video understanding, scene understanding, event recognition, behaviour analysis, activity monitoring, multi-sensor fusion, multimedia interpretation,
Computer Vision: video processing, mobile object perception and tracking, motion analysis, object detection, pattern recognition,
Artificial Intelligence: knowledge-based systems, spatio-temporal reasoning, machine learning, scenario modelling, context representation, uncertainty handling, knowledge acquisition, ontology,
Autonomous Systems: real-time systems, system evaluation, parameter tuning, system configuration, system design, 3D visualisation,
Miscellaneous: applied research.
François Brémond is a Research Director (DR1) at INRIA Sophia Antipolis. He created the STARS team on 1 January 2012, having previously headed the PULSAR INRIA team since September 2009. He obtained his Master's degree in 1992 from ENS Lyon. He has conducted research in video understanding since 1993, both at Sophia Antipolis and at USC (University of Southern California), Los Angeles. In 1997 he obtained his PhD degree from INRIA in video understanding, and he pursued his research as a post-doctoral fellow at USC on the interpretation of videos taken from UAVs (Unmanned Airborne Vehicles) within the DARPA VSAM (Visual Surveillance and Activity Monitoring) project. In 2007 he obtained his HDR degree (Habilitation à Diriger des Recherches) from Nice University on scene understanding: perception, multi-sensor fusion, spatio-temporal reasoning and activity recognition. He is a co-founder of Keeneo, Ekinnox and Neosensys, three companies in intelligent video monitoring and business intelligence. He also co-founded the CoBTek team of Nice University on 1 January 2012 with P. Robert from Nice Hospital, studying behavioural disorders in older adults suffering from dementia.
He designs and develops generic systems for dynamic scene interpretation. The targeted class of applications is the automatic interpretation of indoor and outdoor scenes observed by sensors, in particular monocular colour cameras. These systems detect and track mobile objects, which can be either humans or vehicles, and recognize their behaviours. He is particularly interested in bridging the gap between sensor information (pixel level) and behaviour recognition (semantic level). François Brémond is author or co-author of more than 200 scientific papers published in international journals and conferences on video understanding. He is a reviewer for several international journals (CVIU, IJPRAI, IJHCS, PAMI, AIJ, Eurasip JASP, ...) and conferences (CVPR, ICCV, AVSS, VS, ICVS, ...). He has (co-)supervised 18 PhD theses. He is an EC INFSO and French ANR expert for reviewing projects. He has taught numerical classification at Nice University and video understanding at Master's level in an engineering school.
He has participated in 12 European projects (Esprit, ITEA, FP6, FP7: PASSWORDS, ADVISOR, AVITRACK, SERKET, CARETAKER, CANTATA, COFRIEND, VICOMO, VANAHEIM, SUPPORT, DEM@CARE), one DARPA project, 12 French projects (ANR, DGE, Prédit, TechnoVision, PACA, CG06, ...), several industrial research contracts (Bull, Vigitec, SNCF, RATP, ALSTOM, STMicroElectronics, Thales, Keeneo, LinkCareServices, Neosensys, ...) and several international cooperations (USA, Taiwan, UK, Belgium) in video understanding. For instance, he has succeeded in recognizing a large variety of scenarios in different applications: fighting, abandoned luggage, graffiti, fraud, crowd behaviour in metro stations, in streets and onboard trains, aircraft arrival, aircraft refuelling, luggage loading/unloading on airport aprons, bank attacks in bank agencies, access control in buildings, office behaviour monitoring for ambient intelligence, older adult activity monitoring for homecare applications, and wasp monitoring for biological applications. He has also participated in a series of ARDA workshops to build an ontology of video events.
See other documents:
- CV, version in French (2015)
- Long CV, version in English (2018)
- A summary of my research activities (2010) and my research program for the coming years.
- A complete list of my publications (.doc) up to 2009 and a selection of abstracts of my main papers.
- Short CV, version 2017 (1 page)
Research areas:
- Video sequence analysis: detection and tracking of mobile objects from networks of fixed monocular cameras, object classification, person detection, human posture detection.
- Multi-sensor scene interpretation: fusion of information from videos and environmental sensors, uncertainty handling.
- Activity recognition: spatio-temporal reasoning, recognition of complex events and behaviours, computation of a person's interactions with equipment, knowledge representation, scene context modelling, user scenario modelling, ontology of daily activities.
- System generation: program supervision, supervised (or unsupervised) system evaluation.
- Learning: learning of system parameters, of a scene's context, of scenario models, and of the evolution of behavioural profiles.
- Activity Recognition using an ontology-based language (by Carlos Crispim)
- Supervised Action Recognition (by Piotr Bilinski)
- Long term Activity Mining or Unsupervised Activity Discovery
- Software Tool for Video Processing and Performance Evaluation
- Human Re-identification across a camera network (by Slawomir Bak)
- Online Adaptive Neural Classifier for Robust Tracking
- the GER'HOME project on Elderly Monitoring
- the CARETAKER project for Knowledge Discovery
- the RATP Project on metro access control
- STMicroelectronics for Human Posture Recognition
- the ETISEO project for Video Surveillance Performance Evaluation
- the AVITRACK project for airport apron monitoring
- Trichogramma (animal) monitoring
- the CASSIOPEE project for Bank Agency Monitoring
- the European Project ADVISOR on metro monitoring
- the European Esprit HPCN PASSWORDS Project
- the SAMSIT project for onboard train monitoring
- Incremental Learning of Events in Video
For more information, see the complete list of 43 Research Projects