During my career, I have set up four research teams in collaboration with my colleagues:
My main research activities concern Modeling, Control and Perception:
Robotic tasks can be defined by controlling the interaction with the environment while controlling the internal dynamics. When multiple information sources and/or multiple sensors are used, a set of parallel interactions is created, which can be modeled by a virtual parallel robot called the hidden robot. By studying the properties of this hidden robot, we are able to analyze and study the control laws. We strongly believe that the environment must be defined with knowledge of the task to be performed.
Intelligent and connected autonomous vehicles (or mobile robots) are the next technological revolution and will become part of the human environment. Robots must not only be automated; they must also be autonomous and safe in order to face and handle difficult situations. Evolving among humans requires not only safety but also proactivity in order to accomplish transportation tasks. Multi-robot architectures enhance both perception and action for robotic systems.
Omnidirectional visual servoing using different features (lines, points, ...) can be used in large environments. Complex systems such as parallel robots, humanoid robots, multi-arm robots, multi-robot systems, UAVs, and combinations of them are the main targeted robots. Adaptive control, predictive control and online learning allow the system to anticipate and adapt to the evolution of a dynamic environment.
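To make the point-feature case concrete, here is a minimal sketch of one step of classic image-based visual servoing (the widely used law v = -lambda * L^+ * e, with the standard interaction matrix of a normalized image point). The function names, gain value and depths are illustrative assumptions, not the specific controllers developed in this work.

```python
# Sketch of an image-based visual servoing step (assumed classic IBVS law).
import numpy as np

def point_interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Camera velocity screw (v, omega) driving the features to their goals.

    points, desired: lists of (x, y) normalized image coordinates.
    depths: estimated depth Z of each point (an assumption of the sketch).
    """
    # Stack the per-feature interaction matrices into one (2n x 6) matrix.
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    # Feature error, flattened to a (2n,) vector.
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    # Classic control law: v = -lambda * L^+ * e (pseudo-inverse of L).
    return -gain * np.linalg.pinv(L) @ e
```

With omnidirectional features the same structure applies; only the interaction matrix changes to match the chosen projection model.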
When sensing the environment, large field-of-view cameras, omnidirectional cameras, or even 360-degree cameras are often required. Generic camera models can be proposed in order to preserve the main geometric properties of the data. For instance, the unified spherical model has been shown to be a candidate model for fisheye cameras, and an enhanced version has been developed. New sensor technologies always require new models, and motivate the search for generic ones. We all dream of a generic multimodal model usable with different sets of sensors.
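The unified spherical model mentioned above can be sketched in a few lines: a 3D point is first projected onto the unit sphere, then through a pinhole shifted by a parameter xi along the optical axis. The intrinsics and xi below are illustrative values, not calibrated parameters from any particular camera.

```python
# Sketch of the unified spherical projection model for omnidirectional/
# fisheye cameras. xi, fx, fy, cx, cy are illustrative assumptions.
import numpy as np

def unified_projection(P, xi, fx=300.0, fy=300.0, cx=320.0, cy=240.0):
    """Project a 3D point P = (X, Y, Z) to pixel coordinates (u, v)."""
    X, Y, Z = P
    rho = np.sqrt(X**2 + Y**2 + Z**2)   # distance to the sphere centre
    denom = Z + xi * rho                # perspective divide shifted by xi
    x, y = X / denom, Y / denom         # normalized image coordinates
    return fx * x + cx, fy * y + cy     # apply pinhole intrinsics
```

Setting xi = 0 recovers the standard pinhole model, which is what makes the formulation generic across conventional and wide-angle sensors.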
Artificial intelligence is present today in the way tasks are defined and designed, in the way the environment is represented using multiple layers (metric, topological, semantic, social, ...), in the description of the knowledge associated with the task and its evolution through experience (reinforcement learning or deep learning), and in the cognitive mechanisms put in place to manage short-, medium- and long-term information. It is also present in exploration algorithms and strategies, and in collaborative perception and control algorithms for multi-robot systems, ...
AVSI: Active Vision and Sensor Integration
Current research activity (since 2000)
Current actions concern:
History of research activity in LASMEA
Some past actions were carried out by Jozeph Alizon (CR CNRS/LASMEA).
PAVA: Parallel Architecture for Vision Applications
Current research activity (since 2000)
No current action.
History of research activity in LASMEA |
Past actions concern:
Other parallel architectures for vision applications have been developed at LASMEA: PRIVE1, PRIVE2, TRANSVISION (T800, T9000, DEC Alpha),
the Beowulf Ossian machine (G4), ... For more details or further information, please contact Jean-Pierre Derutin at LASMEA.