A Unified Approach to Model-based and Model-free Visual Servoing

Standard vision-based control techniques can be classified into two groups: model-based and model-free visual servoing. Model-based visual servoing is used when a 3D model of the observed object is available. Using both the model and measured image features, one can estimate the pose of the camera with respect to the object frame. Thus, a robot with a camera mounted on its end-effector can be driven to any desired position. Obviously, if the 3D structure of the environment is completely unknown, model-based visual servoing cannot be used. In that case, robot positioning can still be achieved using a teaching-by-showing approach. This model-free technique, completely different from the previous one, needs a preliminary learning step during which a reference image of the scene is stored. After the camera or the object has been moved, the robot can be driven back to the reference position by visual servoing: when the current image observed by the camera is identical to the reference image, the robot has returned to the reference position. Both approaches are useful, but depending on the a priori knowledge we have of the scene, we must switch between them. The objective of this paper is to propose a unified approach to vision-based control which can be used with a zooming camera whether the model of the object is known or not. The key idea of the unified approach is to build a reference in a projective space, which can be computed either if the model is known or if an image of the object is available. Thus, only one low-level visual servoing technique needs to be implemented. The strength of our approach is that it retains the advantages of both model-based and model-free methods while avoiding some of their drawbacks.
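To illustrate the low-level servoing loop that both approaches ultimately rely on, here is a minimal sketch of the classical image-based control law v = -λ L⁺ (s - s*), where s are the current image features, s* the reference features, and L the interaction matrix. This is the standard textbook formulation, not the paper's specific unified law; the point feature, depth value, and gain below are illustrative assumptions.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Camera velocity screw from the feature error e = s - s*,
    using the pseudo-inverse of the interaction matrix L."""
    e = s - s_star                        # error between current and reference features
    return -gain * np.linalg.pinv(L) @ e  # classical control law v = -lambda * L^+ e

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one point feature (x, y) at depth Z,
    restricted to translational camera motion (2x3 case)."""
    return np.array([[-1.0 / Z, 0.0, x / Z],
                     [0.0, -1.0 / Z, y / Z]])

# Toy example: one point in normalized image coordinates, assumed depth Z = 1.
s = np.array([0.2, 0.1])       # current feature position
s_star = np.array([0.0, 0.0])  # reference position from the stored image
L = point_interaction_matrix(s[0], s[1], Z=1.0)
v = ibvs_velocity(s, s_star, L)
```

With this law the feature error decays exponentially (under the usual assumption that the estimated interaction matrix is accurate), which is why a single low-level loop can serve both the model-based and model-free settings once a common reference is defined.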
