Today I would like to talk about a very interesting computer vision technique for controlling robots: Visual Servoing. Also called vision-based robot control, it is a technique that uses the information gathered from a vision sensor (usually a camera) to control the motion of a robot.
A very good starting point is the tutorial Visual Servo Control, Part I: Basic Approaches by F. Chaumette and S. Hutchinson, published in IEEE Robotics and Automation Magazine, 13(4):82-90, December 2006.
The basic idea of a vision-based control scheme is to minimize the error between a set of measurements (usually the x,y coordinates of several image features) taken at the goal point of view and the set of measurements taken at the current point of view.
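This error minimization is what the classic image-based control law in the tutorial does: with feature error e = s - s*, it commands a camera velocity v = -λ L⁺ e, where L is the interaction matrix (image Jacobian). Here is a minimal sketch in Python/NumPy; the function names and gain value are my own, and normalized image coordinates with known feature depths Z are assumed:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix of one normalized image
    point (x, y) at depth Z, relating the point's image motion to the
    camera velocity screw (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, depths, lam=0.5):
    """Image-based control law v = -lam * L^+ * (s - s*), stacking one
    interaction matrix per feature point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    e = (np.asarray(s, float) - np.asarray(s_star, float)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

When the current measurements match the goal measurements the commanded velocity is zero; otherwise the law drives the feature error toward zero at a rate set by the gain λ.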
In this post I will not go into the details of the method; for those I refer you to the article cited above. Instead, I would like to show you a video of the simulated robot I am working with performing visual servo control.
First the robot is activated and it learns what the target pattern looks like from the starting point of view (this will be the "goal" point of view). Then the target pattern is moved 6 cm up. Obviously, at this new position the view of the target that the camera sees is different than before, so the robot adapts its pose to match the original point of view.
Later the target is moved 50 cm towards the robot, and again the robot moves until the goal view of the target is recovered.
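The behaviour in the demo can be sketched as a closed loop: store the goal view, project the (moved) target points, compute the feature error, and command a velocity until the error vanishes. Below is a toy, translation-only simulation of the "6 cm up" step; the point layout, gain, and step size are illustrative assumptions, not the values used in my MRDS/EmguCV code:

```python
import numpy as np

# Four coplanar target points (world frame, metres), 1 m in front of the camera.
P = np.array([[ 0.1,  0.1, 1.0],
              [-0.1,  0.1, 1.0],
              [ 0.1, -0.1, 1.0],
              [-0.1, -0.1, 1.0]])

def project(points, cam):
    """Normalized image coordinates of `points` seen from a camera at
    position `cam` (camera axes aligned with the world frame, no rotation)."""
    rel = points - cam
    return rel[:, :2] / rel[:, 2:3]

cam = np.zeros(3)
s_star = project(P, cam)                     # learn the goal view at start-up
P_moved = P + np.array([0.0, 0.06, 0.0])     # target pattern moved 6 cm up

lam, dt = 1.0, 0.05
for _ in range(300):
    s = project(P_moved, cam)
    e = (s - s_star).ravel()                 # feature error vs. goal view
    Z = (P_moved - cam)[:, 2]                # current feature depths
    # Translation-only interaction matrix rows: [-1/Z, 0, x/Z; 0, -1/Z, y/Z]
    L = np.vstack([np.array([[-1.0 / z, 0.0, x / z],
                             [0.0, -1.0 / z, y / z]])
                   for (x, y), z in zip(s, Z)])
    v = -lam * np.linalg.pinv(L) @ e         # commanded camera velocity
    cam = cam + v * dt                       # integrate camera position

# The camera ends up 6 cm higher, recovering the goal view.
```

The feature error decays roughly exponentially, so the camera settles at a position 6 cm above its start, exactly mirroring the first step of the demo.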
The demo has been developed using Microsoft Robotics Developer Studio and EmguCV.
4 comments:
I can't see the video, you geek!
OK, it was ABP.
Awww, Alma de Pollo! hehehe
Hello,
could I have details? I have a project about it.
I need to know the key points about virtual calibration.