
Monday, November 19, 2012


THE NEW TSUKUBA STEREO DATASET IS OUT!

Finally! After so much work, we released the "New Tsukuba Stereo Dataset" at ICPR2012.


This dataset contains 1800 stereo pairs with ground truth disparity maps, occlusion maps and discontinuity maps that will help to further develop the state of the art of stereo matching algorithms and evaluate their performance. It has been generated using photo-realistic computer graphics techniques and is modeled after the original "head and lamp" stereo scene released by the University of Tsukuba in 1997.

The dataset is a 1-minute video sequence and also contains the 3D position and orientation of the camera in each frame, so it can also be used to develop and evaluate camera tracking methods.

It can be downloaded freely from the CVLAB website.

Enjoy it!

Friday, January 6, 2012


3D RECONSTRUCTION WITH OPENCV AND POINT CLOUD LIBRARY

Hi everybody!

I bring you a sample of how to reconstruct a scene in 3D using OpenCV and Point Cloud Library (PCL) with a simple program and an example scene.

All we need is the left image of our stereo camera:

(You can implement your own cheap stereo webcam following this post: OpenCV Stereo Webcam)

The disparity map generated with your preferred Stereo Matching algorithm:

(For example you can use OpenCV's stereoBM algorithm: OpenCV StereoBM)

And the reprojection matrix (Q) obtained at calibration time:


<?xml version="1.0"?>
<opencv_storage>
<Q type_id="opencv-matrix">
  <rows>4</rows>
  <cols>4</cols>
  <dt>d</dt>
  <data>
    1. 0. 0. -2.9615028381347656e+02
    0. 1. 0. -2.3373317337036133e+02
    0. 0. 0. 5.6446880931501073e+02
    0. 0. -1.1340974198400260e-01 4.1658568844268817e+00
  </data></Q>
</opencv_storage>
(You can get the matrix Q for your own stereo camera following the instructions in this post: OpenCV Camera Calibration)
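Under the hood, the program essentially does the following. This is a minimal sketch of my own, not the exact downloaded source; it assumes the same file names as the command further below, and uses OpenCV's cv::reprojectImageTo3D plus PCL's PCD writer:

#include <opencv2/opencv.hpp>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>

int main(int argc, char **argv)
{
    // Usage: ./sketch rgb-image.ppm disparity-image.pgm Q.xml
    cv::Mat rgb = cv::imread(argv[1]);
    cv::Mat disparity = cv::imread(argv[2], 0); // 0 = load as grayscale

    // Load the reprojection matrix Q computed at calibration time
    cv::Mat Q;
    cv::FileStorage fs(argv[3], cv::FileStorage::READ);
    fs["Q"] >> Q;

    // Turn every (pixel, disparity) pair into an (X, Y, Z) point
    cv::Mat xyz;
    cv::reprojectImageTo3D(disparity, xyz, Q, true);

    // Copy the points, with their color, into a PCL cloud and save it
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    for (int y = 0; y < xyz.rows; y++) {
        for (int x = 0; x < xyz.cols; x++) {
            cv::Point3f p = xyz.at<cv::Point3f>(y, x);
            cv::Vec3b c = rgb.at<cv::Vec3b>(y, x);
            pcl::PointXYZRGB point;
            point.x = p.x; point.y = p.y; point.z = p.z;
            point.r = c[2]; point.g = c[1]; point.b = c[0]; // OpenCV stores BGR
            cloud.push_back(point);
        }
    }
    pcl::io::savePCDFileBinary("cloud.pcd", cloud);
    return 0;
}

The real program adds visualization and error handling on top of this, but the reprojection itself is that simple.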

Now download the source code (I highly recommend reading it to understand what is going on; don't worry, there are comments :P):


[NOTE]: You will need to have the OpenCV library installed (you can get it here) and the Point Cloud Library (you can get it here). You will also need CMake to generate the Makefiles.

Once you have downloaded the source code and installed the dependencies, just run:

tar xzvf OpenCVReprojectImageToPointCloud-1.0.tgz
cd OpenCVReprojectImageToPointCloud
cmake .
make
./OpenCVReprojectImageToPointCloud rgb-image.ppm disparity-image.pgm Q.xml

You should see something similar to the following video:


I hope you enjoy it!
[UPDATE: 04/02/2012] I have released a bug-fix. Thanks to Chris for pointing it out.

Monday, November 14, 2011


OPENCV: STEREO WEBCAM

Today, an easy one :) A small program in OpenCV to implement a cheap stereo webcam and visualize the left and right images. The only things you need are two USB webcams plugged into your computer (better if both are the same brand and model):



Download the source code:


Compile it:

tar xzvf OpenCVStereoWebcam-1.0.tgz
cd OpenCVStereoWebcam
make
./OpenCVStereoWebcam

And there you go, your cheap USB stereo webcam.
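In essence, the program boils down to something like this (a simplified sketch, not the exact source above; the device indices 0 and 1 are assumptions that depend on your system):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capLeft(0), capRight(1);
    if (!capLeft.isOpened() || !capRight.isOpened())
        return -1;

    cv::Mat left, right;
    for (;;) {
        // grab() both frames first so they are as close in time as
        // possible, then decode them with retrieve()
        capLeft.grab();
        capRight.grab();
        capLeft.retrieve(left);
        capRight.retrieve(right);

        cv::imshow("Left", left);
        cv::imshow("Right", right);
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}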


Have fun!

Monday, October 10, 2011


STEREO VISION: 3D RECONSTRUCTION ERROR

Today I would like to publish the answer to another question from the comments of a previous post, as it might be worth its own post:

Hi Martin,

You gave such an informative article. Good Job Martin:-)

I'll explain the steps that I performed in calculating the distance of an object.

1. Calibrated the stereo camera with an 8x6 chessboard with 2.8cm square size.

2. My calibration was successful with an RMS error of 0.206 and the rectified images were good.

3. I found my interest point in the left and right images. I tried to determine the distance of the object using the method you specified. There is an error in the measured distance, from 3cm for closer objects to 12cm or more for distant objects.
Say, actual - 27cm ; measured - 24cm.
actual - 60cm ; measured - 48cm.


My queries are:
- Why is there such a big variation in the distance measurement?
- What may be the reason for / solution to this error?

Is there any mistake in my procedure, or did I miss any parameters?

Regards
Sathya Kumar

Dear Sathya,

First of all, thank you very much for your kind comment. Regarding your queries, I am afraid that what you describe is quite normal. The steps that you followed are correct, so I don't think you made any mistakes in your procedure or missed any parameters.

The thing is that the relation between disparity and depth is non-linear.




For this reason there is such a big variation in the distance measurement error: for close objects a small variation in disparity means a small variation in depth, but for far objects a small variation in disparity means a big variation in depth.
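To put a number on it: for a rectified stereo pair with focal length f (in pixels) and baseline B, depth is recovered from disparity d as Z = fB/d. Differentiating (a standard result, not specific to your setup) shows that a fixed matching error of \Delta d pixels produces a depth error that grows with the square of the distance:

\Delta Z \approx \frac{fB}{d^{2}}\,\Delta d = \frac{Z^{2}}{fB}\,\Delta d

That is roughly consistent with the numbers you report: about 3cm of error at 27cm, and around four times as much at roughly twice the distance.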

So, there is no easy solution: a bigger error for distant objects is unavoidable, but you can mitigate the effects and reduce the error by getting as good a calibration as possible. To do that, you should capture the calibration pattern in as many different positions and orientations as possible: far, near, inclined, etc... Also, slightly increasing the size of the chessboard could give you better accuracy for distant objects.

I hope this solved your doubts.

Best regards,
Martin

Sunday, September 4, 2011


OPENCV: PSEUDOCOLOR AND CHROMA DEPTH

Hi everybody!

A couple of days ago one of the readers of the blog asked this in the comments of a previous post:

Hi,

I'm Mike. Nice work! :)

I saw that some people have coloured depth maps and not just shades of grey/black/white. If I want to have coloured depth maps how can I achieve this?

Thanks,
Mike.

Well Mike, some time ago I had the same question, and after researching a bit I found two approaches: Pseudo Color and Chroma Depth. Let me answer your question with a blog post, so anybody else who might be interested can find it easily.

Basically, what we have is a function that takes a grayscale value as input and returns the corresponding RGB color. The Pseudo Color approach uses trigonometric functions to assign an RGB color to each gray value. The next picture shows an example:


As you can see, the output is somewhat similar to the images from the luggage scanners in airports and train stations.
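For illustration, such a mapping can be as simple as three phase-shifted sine waves. This is a sketch of my own; the exact formula and phase offsets used in the application below may differ:

#include <cmath>

// Map a gray value in [0,255] to RGB using phase-shifted sine waves.
// The pi/3 offsets are just one common choice.
void grayToPseudoColor(unsigned char gray,
                       unsigned char &r, unsigned char &g, unsigned char &b)
{
    double t = (gray / 255.0) * M_PI;  // map [0,255] to [0,pi]
    r = (unsigned char)(255.0 * std::fabs(std::sin(t)));
    g = (unsigned char)(255.0 * std::fabs(std::sin(t + M_PI / 3.0)));
    b = (unsigned char)(255.0 * std::fabs(std::sin(t + 2.0 * M_PI / 3.0)));
}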

But for the purpose of depth visualization, maybe it is more convenient to use Chroma Depth. This method assigns red to high depth values, magenta to low depth values and rainbow colors to anything in between, as in the next picture:



To make it easy, you can download the simple Gtk application that I made to visualize the output of both methods and play around with some parameters. The source code should be self-explanatory, so I will skip the details in this post.


Just execute "make" and then "./GrayToPseudocolor -gray /path/to/your/image.something"
I would like to thank Sarah Martull for letting me use her depth image for this post.

Saturday, August 27, 2011


STEREO VISION: WHY IS CAMERA CALIBRATION NEEDED?

As you might well know, if you have 2 different views of the same scene then you can estimate the 3D coordinates of any point in the scene by finding the position of that point in the left image and in the right image and then applying some trigonometry.

Let's assume that we have no previous information about the relation between the cameras. We can find a point of interest in the left image, but we don't know where that point of interest will appear in the right image. So, what do we do? We have no other option than to scan the whole right image looking for our point of interest.

Now I can hear you say: "But that would be soooo slooooow!!". Yep, you are absolutely right, that brute force approach is really slow. But, if we know the relation between both cameras then we can calculate something called epipolar lines.

What is so special about these lines? Well, the magic of these lines is that a point in the left image will always have its corresponding point in the right image lying on its epipolar line! So, now instead of having to scan the whole right image to find our matching point, we only have to look along a single line :)
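In OpenCV, this relation is encoded by the fundamental matrix F: a point p in the left image maps to the epipolar line l' = F·p in the right image. Here is a minimal sketch, assuming you already have F (for example from cv::findFundamentalMat):

#include <opencv2/opencv.hpp>
#include <vector>

// For each point in the left image, compute the coefficients (a, b, c)
// of its epipolar line ax + by + c = 0 in the right image
std::vector<cv::Vec3f> epipolarLinesInRight(
    const std::vector<cv::Point2f> &leftPoints, const cv::Mat &F)
{
    std::vector<cv::Vec3f> lines;
    cv::computeCorrespondEpilines(leftPoints, 1, F, lines); // 1 = left image
    return lines;
}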

But wait!! There is more!! If the cameras are completely parallel then something very special happens... the epipolar lines become parallel. This means that the match of a point in the left image will appear in the exact same line on the right image! Isn't that awesome?

That is one of the main reasons for wanting to calibrate your stereo camera. Another good reason is that the camera lenses introduce some distortion. That distortion makes straight lines in the real world appear curved in the image... and you don't like that, do you?

Let me show you a couple of images just to clarify.


The image above shows a couple of images taken with an uncalibrated stereo camera. The cameras are more or less parallel, but they are not perfectly aligned. Do you see the red point? See how it is not on the same line in the left and right images?

Now, almost all the methods used to calculate a dense disparity map rely on calibrated images, so if we try to use these images to calculate a dense disparity map we will get really poor results. You can check it in the next picture:


But now, if we apply the magic of calibration:


The image above has been rectified and undistorted (notice the black borders around the images; they are the result of removing the distortion and aligning the images so the epipolar lines are parallel and fall on the same row in both images). See the green point? Do you see how it appears on the same row in both images?

Now, if we use this to calculate the dense disparity map:


There it is, much better results!!

To sum up, if you want to get the best out of stereo vision:

  1. Make sure that your cameras are as parallel as possible.
  2. Calibrate the stereo camera. See this post for instructions: http://blog.martinperis.com/2011/01/opencv-stereo-camera-calibration.html
  3. Tune the parameters of your stereo matching algorithm. See this post to get an example: http://blog.martinperis.com/2011/08/opencv-stereo-matching.html
  4. Have fun with it!
As always, questions are welcome and any comments will be appreciated ;)

Sunday, August 21, 2011


OPENCV: STEREO MATCHING

In a previous post I talked about how to calibrate a stereo camera using OpenCV. Today, I would like to talk about the next step. Once your stereo camera is calibrated you can estimate the 3D position (relative to the camera) of any object given its position in the left and right image. For that, we need to calculate the stereo disparity for that object (stereo disparity = the difference in image location of an object seen by the left and right camera). If we want to know the 3D position of all points in a stereo pair of images, then we want to compute a dense disparity map. And that is what this post is about.

A dense disparity map looks like this:



I am not going to explain the details or the math behind it; I am more of a practical kind of guy. So let's start.
Basically, OpenCV provides 2 methods to calculate a dense disparity map: cvFindStereoCorrespondenceBM (block matching) and cvFindStereoCorrespondenceGC (graph cuts).
In this post I will focus on cvFindStereoCorrespondenceBM; this method is based on Konolige's Block Matching Algorithm. The OpenCV call looks like this:

void cvFindStereoCorrespondenceBM(const CvArr* left, const CvArr* right, CvArr* disparity, CvStereoBMState* state)

The structure CvStereoBMState contains all the parameters that are applicable to the algorithm. There are a bunch of them (pre-filtering, Sum of Absolute Differences window size, disparity search range, post-filtering...). So, to make it easy, I implemented a small Gtk application that takes 2 images (left image and right image), calculates the disparity map using cvFindStereoCorrespondenceBM and allows you to play with the parameters.
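For reference, this is roughly what happens on every parameter change (a simplified sketch of my own, not the tuner's actual source; the values below are only starting points, and finding better ones is the whole point of the application):

#include <opencv/cv.h>

void computeDisparity(IplImage *left, IplImage *right, IplImage *disparity)
{
    // left and right must be 8-bit single-channel; disparity should be
    // CV_16S (the result is a fixed-point disparity, scaled by 16)
    CvStereoBMState *state = cvCreateStereoBMState(CV_STEREO_BM_BASIC, 64);

    state->preFilterSize       = 9;   // pre-filtering
    state->preFilterCap        = 31;
    state->SADWindowSize       = 21;  // Sum of Absolute Differences window
    state->minDisparity        = 0;   // disparity search range...
    state->numberOfDisparities = 64;  // ...must be a multiple of 16
    state->textureThreshold    = 10;  // post-filtering
    state->uniquenessRatio     = 15;

    cvFindStereoCorrespondenceBM(left, right, disparity, state);
    cvReleaseStereoBMState(&state);
}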

The application is written in C and can be downloaded here: StereoBMTuner-1.0. The application depends on the libraries gtk+-2.0, gmodule-2.0 and opencv. Be sure to have them installed on your system.

Once the file is downloaded just execute:

tar xzvf stereoBMTunner-1.0.tgz
cd StereoBMTunner
make
./main

The last command will execute the application:


As you can appreciate, the disparity map generated using the default parameters is hardly similar to the first image in this post. But you can tune the parameters until you get a clearer disparity map. This video shows the application in use:



Once the parameters are tuned, the disparity map is much better:


It is still not perfect, but it is not so bad either.
Now, to use this application with your own pair of images, the only thing you need to do is run it like this:

./main -left /path/to/my/image/left -right /path/to/my/image/right

And that's it. Please leave a comment if you found this useful, have any problems, questions, suggestions, impressions, etc...

Saturday, July 23, 2011


OPENCV 2 COMPUTER VISION APPLICATION PROGRAMMING COOKBOOK

Many people contact me through this blog to ask the following question: "Hey Martin! How could I get started in the world of Computer Vision?". Well, this book is the answer.



"OpenCV 2 Computer Vision Application Programming Cookbook" is more than just a Cookbook. The author, Robert Laganiere, makes no assumptions regarding the level of knowledge of the reader, so he starts from the basics and goes into more complex subjects progressively. It doesn't matter if you are a total beginner or an experienced user of OpenCV, all the explanations are complete and easy to follow. When the author considers that the audience could be eager to get more details about any of the topics covered in the book, he provides the appropriate bibliography

The code examples are programmed in C++, keeping performance in mind and always trying to get the best out of the Object Oriented programming paradigm. Actually, even an experienced programmer can learn many tips about OO programming best practices from this book. As I said: more than just a Cookbook.

You can see it for yourself by reading the sample chapter that you can find here.

Of course, the functionality and capabilities of OpenCV far exceed what can be covered in only one book, but thanks to this cookbook you will have no problem mastering OpenCV and will be ready to unleash all the potential of the most used Computer Vision library.


Tuesday, June 14, 2011


OPENCV: NEW COOKBOOK

Last week, a new book about OpenCV came to my attention: OpenCV 2 Computer Vision Application Programming Cookbook. Stay tuned for the upcoming review on this blog!

Sunday, January 30, 2011


VISUAL SERVOING WITH MRDS AND EMGUCV

Today I would like to talk about a very interesting computer vision technique to control robots: Visual Servoing. Also called Vision-Based Robot Control, it is a technique which uses the information gathered from a vision sensor (usually a camera) to control the motion of a robot.

A very good starting point is the tutorial Visual Servo Control, Part I: Basic Approaches by F. Chaumette and S. Hutchinson, published in IEEE Robotics and Automation Magazine, 13(4):82-90, December 2006.

The basic idea of a vision-based control scheme is to minimize the error between a set of measurements (usually the x,y coordinates of several image features) taken at the goal point of view and the set of measurements taken at the current point of view.
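In the notation of that tutorial: if s(t) is the vector of current measurements and s* the one taken at the goal point of view, the error is simply e(t) = s(t) - s*, and the basic control law commands a camera velocity v_c through the pseudo-inverse of an estimate of the interaction matrix L_e, with a positive gain \lambda:

e(t) = s(t) - s^{*}, \qquad v_c = -\lambda\,\widehat{L}_e^{+}\,e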

In this post I will not go into the details of the method; for that I refer you to the article cited above. Instead, I would like to show you a video of the simulated robot I am working with doing Visual Servo Control.



First the robot is activated and it learns what the target pattern "looks like" from the starting point of view (this will be the "goal point of view"). Then the target pattern is moved 6cm up. Obviously, at this position the view of the target that the camera sees is different from before, so the robot adjusts itself to match the original point of view.

Later the target is moved 50cm towards the robot and again it moves itself so the goal point of view of the target is achieved. 

The demo has been developed using Microsoft Robotics Developer Studio and EmguCV.

Sunday, January 9, 2011


OPENCV: STEREO CAMERA CALIBRATION

One of the basic tasks in Computer Stereo Vision is to calibrate the stereo camera in order to obtain the parameters that will allow you to calculate 3D information of the scene.

Now, I could tell you a lot of stuff about camera projection models, stereoscopy, lens distortion, etc... but there is a lot of information available about such topics out there. So, this post is for those who simply need to calibrate a stereo camera system and start calculating 3D stuff right away using OpenCV.

Anyway, I strongly recommend that you read the book Learning OpenCV: Computer Vision with the OpenCV Library by Gary Bradski and Adrian Kaehler, published by O'Reilly Media, October 3, 2008.

So... what do I need to calibrate my stereo camera? A chessboard like this:

Why a chessboard? Because its corners are very easy to find using computer vision algorithms and its geometry is very simple. In order to find out the position of any corner, you only need to know how many horizontal and vertical squares there are in the chessboard and the size of a square. The chessboard in the image is a 9x6 chessboard, and if you print it on a sheet of A4 paper the squares will be more or less 2.5cm in size.
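That simple geometry is all the calibration code needs to build the 3D "object points" of the pattern. Here is a sketch of the usual way to generate them (my own illustration, assuming the first corner as origin and the board lying on the Z = 0 plane; the actual program below comes from the book's sample code, so details may differ):

#include <opencv2/opencv.hpp>
#include <vector>

// nx, ny and squareSize are the same values passed to stereo_calibrate
std::vector<cv::Point3f> chessboardCorners(int nx, int ny, float squareSize)
{
    std::vector<cv::Point3f> corners;
    for (int j = 0; j < ny; j++)
        for (int i = 0; i < nx; i++)
            corners.push_back(cv::Point3f(i * squareSize, j * squareSize, 0.0f));
    return corners;
}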

OK, I've printed my chessboard and I have measured the real size of the squares, now what?
Now you just take multiple views of the chessboard in different positions and orientations with your stereo camera using your favorite software (maybe your own software, software provided by your camera manufacturer or some other free software like Coriander). The images should look like this:

(Yeah, that is me in Hawaiian shorts on a summer day :P)
The more varied the positions and orientations of the chessboard in the images, the better.
Great, you have taken a lot of shots of the chessboard in different positions, now create a text file with the paths to the images. For example:

images/left01.ppm
images/right01.ppm
images/left02.ppm
images/right02.ppm
images/left03.ppm
images/right03.ppm
images/left04.ppm
images/right04.ppm
images/left05.ppm
images/right05.ppm
images/left06.ppm
images/right06.ppm
images/left07.ppm
images/right07.ppm
images/left08.ppm
images/right08.ppm
images/left09.ppm
images/right09.ppm
images/left10.ppm
images/right10.ppm
images/left11.ppm
images/right11.ppm
images/left12.ppm
images/right12.ppm
images/left13.ppm
images/right13.ppm
images/left14.ppm
images/right14.ppm
images/left15.ppm
images/right15.ppm
images/left16.ppm
images/right16.ppm
images/left17.ppm
images/right17.ppm
images/left18.ppm
images/right18.ppm
images/left19.ppm
images/right19.ppm
images/left20.ppm
images/right20.ppm
Now download this software and compile it.



It is just one of the examples of the book mentioned above that I modified to accept some configuration parameters and store the results of the calibration. The usage of the software is as follows:

USAGE: ./stereo_calibrate imageList nx ny squareSize
imageList : Filename of the image list (string). Example : list.txt
nx : Number of horizontal squares (int > 0). Example : 9
ny : Number of vertical squares (int > 0). Example : 6
squareSize : Size of a square (float > 0). Example : 2.5

So, in this example the call to the program stereo_calibrate would be:

./stereo_calibrate list.txt 9 6 2.5

The program will start by showing the detected chessboards, then calculate the calibration parameters and store them in a bunch of XML files:

D1.xml D2.xml
M1.xml M2.xml
mx1.xml mx2.xml
my1.xml my2.xml
P1.xml P2.xml
R1.xml R2.xml
Q.xml

Congratulations! You have calibrated your stereo camera!! Now you can load these parameters into any other program that uses that stereo camera and play with them:

// Q: the 4x4 reprojection (disparity-to-depth) matrix
CvMat *Q = (CvMat *)cvLoad("Q.xml",NULL,NULL,NULL);
// mx1/my1 and mx2/my2: rectification maps for the left and right camera
CvMat *mx1 = (CvMat *)cvLoad("mx1.xml",NULL,NULL,NULL);
CvMat *my1 = (CvMat *)cvLoad("my1.xml",NULL,NULL,NULL);
CvMat *mx2 = (CvMat *)cvLoad("mx2.xml",NULL,NULL,NULL);
CvMat *my2 = (CvMat *)cvLoad("my2.xml",NULL,NULL,NULL);

Each of the files contains a matrix. If you would like to know the meaning of each matrix, please refer to the book mentioned at the beginning of this post. Right now, the useful stuff is contained in the files mx1.xml, my1.xml, mx2.xml, my2.xml and Q.xml.

The files m*.xml contain the distortion models of the cameras, so we will need these matrices to undo the distortion of the images caused by the lenses, using the cvRemap() function:

cvRemap(imgLeftOrig, imgLeftUndistorted, mx1, my1);
cvRemap(imgRightOrig, imgRightUndistorted, mx2, my2);

The goal of all this is to be able to calculate the 3D position (in meters, cm, mm or whatever unit you chose) of a point given its position (in pixels) in the left image and its correspondent in the right image. We are almost there, but for that we need the matrix Q. Given the position of an interest point in the left and right images, its 3D position can be calculated as follows:

d = pointRightImage.X - pointLeftImage.X; // disparity of the interest point

// Homogeneous coordinates, exploiting the zero entries of Q
X = pointLeftImage.X * Q[0, 0] + Q[0, 3];
Y = pointLeftImage.Y * Q[1, 1] + Q[1, 3];
Z = Q[2, 3];
W = d * Q[3, 2] + Q[3, 3];

// Normalize to get the metric 3D coordinates
X = X / W;
Y = Y / W;
Z = Z / W;
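For reference, these assignments are just the expansion of the homogeneous reprojection encoded by Q (the same operation that cvReprojectImageTo3D applies to every pixel):

\begin{bmatrix} X & Y & Z & W \end{bmatrix}^{T} = Q\,\begin{bmatrix} x & y & d & 1 \end{bmatrix}^{T}, \qquad \text{3D point} = \left(\tfrac{X}{W},\ \tfrac{Y}{W},\ \tfrac{Z}{W}\right)

where (x, y) is the position of the point in the left image and d its disparity.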

And that's pretty much it; now you know how to calculate 3D positions from 2 images using OpenCV. How to find interest points in one image and their correspondents in the other is an art that will be explained another day ;)

EDIT [2011/06/16]: Many people asked me about a good book to get started in OpenCV, so take a look at this:  OpenCV 2 Computer Vision Application Programming Cookbook

EDIT [2011/08/22]: What next? Check out the following post: OpenCV: Stereo Matching

EDIT [2011/08/27]: I changed the Makefile of the software for one much simpler and without hard-coded stuff. Also hosted the software in my own server (it was in megaupload, sorry about that), the link to the software in this post has been updated. Here it is, just in case: http://www.martinperis.com/stereocalibration/OpenCV-Stereo-Calibration.tgz 
EDIT [2012/01/05]: My hosting bandwidth was not enough to handle the traffic so I had to host the software on googlecode, the link to the software in this post has been updated. Here it is, just in case: http://opencv-stereo-calibration.googlecode.com/files/OpenCV-Stereo-Calibration-1.0.tgz

Wednesday, December 29, 2010


MICROSOFT ROBOTICS DEVELOPER STUDIO + OPENCV

In a previous post I explained how to add a simulated stereo camera to your simulated robot. OK! Having a stereo camera is fun... but processing the images and getting some interesting results is even more fun!!

Now, you could implement several Computer Vision techniques by yourself or take advantage of existing libraries.

For that matter, one of the most used libraries for computer vision is OpenCV. There is only a small problem... OpenCV is intended to be used under C/C++ and Microsoft Robotics Developer Studio is programmed in C#.

The solution to that problem is called EmguCV. Emgu CV is a cross-platform .NET wrapper for the Intel OpenCV image processing library, allowing OpenCV functions to be called from .NET compatible languages such as C#, VB, VC++, IronPython, etc. The wrapper can be compiled in Mono and run on Linux / Mac OS X.

In order to use it in your MRDS projects you should:

1. Install EmguCV
2. Add a reference to Emgu.CV and Emgu.Util to your project in Visual Studio.


3. Add the needed "using" statements at the beginning of your source code



using Emgu.CV;
using Emgu.Util;




And that is it. Now you can use all the functionality of OpenCV in your robots (real or simulated).

Sunday, December 5, 2010


MRDS: SIMULATED STEREO CAMERA

During the last few weeks I have been working with Microsoft Robotics Developer Studio. It incorporates a very powerful simulation environment that enables you to visualize your robot and even program it before actually building it.

By default, a lot of simulated entities that are basic for robotics are provided: Simulated IR Distance Sensor, Simulated Sonar, Simulated Webcam, Simulated GPS Sensor, Simulated Laser Range Finder, etc...

But I was missing another basic entity often used by robots: a stereo camera. After reading some posts on the MRDS forum I could not find a suitable solution (easy and fast to implement), so I just built my own.

If you want to simulate a stereo camera in Microsoft Robotics Developer Studio, you can follow these 3 steps:

Step 1- Add this class to your project:


public class StereoCameraEntity
    {
        public CameraEntity leftCam;
        public CameraEntity rightCam;
 
        public StereoCameraEntity()
        {
        }
 
        public StereoCameraEntity(
            String parentEntityName,// The name of the parent entity (used to generate a unique name) 
            int viewSizeX, //Image width  
            int viewSizeY, //Image height 
            float viewAngle,//View angle of the camera in degrees  
            Vector3 position,//Position of the center of the stereo camera 
            float baseLine,//Distance between the center of the cameras in centimeters 
            bool isRealTime)//Renders every frame 
        {
            //Initialize left camera 
            leftCam = new CameraEntity(
                viewSizeX,
                viewSizeY,
                (float)(viewAngle * Math.PI / 180.0));
            leftCam.State.Name = parentEntityName + "_LeftCam";
            leftCam.IsRealTimeCamera = isRealTime;
            leftCam.State.Pose.Position = new Vector3(
                position.X - (baseLine / 100.0f) / 2.0f ,
                position.Y,
                position.Z);
 
            //Initialize right camera 
            rightCam = new CameraEntity(
                viewSizeX,
                viewSizeY,
                (float)(viewAngle * Math.PI / 180.0));
            rightCam.State.Name = parentEntityName + "_RightCam";
            rightCam.IsRealTimeCamera = isRealTime;
            rightCam.State.Pose.Position = new Vector3(
                position.X + (baseLine / 100.0f) / 2.0f ,
                position.Y,
                position.Z);
        }
    }

This class is just a wrapper for CameraEntity. When calling the constructor for the new class that you just created you need to provide:

  • parentEntityName: A string containing the name of the parent entity (It will be used to create a unique name for the cameras)
  • viewSizeX: An integer for the horizontal resolution of the cameras (In pixels).
  • viewSizeY: An integer for the vertical resolution of the cameras (In pixels).
  • viewAngle: A float for the view angle of the cameras (in degrees).
  • position: A Vector3 containing the position of the stereo camera (in meters).
  • baseLine: The separation between the cameras (in centimeters).
  • isRealTime: True to render every frame.
Step 2- Inside the constructor of your entity create a new StereoCameraEntity and insert it as a child:

StereoCameraEntity stereoCam = new StereoCameraEntity(
        name,
        320,
        240,
        30.0f,
        new Vector3(
                xLocation,
                yLocation,
                zLocation),
        10.0f,
        true);
InsertEntityGlobal(stereoCam.leftCam);
InsertEntityGlobal(stereoCam.rightCam);

This creates a new stereo camera with a resolution of 320x240 pixels, a field of view of 30 degrees, a base line of 10 centimeters, located at the point (xLocation, yLocation, zLocation) and rendering every frame.

Step 3- Modify the manifest of your project to include this:


<servicerecordtype>
        <dssp:contract>http://schemas.microsoft.com/2006/09/simulatedwebcam.html</dssp:contract>
        <dssp:service>http://localhost/MyRobot/LeftCam</dssp:service>
        <dssp:partnerlist>
          <dssp:partner>
            <dssp:service>http://localhost/MyRobot_LeftCam</dssp:service>
            <dssp:name>simcommon:Entity</dssp:name>
          </dssp:partner>
        </dssp:partnerlist>
      </servicerecordtype>

     <servicerecordtype>
        <dssp:contract>http://schemas.microsoft.com/2006/09/simulatedwebcam.html</dssp:contract>
        <dssp:service>http://localhost/MyRobot/RightCam</dssp:service>
        <dssp:partnerlist>
          <dssp:partner>
            <dssp:service>http://localhost/MyRobot_RightCam</dssp:service>
            <dssp:name>simcommon:Entity</dssp:name>
          </dssp:partner>
        </dssp:partnerlist>
      </servicerecordtype>

And that's it. This is the result of the simulation of a modified Corobot with a stereo camera: