
Monday, November 19, 2012


THE NEW TSUKUBA STEREO DATASET IS OUT!

Finally! After so much work, we released the "New Tsukuba Stereo Dataset" at ICPR 2012.


This dataset contains 1800 stereo pairs with ground-truth disparity maps, occlusion maps and discontinuity maps that will help to further develop the state of the art in stereo matching algorithms and evaluate their performance. It has been generated using photo-realistic computer graphics techniques and modeled after the original "head and lamp" stereo scene released by the University of Tsukuba in 1997.

The dataset is a one-minute video sequence that also contains the 3D position and orientation of the camera for each frame, so it can also be used to develop and evaluate camera-tracking methods.

It can be downloaded freely from the CVLAB website.

Enjoy it!

Thursday, July 14, 2011


KINECT SERVICES FOR MRDS AVAILABLE!

After the release of the Kinect SDK, here is what many people were waiting for: Kinect Services for MRDS.



This package allows you to use your Kinect within Microsoft Robotics Developer Studio and, moreover, use it in a simulated environment!

Good job MRDS team!

Sunday, February 6, 2011


MRDS: HOW TO ADD YOUR CUSTOM SIMULATED OBJECTS (AND NOT DIE TRYING)

As you might know, the simulator integrated in Microsoft Robotics Developer Studio can import 3D objects into your simulated world. The most common format used to import those objects is ".obj", a simple text format that most 3D design programs can export to.

For the project I am working on right now, I needed to insert a realistic 3D model of a human face into the simulation. The 3D face model was stored in two different sets of files:

Head 1.obj
Head 1.mtl
Head 1.bmp

Eyeballs 1.obj
Eyeballs 1.mtl
Eyeballs 1.bmp

The first set of files corresponds to the model of the head and the second set corresponds to the model of the eyes. The .obj files define the geometry of the objects, the .mtl define the properties of the materials of the objects and the .bmp is the texture.
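For reference, this is roughly how the pieces fit together. A minimal .obj file (hypothetical content shown here, not the actual head model) references the material library with an mtllib line, and the .mtl file in turn references the texture:

```
# minimal hypothetical .obj geometry
mtllib Head 1.mtl        # reference to the material file
v 0.0 0.0 0.0            # vertex positions
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0               # texture coordinates
vt 1.0 0.0
vt 0.0 1.0
usemtl defaultMat        # use a material defined in the .mtl
f 1/1 2/2 3/3            # one triangular face (vertex/texcoord indices)
```

Keep this chain of references in mind; it is exactly where things will go wrong later in this post.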

So, first let's try to insert the head. The source code would look like this; add it to the definition of the simulated world:

//Insert the head 
SingleShapeEntity head =
 new SingleShapeEntity(
  new SphereShape(
   new SphereShapeProperties(
    0,
    new Pose(),
    0.1f)),
  new Vector3(0.0f, 0.5f, -2f));
head.State.Assets.Mesh = "Head 1.obj";
head.SphereShape.SphereState.Material = new MaterialProperties("sphereMaterial", 0.5f, 0.4f, 0.5f);
head.State.Name = "Head";

SimulationEngine.GlobalInstancePort.Insert(head);
 


Ok, now compile and execute the simulation:


Uhmmm... WHERE IS MY HEAD? OK, don't panic! Let's check whether it is out there. Select "edit mode" in the simulator window:


Well, it looks like the head is inserted in the simulation, but for some esoteric reason we are not able to see it. Let's look around and try to find it. Use the mouse to rotate the camera in the simulated environment.


Did you see that?


There are some vertices there... wait a minute! Let's move the camera back about 20 meters.


There it is!! But it is a huge head!! No problem... just resize it. To do that, add the proper line of code before inserting the object into the simulation engine:

head.MeshScale = new Vector3(0.01f, 0.01f, 0.01f);



Great, but where is the texture? Hmm, that's odd. Why is it not showing the texture?

If you open the file "Head 1.obj" you will see something like this:

# 3ds Max Wavefront OBJ Exporter v0.97b - (c)2007 guruware
# File Created: 30.01.2011 20:28:48

mtllib Head 1.mtl

...

In the .obj file there is a reference to "Head 1.mtl". I'd bet the spaces in the file names freak out MRDS. Well... let's open that file in a text editor:

# 3ds Max Wavefront OBJ Exporter v0.97b - (c)2007 guruware
# File Created: 30.01.2011 20:28:48

newmtl defaultMat
 Ns 30.0000
 Ni 1.5000
 d 1.0000
 Tr 0.0000
 Tf 1.0000 1.0000 1.0000 
 illum 2
 Ka 0.5500 0.5500 0.5500
 Kd 0.5500 0.5500 0.5500
 Ks 0.0000 0.0000 0.0000
 Ke 0.0000 0.0000 0.0000
 map_Ka C:\Users\sama-sama\Desktop\Head 1\Head 1.bmp
 map_Kd C:\Users\sama-sama\Desktop\Head 1\Head 1.bmp

Interesting: the file "Head 1.mtl" has an absolute path reference to the file "Head 1.bmp" (which, by the way, doesn't even exist on my computer; it is a path on the computer of the designer who modeled the head. Thanks a lot, 3ds Max). Let's try something: remove the spaces from the file names and get rid of the absolute paths.

The files should now be named:

Head1.obj
Head1.mtl
Head1.bmp

Eyeballs1.obj
Eyeballs1.mtl
Eyeballs1.bmp

Now the content of Head1.obj should look like:

# 3ds Max Wavefront OBJ Exporter v0.97b - (c)2007 guruware
# File Created: 30.01.2011 20:28:48

mtllib Head1.mtl

...

And the content of Head1.mtl should be like:

# 3ds Max Wavefront OBJ Exporter v0.97b - (c)2007 guruware
# File Created: 30.01.2011 20:28:48

newmtl defaultMat
 Ns 30.0000
 Ni 1.5000
 d 1.0000
 Tr 0.0000
 Tf 1.0000 1.0000 1.0000 
 illum 2
 Ka 0.5500 0.5500 0.5500
 Kd 0.5500 0.5500 0.5500
 Ks 0.0000 0.0000 0.0000
 Ke 0.0000 0.0000 0.0000
 map_Ka Head1.bmp
 map_Kd Head1.bmp

Save the changes to all the files and execute again:


Yeah! Much better!! Now we only have to apply the same changes to the files for the eyes and insert them into the simulation. The final source code is:

//Insert the head 
SingleShapeEntity head =
 new SingleShapeEntity(
  new SphereShape(
   new SphereShapeProperties(
    0,
    new Pose(),
    0.1f)),
  new Vector3(0.0f, 0.5f, -2f));
head.State.Assets.Mesh = "Head1.obj";
head.SphereShape.SphereState.Material = new MaterialProperties("sphereMaterial", 0.5f, 0.4f, 0.5f);
head.State.Name = "Head";
head.MeshScale = new Vector3(0.01f, 0.01f, 0.01f);
SimulationEngine.GlobalInstancePort.Insert(head);
 
SingleShapeEntity eyeballs =
 new SingleShapeEntity(
  new SphereShape(
   new SphereShapeProperties(
    0,
    new Pose(),
    0.01f)),
  new Vector3(0.0f, 0.5f, -2f));
eyeballs.State.Assets.Mesh = "Eyeballs1.obj";
eyeballs.SphereShape.SphereState.Material = new MaterialProperties("sphereMaterial", 0.5f, 0.4f, 0.5f);
eyeballs.State.Name = "Eyeballs";
eyeballs.MeshScale = new Vector3(0.01f, 0.01f, 0.01f);
SimulationEngine.GlobalInstancePort.Insert(eyeballs);


Let me introduce you to Mary:




I would like to specially thank Sama-Sama Studio for providing several realistic head models for the experiments of my project.

Wednesday, December 29, 2010


MICROSOFT ROBOTICS DEVELOPER STUDIO + OPENCV

In a previous post I explained how to add a simulated stereo camera to your simulated robot. OK! Having a stereo camera is fun... but processing the images and getting some interesting results is even more fun!!

Now, you could implement several computer vision techniques yourself or take advantage of existing libraries.

For that matter, one of the most widely used libraries for computer vision is OpenCV. There is only a small problem... OpenCV is intended to be used from C/C++, and Microsoft Robotics Developer Studio is programmed in C#.

The solution to that problem is called Emgu CV. Emgu CV is a cross-platform .NET wrapper for the Intel OpenCV image-processing library, allowing OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++, IronPython, etc. The wrapper can be compiled in Mono and run on Linux / Mac OS X.

In order to use it in your MRDS projects you should:

1. Install EmguCV
2. Add references to Emgu.CV and Emgu.Util to your project in Visual Studio.


3. Add the needed "using" statements at the beginning of your source code



using Emgu.CV;
using Emgu.Util;




And that is it. Now you can use all the functionality of OpenCV in your robots (real or simulated).
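As a quick illustration, here is a minimal sketch of calling OpenCV through Emgu CV from C#. The file names are hypothetical, and exact type and method signatures vary between Emgu CV versions, so treat this as a sketch rather than copy-paste code:

```csharp
using System;
using Emgu.CV;            // core image types and OpenCV wrappers
using Emgu.CV.Structure;  // color structures such as Bgr and Gray
using Emgu.Util;

class VisionExample
{
    static void Main()
    {
        // Load an image, e.g. a frame grabbed from the (simulated) webcam.
        // "frame.png" is a hypothetical file name.
        Image<Bgr, byte> frame = new Image<Bgr, byte>("frame.png");

        // Convert to grayscale and apply a Gaussian blur, both via OpenCV.
        Image<Gray, byte> gray = frame.Convert<Gray, byte>();
        Image<Gray, byte> smoothed = gray.SmoothGaussian(5);

        smoothed.Save("processed.png");
    }
}
```

The same pattern applies whether the frames come from a real camera or from the simulated webcam service.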

Sunday, December 26, 2010


MRDS: MOBILE BASE + ROBOT ARM

Today I would like to explain how to attach a robot arm to a mobile base using the simulation environment of Microsoft Robotics Developer Studio. It is a very simple task, but it can get a bit tricky.

I have been following the examples and explanations in the book Professional Microsoft Robotics Developer Studio (you can find the source code of the examples at http://www.promrds.com ). In this post I will explain how to attach a simulated LynxL6Arm to a simulated Corobot.

First of all, open the project where the Corobot entity is defined in Visual Studio and add a reference to SimulatedLynxL6Arm.Y2007.M07.


Then edit the file Corobot.cs and locate the definition of the constructor public CorobotEntity(string name, Vector3 initialPos). At the end of the constructor insert an instance of the robotic arm like this:

//Insert the LynxL6Arm
LynxL6Arm.SimulatedLynxL6Arm l6arm = new LynxL6Arm.SimulatedLynxL6Arm(name + "_arm", new Vector3(0,0,0));
l6arm.State.Pose = new Pose(
        new Vector3(0, 0.23f, 0),
        Quaternion.FromAxisAngle(0, 1, 0, (float)(-Math.PI / 2)));


InsertEntityGlobal(l6arm);



Now, compile the project and run it. You should see something like this:




That was easy, wasn't it? But now, if you try to drive the Corobot around, you will find that IT WON'T MOVE! You will see the wheels spinning, but the robot doesn't move an inch from its position. WTF!

This is when you go crazy trying to figure out what is going on. You start reading the MRDS forums and find suggestions as crazy as "copy the source code of the robot arm definition inside the mobile base definition"... WHAT?? Come on! There must be a more elegant solution!!

And, indeed, there is. It took me a whole day to find out what was happening, but if you take a deep breath and focus, you eventually find the solution.

If you look at the picture above (the one with the yellow Lynx L6 arm on top of the Corobot), you will notice that the arm has a cylindrical base. Well, that base is, by default, defined to be kinematic, which means it will not follow the laws of physics and will stay stuck to the ground. That is why the robot will not move!

Now, the solution to this problem is to remove the kinematic flag from the arm instance that we just created. So the correct source code looks like this:


//Insert the LynxL6Arm
LynxL6Arm.SimulatedLynxL6Arm l6arm = new LynxL6Arm.SimulatedLynxL6Arm(name + "_arm", new Vector3(0,0,0));
l6arm.State.Pose = new Pose(
        new Vector3(0, 0.23f, 0),
        Quaternion.FromAxisAngle(0, 1, 0, (float)(-Math.PI / 2)));

//Remove the Kinematic flag!! Otherwise the mobile base gets stuck!
l6arm.State.Flags = l6arm.State.Flags & ~(EntitySimulationModifiers.Kinematic);
InsertEntityGlobal(l6arm);


Sunday, December 5, 2010


MRDS: SIMULATED STEREO CAMERA

During the last few weeks I have been working with Microsoft Robotics Developer Studio. It incorporates a very powerful simulation environment that lets you visualize and program your robot even before actually building it.

By default, many simulated entities that are basic for robotics are provided: Simulated IR Distance Sensor, Simulated Sonar, Simulated Webcam, Simulated GPS Sensor, Simulated Laser Range Finder, etc.

But I was missing another basic entity often used by robots: a stereo camera. After reading some posts on the MRDS forum, I could not find a suitable solution (easy and fast to implement), so I built my own.

If you want to simulate a stereo camera in Microsoft Robotics Developer Studio, you can follow these 3 steps:

Step 1- Add this class to your project:


public class StereoCameraEntity
    {
        public CameraEntity leftCam;
        public CameraEntity rightCam;
 
        public StereoCameraEntity()
        {
        }
 
        public StereoCameraEntity(
            String parentEntityName,// The name of the parent entity (used to generate a unique name) 
            int viewSizeX, //Image width  
            int viewSizeY, //Image height 
            float viewAngle,//View angle of the camera in degrees  
            Vector3 position,//Position of the center of the stereo camera 
            float baseLine,//Distance between the center of the cameras in centimeters 
            bool isRealTime)//Renders every frame 
        {
            //Initialize left camera 
            leftCam = new CameraEntity(
                viewSizeX,
                viewSizeY,
                (float)(viewAngle * Math.PI / 180.0));
            leftCam.State.Name = parentEntityName + "_LeftCam";
            leftCam.IsRealTimeCamera = isRealTime;
            leftCam.State.Pose.Position = new Vector3(
                position.X - (baseLine / 100.0f) / 2.0f ,
                position.Y,
                position.Z);
 
            //Initialize right camera 
            rightCam = new CameraEntity(
                viewSizeX,
                viewSizeY,
                (float)(viewAngle * Math.PI / 180.0));
            rightCam.State.Name = parentEntityName + "_RightCam";
            rightCam.IsRealTimeCamera = isRealTime;
            rightCam.State.Pose.Position = new Vector3(
                position.X + (baseLine / 100.0f) / 2.0f ,
                position.Y,
                position.Z);
        }
    }

This class is just a wrapper for CameraEntity. When calling the constructor for the new class that you just created you need to provide:

  • parentEntityName: A string containing the name of the parent entity (It will be used to create a unique name for the cameras)
  • viewSizeX: An integer for the horizontal resolution of the cameras (In pixels).
  • viewSizeY: An integer for the vertical resolution of the cameras (In pixels).
  • viewAngle: A float for the view angle of the cameras (in degrees).
  • position: A Vector3 containing the position of the stereo camera (in meters).
  • baseLine: The separation between the cameras (in centimeters).
  • isRealTime: True to render every frame.
Step 2- Inside the constructor of your entity create a new StereoCameraEntity and insert it as a child:

StereoCameraEntity stereoCam = new StereoCameraEntity(
        name,
        320,
        240,
        30.0f,
        new Vector3(
                xLocation,
                yLocation,
                zLocation),
        10.0f,
        true);
InsertEntityGlobal(stereoCam.leftCam);
InsertEntityGlobal(stereoCam.rightCam);

This creates a new stereo camera with a resolution of 320x240 pixels, a field of view of 30 degrees, a base line of 10 centimeters, located at the point (xLocation, yLocation, zLocation) and rendering every frame.

Step 3- Modify the manifest of your project to include this:


<servicerecordtype>
        <dssp:contract>http://schemas.microsoft.com/2006/09/simulatedwebcam.html</dssp:contract>
        <dssp:service>http://localhost/MyRobot/LeftCam</dssp:service>
        <dssp:partnerlist>
          <dssp:partner>
            <dssp:service>http://localhost/MyRobot_LeftCam</dssp:service>
            <dssp:name>simcommon:Entity</dssp:name>
          </dssp:partner>
        </dssp:partnerlist>
      </servicerecordtype>

     <servicerecordtype>
        <dssp:contract>http://schemas.microsoft.com/2006/09/simulatedwebcam.html</dssp:contract>
        <dssp:service>http://localhost/MyRobot/RightCam</dssp:service>
        <dssp:partnerlist>
          <dssp:partner>
            <dssp:service>http://localhost/MyRobot_RightCam</dssp:service>
            <dssp:name>simcommon:Entity</dssp:name>
          </dssp:partner>
        </dssp:partnerlist>
      </servicerecordtype>

And that's it. This is the result of the simulation of a modified Corobot with stereo camera:

Friday, August 20, 2010


ITINERA: THE MAKING OF

Here is a video that shows a little bit about the making of this robot.

First, it was designed using the software Virtual Robot Simulator (http://robotica.isa.upv.es/virtualrobot). This software is similar in some ways to Microsoft Robotics Developer Studio; the difference is that VRS was developed by the robotics group of UPV some years before MRDS, and it is open source.



After the digital design was approved, the actual assembly of the robot began... you will have to forgive me... I am a computer science engineer, not an industrial or mechanical engineer, so the look is very "handmade" and the details are not so polished... but hey, it works! It is not that bad :P