Sunday, December 2, 2012

REEM, the humanoid robot


Today I had the chance to see REEM, the humanoid service robot developed by the Spanish company PAL Robotics. The robot will be working as a Dynamic Information Point in a public area in CosmoCaixa (Barcelona) until December 2nd. This robot was designed to guide, inform and entertain people in public environments such as museums, airports and special events. The robot is 1.65 m tall and is equipped with cameras, ultrasonic sensors and laser range finders to localize itself within the environment and to avoid obstacles. The good-looking robot definitely caught the children's attention: they followed it the whole time, making REEM demonstrate its obstacle-avoidance capabilities :).

The REEM robot surrounded by children and visitors of the museum

Thursday, October 4, 2012

Photoconsistency Visual Odometry II

I have been working on new features and modifications for my Photoconsistency Visual Odometry project. In the latest version I have implemented two more C++ classes to estimate the 3D rigid transformation between two RGBD frames. In these new classes, the residuals and Jacobians are computed analytically, significantly improving performance.
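To make that concrete, here is a minimal sketch (illustrative only, not the project's actual code) of the chain rule behind such an analytic Jacobian. For a photometric residual r = I1(x) - I2(pi(T(xi) p)), the 1x6 Jacobian with respect to a small twist xi = (v, w) factors into the image gradient of I2, the pinhole projection Jacobian and the point Jacobian. The helper names below are assumptions; only Eigen is required.

```cpp
#include <Eigen/Core>

// Jacobian of the transformed point p' = exp(xi) * p with respect to the
// twist xi = (v, w), evaluated at xi = 0:  dp'/dxi = [ I | -[p]_x ]  (3x6).
Eigen::Matrix<double, 3, 6> pointJacobian(const Eigen::Vector3d& p)
{
    Eigen::Matrix<double, 3, 6> J;
    J.leftCols<3>().setIdentity();
    J.rightCols<3>() <<    0.0,  p.z(), -p.y(),
                        -p.z(),    0.0,  p.x(),
                         p.y(), -p.x(),    0.0;
    return J;
}

// Jacobian of the pinhole projection pi(p) = (fx*x/z + cx, fy*y/z + cy)
// with respect to the 3D point p (2x3).
Eigen::Matrix<double, 2, 3> projectionJacobian(const Eigen::Vector3d& p,
                                               double fx, double fy)
{
    const double iz = 1.0 / p.z();
    Eigen::Matrix<double, 2, 3> J;
    J << fx * iz,     0.0, -fx * p.x() * iz * iz,
             0.0, fy * iz, -fy * p.y() * iz * iz;
    return J;
}

// Full 1x6 Jacobian of r = I1(x) - I2(pi(T p)), given the image gradient
// of I2 evaluated at the warped pixel location.
Eigen::Matrix<double, 1, 6> residualJacobian(const Eigen::Vector3d& p,
                                             const Eigen::Vector2d& gradI2,
                                             double fx, double fy)
{
    return -gradI2.transpose() * projectionJacobian(p, fx, fy)
                               * pointJacobian(p);
}
```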


Top: resulting 3D map using the new Photoconsistency Visual Odometry implementation. Bottom: visualization of the estimated trajectory over the ground-truth using the CVPR tools.
The changes do not end there: the source code is now organized in two parts, the phovo library, which contains the Photoconsistency Visual Odometry algorithms, and applications that use this library. This way you can choose to build just the phovo library (which only depends on OpenCV, Eigen and OpenMP), or configure the project to compile the provided applications too. Furthermore, I have implemented two new classes to access the Kinect sensor data for online operation.
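As an illustration of what the OpenMP dependency buys: per-pixel work such as accumulating the photometric error parallelizes naturally with an OpenMP reduction. The following is a minimal sketch under those assumptions, not the library's actual code.

```cpp
#include <opencv2/core.hpp>

// Sum of squared photometric residuals between a warped source image and
// the target image, parallelized across rows with OpenMP.
double photometricError(const cv::Mat& warped, const cv::Mat& target)
{
    CV_Assert(warped.size() == target.size() &&
              warped.type() == CV_32FC1 && target.type() == CV_32FC1);

    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (int y = 0; y < warped.rows; ++y)
    {
        const float* w = warped.ptr<float>(y);
        const float* t = target.ptr<float>(y);
        for (int x = 0; x < warped.cols; ++x)
        {
            const double r = static_cast<double>(w[x]) - t[x];
            sum += r * r; // squared photometric residual for this pixel
        }
    }
    return sum;
}
```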

If you want to give it a try, please download the latest code from http://code.google.com/p/photoconsistency-visual-odometry/ and build your own 3D maps with your Kinect sensor. 




Saturday, July 21, 2012

Photoconsistency Visual Odometry

It has been a long time since my last entry, but I have been working hard on a new open source project called Photoconsistency-Visual-Odometry. With this project I wanted to develop an algorithm to estimate the 3DoF/6DoF motion of a Kinect sensor using the depth and intensity information from an RGBD dataset.

Reconstructed 3D global map of an indoor scene using the Photoconsistency Visual Odometry as a first pose approximation and GICP for pose refinement.
The project is licensed under the BSD license and the source code is available in the following SVN repository: http://code.google.com/p/photoconsistency-visual-odometry/

To estimate the rigid transformation, this implementation uses a coarse-to-fine optimization approach, trying to find the warping function that maximizes the photoconsistency between two consecutive RGBD frames at different image scales. That is, the optimizer first computes a pose approximation at a low-resolution image scale, and then uses the estimated solution to initialize the optimization at the next higher resolution, refining the solution.
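A minimal sketch of this coarse-to-fine scheme using OpenCV image pyramids; optimizePose() stands in for the single-scale photoconsistency optimization and is hypothetical.

```cpp
#include <opencv2/imgproc.hpp>
#include <Eigen/Core>
#include <vector>

// Hypothetical per-scale optimizer, defined elsewhere: refines an initial
// pose estimate by maximizing photoconsistency at one image scale.
Eigen::Matrix4d optimizePose(const cv::Mat& source, const cv::Mat& target,
                             const Eigen::Matrix4d& initialPose);

Eigen::Matrix4d estimatePose(const cv::Mat& source, const cv::Mat& target,
                             int numLevels)
{
    // Level 0 holds the full-resolution image; each higher level is
    // downsampled by a factor of two.
    std::vector<cv::Mat> sourcePyramid, targetPyramid;
    cv::buildPyramid(source, sourcePyramid, numLevels);
    cv::buildPyramid(target, targetPyramid, numLevels);

    // Start at the coarsest scale and propagate each solution down to
    // initialize the optimization at the next finer scale.
    Eigen::Matrix4d pose = Eigen::Matrix4d::Identity();
    for (int level = numLevels; level >= 0; --level)
        pose = optimizePose(sourcePyramid[level], targetPyramid[level], pose);
    return pose;
}
```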

Conceptual flow diagram of the Photoconsistency rigid transformation estimation algorithm.
To estimate the visual odometry, the algorithm composes the rigid transformations estimated between each pair of consecutive RGBD frames to compute the global pose of the sensor, as in the sketch below.
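A sketch of that composition step, assuming each pairwise estimate expresses the pose of frame k+1 in the coordinates of frame k (the convention is an assumption).

```cpp
#include <Eigen/Core>
#include <Eigen/StdVector>
#include <vector>

// Fixed-size Eigen types in std::vector need the aligned allocator.
typedef std::vector<Eigen::Matrix4d,
                    Eigen::aligned_allocator<Eigen::Matrix4d> > PoseVector;

// Chain the pairwise rigid transformations into global sensor poses.
PoseVector composeTrajectory(const PoseVector& relativePoses)
{
    PoseVector trajectory;
    Eigen::Matrix4d globalPose = Eigen::Matrix4d::Identity();
    trajectory.push_back(globalPose); // the first frame defines the origin

    for (size_t k = 0; k < relativePoses.size(); ++k)
    {
        globalPose = globalPose * relativePoses[k]; // errors accumulate as drift
        trajectory.push_back(globalPose);
    }
    return trajectory;
}
```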
Estimated trajectory using the implemented Photoconsistency Visual Odometry algorithm (blue) compared to the ground-truth (black).

Estimated trajectory compared to the ground-truth showing the trajectory error. Figure generated using the CVPR tools.
The provided solution has been implemented using the Ceres Solver auto-diff framework. This is a very powerful yet easy-to-use framework for error function optimization that uses dual numbers to compute the Jacobians. The project also uses OpenCV for image processing, and other open source libraries (PCL and MRPT) for the GICP implementation and dataset streaming, respectively.
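To illustrate the pattern with a deliberately trivial residual (not the actual photoconsistency cost): the cost functor is templated on the scalar type so Ceres can evaluate it with dual numbers (ceres::Jet) and obtain the Jacobians automatically.

```cpp
#include <ceres/ceres.h>

// Toy cost functor; a real photoconsistency cost would warp a pixel with
// the 6-DoF pose and compare image intensities instead.
struct ToyResidual
{
    explicit ToyResidual(double observation) : observation_(observation) {}

    template <typename T>
    bool operator()(const T* const pose, T* residual) const
    {
        // Placeholder model: difference between the observation and the
        // first pose parameter.
        residual[0] = T(observation_) - pose[0];
        return true;
    }

    double observation_;
};

int main()
{
    double pose[6] = {0, 0, 0, 0, 0, 0}; // 6-DoF parameter block

    ceres::Problem problem;
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<ToyResidual, 1, 6>( // 1 residual, 6 params
            new ToyResidual(1.0)),
        NULL, pose);

    ceres::Solver::Options options;
    options.minimizer_progress_to_stdout = true;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    return 0;
}
```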

To finish this entry, take a look at one of the videos I have uploaded to YouTube. I hope you like this project and find it useful :).


Monday, June 18, 2012

Open Perception announced



Today the Open Perception non-profit foundation was announced; it will support the development and adoption of BSD-licensed open source software for 2D and 3D processing of sensory data. They think 3D is the future, and I believe it too!

http://www.openperception.org/news/

Tuesday, May 22, 2012

Leap Motion. A new reliable low-cost depth camera

Leap Motion, a company in San Francisco, has just announced a new low-cost depth camera with amazing accuracy (at least for close-range applications). Will this sensor achieve similar accuracy for mid-range applications (i.e., ranges between 1 and 5 meters)? If so, we may have a new sensor to keep in mind for robotic perception, and particularly for SLAM.



For more info, take a look at their website.



Wednesday, March 7, 2012

A new engineer. KinectSLAM6D source code and report

Yesterday I presented my Final Year Project about Kinect SLAM 6D and now I am officially an engineer. It has been a long time since my last entry, but I have been very busy writing the final report, preparing the presentation, etc. These past few months have been very tough, but I have learnt a lot and I am very proud of that, so I want to thank the people who have supported me during this time.


I would like to express my gratitude to my advisors Dr. Javier González and Dr. José Luis Blanco, who have devoted all the needed time and effort to help me overcome the difficulties I encountered, and without whose assistance it would not have been possible to carry out this project.

I am also grateful to all the authors who have generously allowed me to use part of their material to illustrate several pages of my final report. I would also like to thank the researchers who have helped me throughout this project, helping me integrate their algorithms and sharing their ideas.

Last but not least, I feel grateful to my family, who have tirelessly supported me since the beginning; to my friends, who have been with me when I needed them; and finally I feel extremely grateful to Araceli, who has not only endured most of my work hours, but has always encouraged me to go ahead in the hardest moments. Thank you very much.

[Update]

I have received a few emails asking me to release the source code of my KinectSLAM6D project and my Final Year Project report, so I have decided to publish both here so they can be downloaded by everyone.

A 3D map reconstructed in real time with KinectSLAM6D.

I have uploaded the source code to a GitHub repository, which can be accessed here:

https://github.com/MiguelAlgaba/KinectSLAM6D

Main page of the Doxygen documentation of the KinectSLAM6D project.

The code comes with brief Doxygen documentation that describes the project, how to install the software and how to use it. The documentation can be consulted in the "doc" directory: KinectSLAM6D/doc/html/index.html.

For the Final Year Project report, I have decided to share a public link to the PDF document. The only problem is that the whole document is in Spanish, but at least it is well illustrated and, I think, still understandable. The report can be downloaded from the following link:

http://dl.dropbox.com/u/1217405/AlgabaKinectSLAM2012.pdf

I have also decided to publish the slides of the Final Year Project presentation, which can be found in the following public link:

http://dl.dropbox.com/u/1217405/AlgabaKinectSLAM2012_slides.pdf