Sunday, September 29, 2013

Structure Sensor by Occipital

When I first saw this project on Kickstarter just a few days ago, I immediately knew it was exactly what I was looking for to keep learning about RGBD sensors and all the very cool stuff we can do with them. Moreover, this product has been engineered from scratch to work with mobile devices like the iPad, or with any other device over USB.



The Structure Sensor is "the world's first 3D sensor for mobile devices". It features a depth sensor that works within a range of 40 cm to 3.5 m, and an onboard battery that provides up to 4 hours of active use. Another very cool thing is that, in addition to the IR structured-light projector, it also emits uniform IR light, which will allow us to capture the scene in infrared. The sensor was originally announced working amazingly well with the iPad and the new iPhone 5S, but Occipital says they will also provide open source drivers for Android, and even for OS X, Linux and Windows.

With all those great features, and the fact that it was designed to be portable, I believe it will be a huge success. I wanted to be one of the first developers/hackers to try this thing out, so I decided to back the project and get access to this amazing new piece of technology. I can't wait for February 2014, when they have promised to deliver the sensor to their Kickstarter backers!


Also, I'm glad to say that this blog reached 15,000 visits today, and I want to thank you all for your interest and for the suggestions you send me by email. I will keep updating it little by little! Happy coding and hacking!

Tuesday, September 24, 2013

Moving photoconsistency-visual-odometry to GitHub

It has been a really long time since I last updated this blog. Today I decided to start moving the photoconsistency-visual-odometry project from its current location on Google Code to GitHub. Of course, I will not just move the code from one place to another: I have started by simplifying the compilation process a little bit, and I will keep updating the project little by little. From now on you will find the new version of the project at the following link. I hope you like it and find it useful!

https://github.com/MiguelAlgaba/photoconsistency-visual-odometry


Sunday, December 2, 2012

REEM, the humanoid robot


Today I had the chance to see REEM, the humanoid service robot developed by the Spanish company PAL Robotics. The robot will be working as a Dynamic Information Point in a public area of CosmoCaixa (Barcelona) until December 2nd. This robot was designed to guide, inform and entertain people in public environments like museums, airports and special events. It is 1.65 m tall and is equipped with cameras, ultrasonic sensors and laser range finders to localize itself within the environment and to avoid obstacles. The good-looking robot definitely caught the children's attention; they followed it around the whole time, making REEM demonstrate its obstacle-avoidance capabilities :).

The REEM robot surrounded by children and visitors of the museum

Thursday, October 4, 2012

Photoconsistency Visual Odometry II

I have been working on new features and modifications for my Photoconsistency Visual Odometry project. In the latest version I have implemented two more C++ classes to estimate the 3D rigid transformation between two RGBD frames. In these new classes, the residuals and Jacobians are computed analytically, which significantly improves performance.
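Just to give an idea of what "analytic" means here, the minimal sketch below applies the chain rule to a single-pixel photometric residual: its 1x6 Jacobian with respect to an se(3) twist is the image gradient times the projection Jacobian times the derivative of the transformed 3D point. The numbers, the intrinsics and the twist parameterization are illustrative assumptions, not the project's actual classes:

```cpp
#include <Eigen/Dense>
#include <iostream>

// Skew-symmetric matrix [p]_x such that [p]_x * v = p.cross(v).
Eigen::Matrix3d skew(const Eigen::Vector3d& p) {
  Eigen::Matrix3d S;
  S <<     0, -p.z(),  p.y(),
       p.z(),      0, -p.x(),
      -p.y(),  p.x(),      0;
  return S;
}

int main() {
  // Illustrative pinhole intrinsics and a 3D point in the camera frame.
  const double fx = 525.0, fy = 525.0;      // focal lengths (pixels), assumed
  const Eigen::Vector3d P(0.5, -0.2, 2.0);  // X, Y, Z (meters), Z > 0
  const double X = P.x(), Y = P.y(), Z = P.z();

  // Image gradient of the warped intensity at the projected pixel.
  // In the real algorithm this is sampled from the second image; here it
  // is just a made-up value so the example runs.
  const Eigen::RowVector2d grad(12.3, -4.7);  // [dI/du, dI/dv]

  // d(projection)/d(3D point): 2x3 Jacobian of u = fx*X/Z + cx, v = fy*Y/Z + cy.
  Eigen::Matrix<double, 2, 3> J_proj;
  J_proj << fx / Z,      0, -fx * X / (Z * Z),
                 0, fy / Z, -fy * Y / (Z * Z);

  // d(transformed point)/d(twist) at identity, twist = [translation, rotation]:
  // perturbing the point by exp(xi) gives dP/dxi = [ I | -[P]_x ].
  Eigen::Matrix<double, 3, 6> J_point;
  J_point.leftCols<3>()  = Eigen::Matrix3d::Identity();
  J_point.rightCols<3>() = -skew(P);

  // Chain rule: 1x6 Jacobian of the photometric residual w.r.t. the twist.
  Eigen::Matrix<double, 1, 6> J = grad * J_proj * J_point;
  std::cout << "J = " << J << std::endl;
  return 0;
}
```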


Top: resulting 3D map using the new Photoconsistency Visual Odometry implementation. Bottom: visualization of the estimated trajectory over the ground-truth using the CVPR tools.
The changes do not end there; the source code is now organized in two parts: the phovo library, which contains the Photoconsistency Visual Odometry algorithms, and the applications that use it. This way you can choose to build just the phovo library (which only depends on OpenCV, Eigen and OpenMP), or configure the project to compile the provided applications as well. Furthermore, I have implemented two new classes to access the Kinect sensor data for online operation.
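The new Kinect access classes are not shown here, but as a rough idea of what online operation involves, this is more or less how RGBD frames can be grabbed with the OpenCV 2.x OpenNI backend (a minimal sketch, not the project's actual code; it requires OpenCV built with OpenNI support and a connected sensor):

```cpp
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main() {
  // Open the Kinect through OpenCV's OpenNI backend (OpenCV 2.x-era API).
  cv::VideoCapture capture(CV_CAP_OPENNI);
  if (!capture.isOpened()) {
    std::cerr << "Could not open the OpenNI capture device." << std::endl;
    return 1;
  }

  cv::Mat depthMap;  // CV_16UC1, depth in millimeters
  cv::Mat bgrImage;  // CV_8UC3, color image

  while (true) {
    if (!capture.grab()) break;
    capture.retrieve(depthMap, CV_CAP_OPENNI_DEPTH_MAP);
    capture.retrieve(bgrImage, CV_CAP_OPENNI_BGR_IMAGE);

    // imshow rescales the 16-bit depth for display.
    cv::imshow("depth", depthMap);
    cv::imshow("rgb", bgrImage);
    if (cv::waitKey(30) == 27) break;  // stop on ESC
  }
  return 0;
}
```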

If you want to give it a try, please download the latest code from http://code.google.com/p/photoconsistency-visual-odometry/ and build your own 3D maps with your Kinect sensor. 




Saturday, July 21, 2012

Photoconsistency Visual Odometry

It has been a long time since my last entry, but I have been working hard on a new open source project called Photoconsistency-Visual-Odometry. With this project I wanted to develop an algorithm to estimate the 3DoF/6DoF motion of a Kinect sensor using the depth and intensity information from an RGBD dataset.

Reconstructed 3D global map of an indoor scene using the Photoconsistency Visual Odometry as a first pose approximation and GICP for pose refinement.
The project is licensed under the BSD license and the source code is available in the following SVN repository: http://code.google.com/p/photoconsistency-visual-odometry/

To estimate the rigid transformation, this implementation uses a coarse-to-fine optimization approach, trying to find the warping function that maximizes the photoconsistency between two consecutive RGBD frames at different image scales. That is, the optimizer computes a first pose approximation at a low-resolution image scale, and then uses the estimated solution to initialize the optimization at a higher-resolution scale to refine the solution.
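In code, the coarse-to-fine scheme looks roughly like the sketch below. Note that optimizeAtScale is a hypothetical placeholder for the per-scale photoconsistency optimization, not a function from the actual project:

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <Eigen/Dense>
#include <vector>

// Hypothetical placeholder for the per-scale optimization; the real
// implementation would refine the pose by minimizing the photometric
// error between the warped frames. Here it just returns its input so
// that the sketch compiles.
Eigen::Matrix4d optimizeAtScale(const cv::Mat& gray0, const cv::Mat& gray1,
                                const cv::Mat& depth0,
                                const Eigen::Matrix4d& initialPose) {
  return initialPose;
}

// Coarse-to-fine estimation: solve at the coarsest pyramid level first and
// use each solution to initialize the next (finer) level.
Eigen::Matrix4d coarseToFine(const cv::Mat& gray0, const cv::Mat& gray1,
                             const cv::Mat& depth0, int numScales) {
  std::vector<cv::Mat> pyr0(numScales), pyr1(numScales), pyrD(numScales);
  pyr0[0] = gray0; pyr1[0] = gray1; pyrD[0] = depth0;  // level 0 = full res
  for (int s = 1; s < numScales; ++s) {
    cv::pyrDown(pyr0[s - 1], pyr0[s]);
    cv::pyrDown(pyr1[s - 1], pyr1[s]);
    cv::pyrDown(pyrD[s - 1], pyrD[s]);  // note: naive depth downsampling
  }

  Eigen::Matrix4d pose = Eigen::Matrix4d::Identity();
  for (int s = numScales - 1; s >= 0; --s) {
    pose = optimizeAtScale(pyr0[s], pyr1[s], pyrD[s], pose);
  }
  return pose;
}
```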

Conceptual flow diagram of the Photoconsistency rigid transformation estimation algorithm.
To estimate the visual odometry, the algorithm composes the estimated rigid transformations between consecutive pairs of RGBD frames to compute the global pose of the sensor, as sketched below.
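A minimal sketch of that composition with Eigen, assuming each relative transform maps frame i into frame i-1 (the actual convention in the code may differ):

```cpp
#include <Eigen/Dense>
#include <Eigen/StdVector>  // aligned_allocator for fixed-size Eigen types
#include <vector>

typedef std::vector<Eigen::Matrix4d,
                    Eigen::aligned_allocator<Eigen::Matrix4d> > PoseVector;

// Compose per-frame relative transforms into global sensor poses, taking
// the first frame as the world origin.
PoseVector composeTrajectory(const PoseVector& relativeTransforms) {
  PoseVector globalPoses;
  Eigen::Matrix4d pose = Eigen::Matrix4d::Identity();
  globalPoses.push_back(pose);
  for (size_t i = 0; i < relativeTransforms.size(); ++i) {
    pose = pose * relativeTransforms[i];  // accumulate the motion
    globalPoses.push_back(pose);
  }
  return globalPoses;
}
```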
Estimated trajectory using the implemented Photoconsistency Visual Odometry algorithm (blue) compared to the ground-truth (black).

Estimated trajectory compared to the ground-truth showing the trajectory error. Figure generated using the CVPR tools.
The provided solution has been implemented using the Ceres Solver auto-diff framework. This is a very powerful yet easy-to-use framework for error function optimization that uses dual numbers to compute the Jacobians. The project also uses OpenCV for image processing, and other open source libraries (PCL and MRPT) for the GICP implementation and dataset streaming, respectively.
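As a rough illustration of the auto-diff mechanism (a toy residual, not the project's actual photoconsistency cost), a Ceres functor only needs a templated operator(); Ceres instantiates it with dual numbers (ceres::Jet) to obtain exact Jacobians without any hand-written derivatives:

```cpp
#include <ceres/ceres.h>
#include <iostream>

// Toy residual: fit x so that x^2 matches an observed value. Ceres
// evaluates operator() with T = ceres::Jet to compute the derivatives.
struct ToyResidual {
  explicit ToyResidual(double observed) : observed_(observed) {}

  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(observed_) - x[0] * x[0];
    return true;
  }

  const double observed_;
};

int main() {
  double x = 1.0;  // initial guess
  ceres::Problem problem;
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<ToyResidual, 1, 1>(new ToyResidual(9.0)),
      NULL, &x);

  ceres::Solver::Options options;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << "x = " << x << std::endl;  // converges to 3
  return 0;
}
```

The photoconsistency cost plugs into this same mechanism, just with the warped-intensity lookup happening inside operator().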

To finish this entry, take a look at one of the videos I have uploaded to YouTube. I hope you like this project and find it useful :).


Monday, June 18, 2012

Open Perception announced



Today the Open Perception non-profit foundation was announced. It will support the development and adoption of BSD-licensed open source software for 2D and 3D processing of sensory data. They think 3D is the future; I believe it too!

http://www.openperception.org/news/

Tuesday, May 22, 2012

Leap Motion: the new reliable low-cost depth camera

Leap Motion, a company based in San Francisco, has just announced a new low-cost depth camera with amazing accuracy (at least for close-range applications). Will this sensor achieve similar accuracy for mid-range applications (i.e. ranges between 1 and 5 meters)? If so, we would have a new sensor to keep in mind for robotic perception, and particularly for SLAM.



For more info, take a look at their website at the following link: