|Reconstructed 3D global map of an indoor scene using the Photoconsistency Visual Odometry as a first pose approximation and GICP for pose refinement.|
To estimate the rigid transformation, this implementation uses a coarse-to-fine optimization approach, searching for the warping function that maximizes the photoconsistency between two consecutive RGB-D frames at several image scales. The optimizer first computes a pose approximation at a low-resolution image scale, and then uses that estimate to initialize the optimization at the next, higher-resolution scale to refine the solution.
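The coarse-to-fine idea can be illustrated with a minimal sketch. Note the simplifying assumptions: instead of the full 6-DoF rigid warp and the gradient-based optimizer used in the actual implementation, this toy version estimates only a 2D translation between two images, using a brute-force search over a small window at each pyramid level. The function names (`downsample`, `coarse_to_fine_shift`, etc.) are illustrative, not part of the real code base.

```python
import numpy as np

def downsample(img):
    # Halve the resolution by averaging 2x2 pixel blocks.
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def photoconsistency_error(ref, cur, dx, dy):
    # Sum of squared intensity differences after shifting `cur` by (dx, dy).
    shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
    return np.sum((ref - shifted) ** 2)

def coarse_to_fine_shift(ref, cur, levels=3, radius=2):
    # Build image pyramids (index 0 = finest scale).
    pyr_ref, pyr_cur = [ref], [cur]
    for _ in range(levels - 1):
        pyr_ref.append(downsample(pyr_ref[-1]))
        pyr_cur.append(downsample(pyr_cur[-1]))
    dx = dy = 0
    # Start at the coarsest level; propagate and refine the estimate upward.
    for level in reversed(range(levels)):
        if level != levels - 1:
            # Upsample the previous estimate to the current (finer) scale.
            dx, dy = 2 * dx, 2 * dy
        best = (np.inf, dx, dy)
        for ddx in range(-radius, radius + 1):
            for ddy in range(-radius, radius + 1):
                e = photoconsistency_error(pyr_ref[level], pyr_cur[level],
                                           dx + ddx, dy + ddy)
                if e < best[0]:
                    best = (e, dx + ddx, dy + ddy)
        _, dx, dy = best
    return dx, dy
```

At the coarsest scale even a large displacement falls within the small search radius; each finer level then only needs to correct the doubled estimate by a few pixels, which is what makes the coarse-to-fine strategy efficient.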
|Conceptual flow diagram of the Photoconsistency rigid transformation estimation algorithm.|
|Estimated trajectory using the implemented Photoconsistency Visual Odometry algorithm (blue) compared to the ground-truth (black).|
|Estimated trajectory compared to the ground-truth showing the trajectory error. Figure generated using the CVPR tools.|
To finish this entry, take a look at one of the videos I have uploaded to YouTube. I hope you like this project and find it useful :).