Visually Bootstrapped Generalized ICP
This paper reports a novel algorithm for bootstrapping the automatic registration of unstructured 3D point clouds collected using co-registered 3D lidar and omnidirectional camera imagery. Here, we exploit the co-registration of the 3D point cloud with the available camera imagery to associate high-dimensional feature descriptors, such as the scale-invariant feature transform (SIFT) or speeded-up robust features (SURF), with the 3D points. We first establish putative point correspondences in the high-dimensional feature space and then use these correspondences in a random sample consensus (RANSAC) framework to obtain an initial rigid-body transformation that aligns the two scans. This initial transformation is then refined in a generalized iterative closest point (ICP) framework. The proposed method is completely data-driven and does not require any initial guess on the transformation. We present results from a real-world dataset collected by a vehicle equipped with a 3D laser scanner and an omnidirectional camera.
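The bootstrapping pipeline outlined in the abstract (descriptor matching, RANSAC-based rigid alignment, then generalized-ICP refinement) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes SIFT descriptors and their associated 3D lidar points (desc_a, pts3d_a, desc_b, pts3d_b) have already been obtained through the camera-lidar co-registration, the function names and the ratio-test and inlier thresholds are illustrative, and the final generalized-ICP refinement stage is omitted.

```python
import numpy as np
import cv2


def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q via SVD (Kabsch); P, Q are Nx3."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t


def ransac_rigid(P, Q, iters=1000, inlier_thresh=0.5, seed=0):
    """RANSAC over putative 3D-3D correspondences P[i] <-> Q[i] (coordinates in meters)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample for a rigid transform
        R, t = estimate_rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = estimate_rigid_transform(P[best_inliers], Q[best_inliers])  # refit on all inliers
    return R, t, best_inliers


def initial_alignment(desc_a, pts3d_a, desc_b, pts3d_b):
    """Match SIFT descriptors between two scans and RANSAC-fit an initial rigid transform.

    desc_a, desc_b: SIFT descriptors of image keypoints whose corresponding 3D lidar
    points (pts3d_a, pts3d_b, each Nx3) are known via the camera-lidar co-registration.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.8 * m[1].distance]   # Lowe ratio test
    P = pts3d_a[[m.queryIdx for m in good]]
    Q = pts3d_b[[m.trainIdx for m in good]]
    R0, t0, _ = ransac_rigid(P, Q)
    return R0, t0   # seeds the subsequent generalized-ICP refinement (not shown)
```

The returned (R0, t0) would serve only as the data-driven initial guess; the scan-to-scan registration itself is then refined with generalized ICP as described in the abstract.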
Gaurav Pandey, Silvio Savarese, James R. McBride, Ryan M. Eustice
Department of Electrical Engineering & Computer Science, University of Michigan, Ann Arbor, MI 48109; Research and Innovation Center, Ford Motor Company, Dearborn, MI 48124; Department of Naval Architecture & Marine Engineering, University of Michigan, Ann Arbor, MI 48109
Conference type: International conference
Conference: 2011 IEEE International Conference on Robotics and Automation (ICRA 2011)
Location: Shanghai
Language: English
Pages: 2660-2667
Online date: 2011-05-09 (date first posted on the Wanfang platform, not the paper's publication date)