Self-supervised Obstacle Detection for Humanoid Navigation Using Monocular Vision and Sparse Laser Data
In this paper, we present an approach to obstacle detection for collision-free, efficient humanoid robot navigation based on monocular images and sparse laser range data. To detect arbitrary obstacles in the surroundings of the robot, we analyze 3D data points obtained from a 2D laser range finder installed in the robot’s head. Relying only on this laser data, however, can be problematic. While walking, the floor close to the robot’s feet is not observable by the laser sensor, which inherently increases the risk of collisions, especially in nonstatic scenes. Furthermore, it is time-consuming to frequently stop walking and tilt the head to obtain reliable information about close obstacles. We therefore present a technique to train obstacle detectors for images obtained from a monocular camera also located in the robot’s head. The training is done online based on sparse laser data in a self-supervised fashion. Our approach projects the obstacles identified from the laser data into the camera image and learns a classifier that considers color and texture information. While the robot is walking, it then applies the learned classifiers to the images to decide which areas are traversable. As we illustrate in experiments with a real humanoid, our approach enables the robot to reliably avoid obstacles during navigation. Furthermore, the results show that our technique leads to significantly more efficient navigation compared to extracting obstacles solely from 3D laser range data acquired while the robot stands still at certain intervals.
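The self-supervised labeling step described in the abstract can be sketched as follows: 3D laser points classified as obstacles are projected into the camera image through a pinhole model, and the pixel colors at those projections become labeled training samples for an image-space classifier. This is a minimal illustration only; the camera intrinsics (`FX`, `FY`, `CX`, `CY`) are hypothetical values, and a 1-nearest-neighbor color lookup stands in for the paper's learned color/texture classifier.

```python
# Hypothetical camera intrinsics for the head-mounted camera
# (placeholder values, not taken from the paper).
FX, FY, CX, CY = 525.0, 525.0, 320.0, 240.0

def project(point_3d):
    """Project a 3D point (camera frame, meters) to pixel coordinates
    using a simple pinhole model; returns None if behind the camera."""
    x, y, z = point_3d
    if z <= 0:
        return None
    u = FX * x / z + CX
    v = FY * y / z + CY
    return (u, v)

def train_classifier(labeled_samples):
    """Store self-supervised samples: a list of ((r, g, b), is_obstacle)
    pairs, harvested at the image locations of projected laser points."""
    return list(labeled_samples)

def classify(classifier, color):
    """Label a pixel color via 1-NN over the stored samples; a simplified
    stand-in for the paper's color/texture classifier."""
    best = min(
        classifier,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], color)),
    )
    return best[1]
```

During walking, new camera frames would be classified pixel-wise with `classify` to mark traversable regions, while fresh laser sweeps (whenever available) replenish the training set online.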
Daniel Maier Maren Bennewitz Cyrill Stachniss
Department of Computer Science, University of Freiburg, Germany
International conference
2011 IEEE International Conference on Robotics and Automation (ICRA 2011)
Shanghai, China
English
pp. 1263-1269
2011-05-09 (date first posted on the Wanfang platform; does not necessarily reflect the paper's publication date)