Scenario and context specific visual robot behavior learning
The design of visual robotic behaviors constitutes a substantial challenge. It requires drawing meaningful relationships and constraints between the acquired visual perception and the geometry of the environment, both empirically and programmatically. This contribution proposes a novel robot learning framework to classify and acquire scenario-specific autonomous behaviors through demonstration. During demonstration, robocentric 3D range and omnidirectional images are recorded as training instances of typical robot navigation situations pertaining to different contexts in multiple indoor scenarios. A programming-by-demonstration approach generalizes the demonstrated trajectories into a general mapping from visual features extracted from the omnidirectional image onto a corresponding robot motion. The approach is able to distinguish among different traversing scenarios and further identifies the best matching context within the scenario to predict an appropriate robot motion. For comparison with context matching, the behaviors are also trained by means of an artificial neural network, and its generalization ability is evaluated against the former. The experimental validation on the mobile robot indicates that the acquired visual behavior is robust and generalizes meaningful actions beyond the specific environments and scenarios presented during training.
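The paper itself details the feature extraction and matching procedure; purely as an illustration of the context-matching idea described in the abstract, the sketch below shows one plausible reading: demonstrated pairs of omnidirectional image features and motion commands are grouped by scenario, and at runtime the best-matching stored context (here simply the nearest feature vector) supplies the predicted motion. All names, the feature dimension, and the distance metric are assumptions for this sketch and are not taken from the paper.

# Hypothetical illustration of context matching for demonstrated visual behaviors.
# The grouping by scenario, the Euclidean distance, and all identifiers are assumptions.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Demonstration:
    features: np.ndarray   # visual features extracted from the omnidirectional image
    motion: np.ndarray     # corresponding robot motion (e.g. translational/angular velocity)

@dataclass
class ScenarioModel:
    name: str
    demos: list[Demonstration] = field(default_factory=list)

    def best_context(self, query: np.ndarray) -> tuple[Demonstration, float]:
        # Return the stored context closest to the query feature vector.
        dists = [np.linalg.norm(d.features - query) for d in self.demos]
        i = int(np.argmin(dists))
        return self.demos[i], dists[i]

def predict_motion(scenarios: list[ScenarioModel], query: np.ndarray) -> np.ndarray:
    # Pick the scenario whose best-matching context is nearest, then return its motion.
    best_demo, _ = min((s.best_context(query) for s in scenarios), key=lambda dm: dm[1])
    return best_demo.motion

# Usage with two toy scenarios and 4-dimensional feature vectors (invented values).
corridor = ScenarioModel("corridor", [Demonstration(np.array([0.9, 0.1, 0.1, 0.9]),
                                                    np.array([0.4, 0.0]))])
doorway = ScenarioModel("doorway", [Demonstration(np.array([0.2, 0.8, 0.7, 0.3]),
                                                  np.array([0.2, 0.3]))])
print(predict_motion([corridor, doorway], np.array([0.85, 0.15, 0.2, 0.8])))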
Krishna Kumar Narayanan, Luis Felipe Posada, Frank Hoffmann, Torsten Bertram
Institute of Control Theory and Systems Engineering, Technische Universität Dortmund, 44227 Dortmund, Germany
International conference
2011 IEEE International Conference on Robotics and Automation (ICRA 2011)
Shanghai
English
1180-1185
2011-05-09 (date the record first went online on the Wanfang platform; not necessarily the publication date of the paper)