Bring it to me - Generation of Behavior-Relevant Scene Elements for Interactive Robot Scenarios
Humanoid robots are intended to act and interact in dynamically changing environments in the presence of humans. Current robotic systems are usually able to move in dynamically changing environments owing to inbuilt depth and obstacle sensing. However, for acting in their environment, the internal representation of such systems is usually constructed by hand and known in advance. In contrast, this paper presents a system that dynamically constructs its internal scene representation using a model-based vision approach. This enables our system to approach and grasp objects in a previously unknown scene. We combine standard stereo vision with model-based image fitting techniques for real-time estimation of the position and orientation of objects. The model-based image processing allows for an easy transfer to the internal, dynamic scene representation. For movement generation, we use a task-level whole-body control approach that is coupled with a movement optimization scheme. Furthermore, we present a novel method that constrains the robot to keep certain objects in the field of view (FOV) while moving. We demonstrate the successful interplay between model-based vision, dynamic scene representation, and movement generation by means of several interactive reaching and grasping tasks.
Nils Einecke, Manuel Mühlig, Jens Schmüdderich, Michael Gienger
Honda Research Institute Europe, Carl-Legien-Strasse 30, 63073 Offenbach, Germany; CoR-Lab Research Institute for Cognition and Robotics, Universitätsstr. 25, 33615 Bielefeld, Germany
International conference
2011 IEEE International Conference on Robotics and Automation (ICRA 2011)
Shanghai
English
3415-3422
2011-05-09 (date first posted on the Wanfang platform; not necessarily the publication date)