Real-Time Fusion of Multimodal Tracking Data and Generalization of Motion Patterns for Trajectory Prediction
A sensor-based model of a service robot's environment is a prerequisite for interaction. Such a model should contain the positions of the robot's interaction partners. Many applications require this knowledge in real-time; it could, for example, be used to realize efficient path planning for delivery tasks. In addition to the partners' current positions, it is important for the service robot to predict their possible future positions. In this paper we propose an extensible framework that combines different sensor modalities in a general real-time tracking system. As an example, a tracking system is implemented that fuses tracking algorithms operating on laser range scans and on camera images by means of a particle filter. Furthermore, human trajectories are predicted by deducing them from learned motion patterns. The observed trajectories are generalized to trajectory patterns by a novel method based on Self Organizing Maps. These patterns are used to predict the trajectories of the currently observed persons. Practical experiments show that multimodality increases the system's robustness against incorrect measurements of single sensors. It is also demonstrated that a Self Organizing Map is suitable for learning and generalizing trajectories. Suitable predictions of future trajectories, deduced from these generalizations, are presented.
Martin Weser, Daniel Westhoff, Markus Hüser, Jianwei Zhang
Institute Technical Aspects of Multimodal Systems, Dept. of Informatics, University of Hamburg, Hamburg, Germany
International conference
2006 IEEE International Conference on Information Acquisition
Weihai, Shandong, China
English
786-791
2006-08-20 (date the record first appeared on the Wanfang platform; not the paper's publication date)
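As a rough illustration of the fusion step described in the abstract, the following Python sketch shows how per-particle likelihoods from a laser-based and a camera-based tracker could be combined in a particle filter. The constant-velocity motion model, the function names, and all parameters are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def predict(particles, dt=0.1, noise=0.05):
        # Propagate (x, y, vx, vy) particles with an assumed
        # constant-velocity motion model plus Gaussian process noise.
        particles[:, 0:2] += particles[:, 2:4] * dt
        particles += np.random.normal(0.0, noise, particles.shape)
        return particles

    def update(particles, weights, laser_likelihood, camera_likelihood):
        # Multimodal fusion: each sensor contributes a per-particle
        # likelihood; multiplying them weights particles by their
        # agreement with both modalities at once.
        weights = weights * laser_likelihood(particles) * camera_likelihood(particles)
        weights += 1e-300              # guard against total weight collapse
        return weights / weights.sum()

    def resample(particles, weights):
        # Multinomial resampling when the effective sample size is low.
        n = len(particles)
        if 1.0 / np.sum(weights ** 2) < n / 2:
            idx = np.random.choice(n, n, p=weights)
            particles = particles[idx]
            weights = np.full(n, 1.0 / n)
        return particles, weights

The trajectory generalization could likewise be sketched as a one-dimensional Self Organizing Map whose nodes are prototype trajectories; the grid size, the learning-rate schedule, and the resampling of all trajectories to a common length are again assumptions made for illustration only.

    def train_som(trajectories, n_nodes=10, epochs=50, lr0=0.5, sigma0=3.0):
        # trajectories: list of (L, 2) arrays of (x, y) points, all
        # resampled to the same length L; each node holds one prototype.
        L = trajectories[0].shape[0]
        nodes = np.random.randn(n_nodes, L, 2)
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)               # decaying learning rate
            sigma = sigma0 * (1.0 - epoch / epochs) + 1e-6  # shrinking neighborhood
            for traj in trajectories:
                # Best-matching unit: prototype closest to the sample.
                bmu = np.argmin(np.sum((nodes - traj) ** 2, axis=(1, 2)))
                for j in range(n_nodes):
                    # Pull the BMU and its grid neighbors toward the sample.
                    h = np.exp(-((j - bmu) ** 2) / (2.0 * sigma ** 2))
                    nodes[j] += lr * h * (traj - nodes[j])
        return nodes

    def predict_remainder(nodes, observed_prefix):
        # Prediction: match the observed partial trajectory against each
        # prototype's prefix and return the best match's continuation.
        k = observed_prefix.shape[0]
        d = np.sum((nodes[:, :k, :] - observed_prefix) ** 2, axis=(1, 2))
        return nodes[np.argmin(d), k:, :]

In this reading, each node's prototype acts as a generalized motion pattern: once a person's observed path prefix selects a prototype, the remainder of that prototype serves as the predicted future trajectory.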