Robot localization in urban environments using omnidirectional vision sensors and partial heterogeneous apriori knowledge
This paper addresses the problem of long-term mobile robot localization in large urban environments using partial a priori knowledge composed of different kinds of images. Typically, GPS is the preferred sensor for outdoor operation. However, GPS-only localization methods suffer significant performance degradation in urban areas, where tall nearby structures obstruct the view of the satellites. In our work, we use omnidirectional vision-based sensors to complement GPS and odometry and provide accurate localization. We also present several novel Monte Carlo Localization optimizations and introduce the concept of online knowledge acquisition and integration, presenting a framework capable of long-term robot localization in real environments. The vision system identifies prominent features in the scene and matches them either with a database of already known georeferenced features (covering the environment only partially and using both directional and omnidirectional images at different resolutions) or with features learned and integrated during the localization process (omnidirectional images only). Results of successful robot localization in the old town of Fermo are presented. The whole architecture also behaves well in long-term experiments, proving suitable for real-life robot applications, with a particular focus on the integration of different knowledge sources.
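The abstract refers to Monte Carlo Localization (MCL), which maintains a set of weighted pose hypotheses updated by odometry and scored against observations. The following is a minimal sketch of one MCL iteration, under stated assumptions: the noise parameters and the `measure_sim` interface (a pose-to-similarity score, e.g. how well observed omnidirectional features match the georeferenced database) are illustrative, not the paper's actual implementation.

```python
import math
import random

def mcl_step(particles, odom, measure_sim, n=None):
    """One Monte Carlo Localization iteration: predict, weight, resample.

    particles   -- list of (x, y, theta) pose hypotheses
    odom        -- (dx, dy, dtheta) odometry increment in the robot frame
    measure_sim -- callable mapping a pose to a similarity score in [0, 1]
                   (illustrative stand-in for matching observed features
                   against a georeferenced feature database)
    """
    n = n or len(particles)
    # Predict: propagate each particle by the odometry, with Gaussian noise
    # (standard deviations here are assumed values for the sketch).
    moved = []
    for x, y, th in particles:
        dx, dy, dth = odom
        x2 = x + dx * math.cos(th) - dy * math.sin(th) + random.gauss(0, 0.05)
        y2 = y + dx * math.sin(th) + dy * math.cos(th) + random.gauss(0, 0.05)
        th2 = th + dth + random.gauss(0, 0.01)
        moved.append((x2, y2, th2))
    # Weight: score each hypothesis against the current observation.
    weights = [max(measure_sim(p), 1e-12) for p in moved]
    total = sum(weights)
    # Resample proportionally to weight.
    return random.choices(moved, weights=[w / total for w in weights], k=n)
```

Iterating this step concentrates the particle set around poses whose predicted observations best match the feature database, which is how the filter recovers the robot's position even with only partial map coverage.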
Emanuele Frontoni Andrea Ascani Adriano Mancini Primo Zingaretti
Dipartimento di Ingegneria Informatica, Gestionale e dell'Automazione - DIIGA, Università Politecnica
International conference
Qingdao
English
428-433
2010-07-15 (date first made available on the Wanfang platform; not necessarily the paper's publication date)