Conference Paper

Visual Odometry Learning for Unmanned Aerial Vehicles

This paper addresses the problem of using visual information to estimate vehicle motion (a.k.a. visual odometry) from a machine learning perspective. The vast majority of current visual odometry algorithms are heavily based on geometry, using a calibrated camera model to recover relative translation (up to scale) and rotation by tracking image features over time. Our method eliminates the need for a parametric model by jointly learning how image structure and vehicle dynamics affect camera motion. This is achieved with a Gaussian Process extension, called Coupled GP, which is trained in a supervised manner to infer the underlying function mapping optical flow to relative translation and rotation. Matched image feature parameters are used as inputs and linear and angular velocities are the outputs in our non-linear multi-task regression problem. We show here that it is possible, using a single uncalibrated camera and establishing a first-order temporal dependency between frames, to jointly estimate not only full 6 DoF motion (along with a full covariance matrix) but also relative scale, a non-trivial problem in monocular configurations. Experiments were performed with imagery collected by an unmanned aerial vehicle (UAV) flying over a deserted area at speeds of 100-120 km/h and altitudes of 80-100 m, a scenario that constitutes a challenge for traditional visual odometry estimators.
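The abstract frames visual odometry as supervised multi-output regression from optical-flow features to linear and angular velocities. The paper provides no code; as a rough, hypothetical illustration of that regression setting only, the sketch below fits a standard Gaussian Process regressor with an RBF kernel independently per output dimension on toy data (the paper's Coupled GP additionally models correlations between outputs, which this simplification omits; `rbf_kernel` and `gp_fit_predict` are names chosen here, not from the paper).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_fit_predict(X_train, Y_train, X_test, noise=1e-2):
    # Standard GP regression posterior, applied independently per output
    # column. NOTE: this is a simplification for illustration; the paper's
    # Coupled GP infers all outputs jointly with cross-output covariance.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y_train))
    mean = Ks @ alpha                       # posterior mean, one column per output
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v**2).sum(0)      # shared predictive variance per query
    return mean, var

# Toy stand-in data: 2-D "flow" features -> 6-D [v_x,v_y,v_z,w_x,w_y,w_z] targets.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
Y = np.column_stack([np.sin(X[:, 0] * (i + 1)) for i in range(6)])
Xq = rng.uniform(-1, 1, size=(5, 2))
mu, var = gp_fit_predict(X, Y, Xq)
print(mu.shape, var.shape)  # (5, 6) (5,)
```

The predictive variance is what gives the method a full uncertainty estimate alongside the 6 DoF motion; here it is a single scalar per query point, whereas the paper reports a full covariance matrix over the outputs.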

Vitor Guizilini, Fabio Ramos

Australian Centre for Field Robotics, School of Information Technologies, University of Sydney, Australia

International Conference

2011 IEEE International Conference on Robotics and Automation (ICRA 2011)

Shanghai

English

pp. 6213-6220

2011-05-09 (date first posted on the Wanfang platform; does not necessarily reflect the paper's publication date)