Vision-based Autonomous Land Vehicle Branch Planning
An Autonomous Land Vehicle (ALV) is a mobile robot capable of moving outdoors in the real world without human intervention. It is an intelligent system that integrates environment perception, dynamic decision-making and planning, and behavior control and execution. This paper briefly introduces the architecture of an ALV system and its navigation, and focuses mainly on the planning algorithm used when the vehicle is at a branch point.

An ALV can autonomously perceive and understand its environment, plan its own path, control its motion, complete pre-assigned tasks, and move toward objects, and it is capable of self-learning and self-adaptation. ALV research draws on real-time image processing, computer vision, sensor technology, artificial intelligence, automatic control, parallel processing, satellite navigation, and other fields, and it incorporates the latest results from information science and artificial intelligence. ALVs are now applied in many civilian and military areas. Civilian applications include smart automobiles, automated delivery systems, autonomous farming, and autonomous mining. Military applications include battlefield reconnaissance, troop re-supply, and medical evacuation of injured personnel. ALVs promise many applications in tasks that are dangerous, difficult, or error-prone when performed by humans.

The architecture of an ALV mainly comprises a sensor system, a positioning system, data fusion, path planning, vehicle control, and some special-purpose modules. The sensor system perceives the surrounding environment and usually includes a CCD camera, laser radar, ultrasonic sensors, and so on; it may also contain sensor data processing and multi-sensor data fusion models. The positioning system determines the robot's position and attitude and mainly consists of an Inertial Navigation System (INS) and a Global Positioning System (GPS); recently, research on positioning robots using natural scenes has made some progress. Data fusion combines the information provided by the different sensors, mainly road-edge information, obstacle information from the radar, depth maps from stereo vision, and position information from INS/GPS. Vehicle control receives path data from the planning module and drives the vehicle along the path. Special modules are designed for particular tasks, for example reconnaissance.

Depending on the environment, an ALV operates in two situations: road tracing and cross-country. In the road-tracing situation, images are captured by a CCD camera and edge-detection methods are used to detect the road edges; after calibration, the road edges are obtained in the vehicle coordinate system. Laser radar performs very well in road environments, so it is used to detect obstacles. Both the road-edge and obstacle information are passed to the planning module, which plans a path that follows the road and avoids obstacles. In the cross-country situation, several key points are provided in advance and GPS is used to position the ALV; while the vehicle is moving, a destination point is given for navigation, and stereo vision, which provides depth information about the environment, is used to detect obstacles. Path planning is one of the most important and most difficult aspects of ALV research, especially when the vehicle is at a branch point.
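The data flow described above can be summarized in a short sketch of one control cycle. This is only an illustrative outline under assumed interfaces; the names (Mode, WorldModel, detect_road_edges, detect_obstacles_lidar, plan, follow, and so on) are hypothetical and not taken from the paper.

```python
# Illustrative sketch of one perception-fusion-planning-control cycle of the
# ALV architecture described above. All class and method names are assumed
# for illustration; they are not the authors' implementation.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class Mode(Enum):
    ROAD_TRACING = 1    # follow detected road edges, laser radar for obstacles
    CROSS_COUNTRY = 2   # GPS key points, stereo vision for obstacles


@dataclass
class WorldModel:
    pose: Tuple[float, float, float]                                      # (x, y, heading) from INS/GPS
    road_edges: List[Tuple[float, float]] = field(default_factory=list)   # points in the vehicle frame
    obstacles: List[Tuple[float, float]] = field(default_factory=list)    # points in the vehicle frame


def control_cycle(mode, sensors, planner, controller, key_points):
    """Run one cycle: perceive, fuse, plan, and command the vehicle."""
    pose = sensors.read_ins_gps()                      # position and attitude

    if mode is Mode.ROAD_TRACING:
        image = sensors.capture_image()                # CCD camera frame
        edges = sensors.detect_road_edges(image)       # edge detection + calibration
        obstacles = sensors.detect_obstacles_lidar()   # laser radar works well on roads
    else:
        edges = []                                     # no road edges cross-country
        obstacles = sensors.detect_obstacles_stereo()  # depth map from stereo vision

    world = WorldModel(pose=pose, road_edges=edges, obstacles=obstacles)

    # The planner fuses road edges, obstacles and pose into a local path;
    # in cross-country mode it heads toward the next pre-given key point.
    path = planner.plan(world, key_points)
    controller.follow(path)                            # vehicle control tracks the path
```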
Because the camera used to detect the road edges is fixed on the vehicle, the ALV cannot obtain complete edge information at a branch. Many researchers therefore use three key points provided by the global plan to fit a Bezier curve describing the path, without any road information, and control the vehicle with GPS data acquired in real time. This approach has potential problems, for example when the GPS signal is disturbed by electromagnetic interference.

We examined images captured at several platform angles (0, 10, 20, 30 degrees, and so on). These images show that different platform angles give the camera different fields of view, and the road-edge information can differ greatly: in some images both edges can be found, in some only one, and in some none at all. Therefore, when the vehicle is at a branch point, the platform should be turned so that the camera can see both road edges. But by what angle should the platform be turned? Based on many experiments, we derive a formula to calculate this angle. When the vehicle begins to enter the branch state, the captured image no longer contains good information on both road edges, so the platform must be turned. The road width can be predicted from the last frame, and from the information in the last frame and the current frame the required turning angle can be calculated. Turning the platform step by step in this way, the vehicle reaches the branch point. The platform must then be turned back, and at the end of the branch state it should return to its original position so that the normal road edges can be seen again.

Many experiments show that this method overcomes many shortcomings of the approach that relies only on GPS information and gives better results. Since the method is vision-based, it depends strongly on the road-edge information extracted from the camera images, so further work is needed on better road-detection algorithms to make the road-edge information more reliable and stable.
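The abstract does not state the angle formula itself, only that it uses the road width predicted from the last frame and the information in the current frame. The sketch below therefore only illustrates the control skeleton (turn step by step through the branch, then restore the platform); the geometric angle estimate and all interface names are assumptions for illustration, not the authors' method.

```python
import math

# All names below (estimate_turn_angle, traverse_branch, platform, camera,
# edge_detector, in_branch_state, ...) are illustrative assumptions.

LOOKAHEAD_M = 8.0   # assumed camera look-ahead distance on the ground plane


def estimate_turn_angle(predicted_road_width_m: float,
                        visible_edge_offset_m: float) -> float:
    """Estimate how far (in degrees) to turn the camera platform.

    The abstract only says the angle is computed from the road width
    predicted from the last frame and from the current frame; the simple
    relation used here (angle subtended by the missing half of the road
    at the look-ahead distance) is an assumption, not the authors' formula.
    """
    missing_span = predicted_road_width_m / 2.0 - visible_edge_offset_m
    return math.degrees(math.atan2(missing_span, LOOKAHEAD_M))


def traverse_branch(platform, camera, edge_detector, in_branch_state,
                    max_steps: int = 100) -> None:
    """Turn the platform step by step through the branch, then turn it back.

    `platform` is assumed to expose `angle`, `turn_by()` and `turn_to()`;
    `edge_detector.detect()` returns the road edges found in a frame; and
    `in_branch_state()` reports whether the vehicle is still in the branch.
    """
    original_angle = platform.angle
    predicted_width = None

    for _ in range(max_steps):
        if not in_branch_state():
            break
        edges = edge_detector.detect(camera.capture())
        if edges.both_edges_visible():
            # Good frame: update the predicted road width for later frames.
            predicted_width = edges.road_width()
        elif predicted_width is not None:
            # Only one (or no) edge visible: turn toward the missing edge.
            delta = estimate_turn_angle(predicted_width,
                                        edges.visible_edge_offset())
            platform.turn_by(delta)

    # End of the branch state: restore the platform so the camera again
    # sees the normal road edges.
    platform.turn_to(original_angle)
```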
ALV; Vision-based; Branch Planning; Autonomous Navigation; Platform Control
Haofeng Zhang, Weiming Cheng, Chunxia Zhao, Zhenmin Tang, Jianfeng Lu, Huajun Liu
Computer Science Department, Nanjing University of Sci. & Tech., Nanjing, Jiangsu, China, 210094
International Conference
Proceedings of the First International Conference on Embedded Software and System
Hangzhou
English
559-560
2004-12-09 (date first posted on the Wanfang platform; does not represent the paper's publication date)