Conference paper

ONLINE ADAPTIVE CRITIC FLIGHT CONTROL USING APPROXIMATED PLANT DYNAMICS

A relatively new approach to adaptive flight control is the use of reinforcement learning methods such as adaptive critic designs. Controllers that apply reinforcement learning methods learn by interacting with the environment, and their ability to adapt online makes them especially useful in adaptive and reconfigurable flight control systems. This paper focuses on two types of adaptive critic design: one is action dependent, and the other uses an approximation of the plant dynamics. The goal of this paper is to gain insight into the theoretical and practical differences between these two controllers when they are applied in an online environment with changing plant dynamics. To investigate the practical differences, the controllers are implemented for a model of the General Dynamics F-16, and their characteristics are investigated and compared by conducting several experiments in two phases. First, the controllers are trained offline to control the baseline F-16 model; next, the dynamics of the F-16 model are changed online and the controllers have to adapt to the new plant dynamics. The results from the offline experiments show that the controller with the approximated plant dynamics has a higher success ratio when learning to control the baseline F-16 model. The online experiments further show that this controller also outperforms the action-dependent controller in adapting to changed plant dynamics.
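The abstract contrasts an action-dependent critic (which judges state-action pairs directly) with a critic design that differentiates through an approximated model of the plant. The following minimal Python sketch illustrates only the second idea on a toy scalar plant; it is not the paper's implementation, and the plant parameters, model estimates, learning rates, and cost function are hypothetical choices made for illustration.

```python
# Minimal adaptive-critic sketch on a scalar linear plant (illustrative only).
# The actor is updated by backpropagating the critic's value through an
# APPROXIMATED plant model (a_hat, b_hat), mirroring the model-based design
# discussed in the abstract; all numbers below are assumed for the example.
import numpy as np

a_true, b_true = 0.95, 0.10         # true plant: x' = a*x + b*u (unknown to controller)
a_hat, b_hat = 0.90, 0.12           # imperfect approximation of the plant dynamics

gamma = 0.95                        # discount factor
w = 1.0                             # critic weight:  V(x) ~= w * x**2
theta = 0.0                         # actor weight:   u = theta * x
lr_critic, lr_actor = 0.05, 0.01

def cost(x, u):
    return x**2 + 0.1 * u**2        # quadratic one-step cost

x = 1.0
for step in range(500):
    u = theta * x                               # actor output
    x_next = a_true * x + b_true * u            # real plant transition

    # Critic update: reduce the temporal-difference error of V(x).
    v, v_next = w * x**2, w * x_next**2
    td_error = cost(x, u) + gamma * v_next - v
    w += lr_critic * td_error * x**2            # dV(x)/dw = x**2

    # Actor update through the approximated plant dynamics:
    # dJ/du = d(cost)/du + gamma * dV(x')/dx' * dx'/du, with x' from the model.
    x_next_hat = a_hat * x + b_hat * u
    dJ_du = 0.2 * u + gamma * (2 * w * x_next_hat) * b_hat
    theta -= lr_actor * dJ_du * x               # chain rule: du/dtheta = x

    x = x_next
    if abs(x) > 1e3:                            # crude divergence guard
        break

print(f"actor gain theta = {theta:.3f}, critic weight w = {w:.3f}")
```

An action-dependent variant would instead train a critic Q(x, u) and update the actor from the directly available gradient dQ/du, removing the need for the approximated plant model (a_hat, b_hat) above.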

Reinforcement learning; Adaptive flight control; Adaptive critic designs

E. van Kampen, Q.P. Chu, J.A. Mulder

Control and Simulation Division, Faculty of Aerospace Engineering, Delft University of Technology, P.O. Box 5058, 2600 GB Delft, The Netherlands

International conference

2006 International Conference on Machine Learning and Cybernetics (the 5th IEEE International Conference on Machine Learning and Cybernetics)

Dalian

English

256-261

2006-08-13 (date the record was first posted on the Wanfang platform; not necessarily the publication date of the paper)