Approximate Finite-horizon Optimal Control with Policy Iteration
In this paper, the policy iteration algorithm for finite-horizon optimal control of continuous-time systems is addressed. The finite-horizon optimal control problem with input constraints is formulated as a Hamilton-Jacobi-Bellman (HJB) equation by means of a suitable nonquadratic functional. The value function of the HJB equation is obtained by solving a sequence of cost functions satisfying the generalized HJB (GHJB) equations through policy iteration. The convergence of the policy iteration algorithm is proved, and the admissibility of each iterative policy is discussed. Using the least-squares method with neural network (NN) approximation of the cost function, the approximate solution of the GHJB equation converges uniformly to that of the HJB equation. A numerical example is given to illustrate the result.
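For context, a minimal sketch of the constrained-input finite-horizon setting the abstract refers to, written in the usual nonquadratic-functional form; the specific symbols (f, g, Q, R, λ, φ) are assumed here for illustration and are not taken from the paper itself. The system and cost are
\[
\dot{x} = f(x) + g(x)u, \qquad |u_i| \le \lambda,
\]
\[
V(x(t),t) = \phi(x(T)) + \int_t^T \big( Q(x) + W(u) \big)\, d\tau, \qquad
W(u) = 2 \int_0^u \lambda \tanh^{-1}(v/\lambda)^{T} R \, dv,
\]
where the nonquadratic term W(u) keeps the minimizing control within the bound. The finite-horizon HJB equation and its constrained minimizer are
\[
-\frac{\partial V^*}{\partial t} = \min_{u} \Big[ Q(x) + W(u) + \Big(\frac{\partial V^*}{\partial x}\Big)^{T} \big( f(x) + g(x)u \big) \Big], \qquad V^*(x(T),T) = \phi(x(T)),
\]
\[
u^*(x,t) = -\lambda \tanh\Big( \tfrac{1}{2\lambda} R^{-1} g(x)^{T} \frac{\partial V^*}{\partial x} \Big).
\]
Policy iteration then alternates policy evaluation, i.e. solving the GHJB equation (linear in V^{(i)}) for the current admissible policy u^{(i)},
\[
\frac{\partial V^{(i)}}{\partial t} + \Big(\frac{\partial V^{(i)}}{\partial x}\Big)^{T} \big( f + g u^{(i)} \big) + Q(x) + W(u^{(i)}) = 0, \qquad V^{(i)}(x(T),T) = \phi(x(T)),
\]
with policy improvement
\[
u^{(i+1)} = -\lambda \tanh\Big( \tfrac{1}{2\lambda} R^{-1} g^{T} \frac{\partial V^{(i)}}{\partial x} \Big).
\]
In the paper each cost function in this sequence is approximated by a neural network whose weights are fitted by least squares, which is what makes the GHJB evaluation step tractable.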
Finite-horizon; Policy Iteration; Input Constraints; Neural Networks Approximation; HJB Equation; Least Squares
ZHAO Zhengen, YANG Ying, LI Hao, LIU Dan
State Key Lab for Turbulence and Complex Systems, Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing 100871, P.R. China
International Conference
The 33rd Chinese Control Conference
Nanjing
English
8895-8900
2014-07-28 (date the paper first appeared online on the Wanfang platform; this does not represent the paper's publication date)