Multi-agent Differential Graphical Games
Multi-agent systems arise in several domains of engineering and can be used to solve problems that are difficult for an individual agent to solve. Strategies for team decision problems, including optimal control, N-player games (H-infinity control, non-zero-sum games), and so on, are normally solved offline by solving associated matrix equations such as the coupled Riccati equations or coupled Hamilton-Jacobi equations. With that approach, however, players cannot change their objectives online in real time without requiring a completely new offline solution for the new strategies. Therefore, this paper brings together cooperative control, reinforcement learning, and game theory to present a multi-agent formulation for the online solution of team games. The notion of graphical games is developed for dynamical systems, where the dynamics and performance indices for each node depend only on local neighbor information. It is shown that standard definitions of Nash equilibrium are not sufficient for graphical games, and a new definition of Interactive Nash Equilibrium is given. We give a cooperative policy iteration algorithm for graphical games that converges to the best response when the neighbors of each agent do not update their policies, and to the cooperative Nash equilibrium when all agents update their policies simultaneously. This is used to develop methods for online adaptive learning solutions of graphical games in real time.
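The key structural point of the abstract is that each agent's cost and update depend only on its graph neighbors, and that a fixed point of simultaneous local updates is a Nash equilibrium. As a rough, hypothetical illustration of that structure only, the sketch below uses a static, finite-action graphical game with synchronous best-response sweeps; the paper's actual algorithm is a policy iteration for continuous-time dynamical systems with value functions, which this does not implement. All names (ring, local_cost, cost tables) are illustrative assumptions.

```python
# Minimal static analogue (illustration only, not the authors' algorithm):
# agents on a ring graph repeatedly best-respond to the current actions of
# their graph neighbors; a fixed point of the synchronous update is a Nash
# equilibrium of this static graphical game.
import numpy as np

rng = np.random.default_rng(0)

N, A = 5, 3                                                 # 5 agents, 3 actions each
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}    # ring communication graph

# Local cost of agent i depends only on its own action and its neighbors' actions.
cost = {i: rng.random((A,) * (1 + len(ring[i]))) for i in range(N)}

def local_cost(i, a_i, actions):
    """Cost of agent i given its action and the current actions of its neighbors."""
    idx = (a_i,) + tuple(actions[j] for j in ring[i])
    return cost[i][idx]

actions = [0] * N
for sweep in range(100):                                    # synchronous update of all agents
    new = [min(range(A), key=lambda a, i=i: local_cost(i, a, actions)) for i in range(N)]
    if new == actions:                                      # no agent can improve locally
        print(f"Nash equilibrium {actions} reached after {sweep} sweeps")
        break
    actions = new
```

Freezing all agents except one and iterating only that agent's line of the update corresponds to the best-response case described in the abstract; updating all agents simultaneously corresponds to the cooperative case.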
Kyriakos G. Vamvoudakis, F. L. Lewis
Automation and Robotics Research Institute, University of Texas at Arlington, Fort Worth, USA
International conference
The 30th Chinese Control Conference (第三十届中国控制会议)
Yantai
English
1-8
2011-07-01 (date first posted on the Wanfang platform; does not indicate the paper's publication date)