Distributed, Heterogeneous, Multi-Agent Social Coordination via Reinforcement Learning
Multi-agent systems are becoming increasingly popular across problem domains that benefit from increased parallelism, system robustness, and scalability, ranging from search and rescue to investment management. Multi-agent systems research studies how multiple agents coordinate with one another to maximize a team goal or their own individual reward. Coordination achieved through learning offers a significant advantage over explicit modeling methods, especially as tasks become more complex and environments more dynamic. Because social primates such as chimpanzees form a highly successful multi-agent system that uses learning to adapt flexibly to changing social and environmental conditions, we aim to simulate their social cognition and behavior. This paper presents a foraging task for studying how multiple agents can use reinforcement learning to coordinate as a group under social constraints while also trying to maximize their own reward. Each distributed, heterogeneous agent uses the WoLF-PHC algorithm, and with no communication, the agents learn to select the best foraging patch based on the behavior of others through the Win-or-Learn-Fast heuristic. The simulation results demonstrate that the agents behave in a manner similar to the natural social behavior of chimpanzees, showing that we have a working model system for studying more complex chimpanzee social behavior in the future.
Dongqing Shi Michael Z. Sauter Jerald D. Kralik
Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
International conference
2009 IEEE International Conference on Robotics and Biomimetics (ROBIO 2009)
Guilin
English
653-658
2009-12-19 (date first posted on the Wanfang platform; does not necessarily reflect the paper's publication date)
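The abstract's key mechanism is the WoLF-PHC algorithm: each agent runs Q-learning plus policy hill-climbing, stepping its policy toward the greedy action with a small step when "winning" (current policy outperforms its historical average) and a large step when "losing". The sketch below is a minimal, generic WoLF-PHC agent under those standard definitions (Bowling and Veloso's formulation); the class name, parameter values, and tabular setup are illustrative assumptions, not details from the paper.

```python
import random

class WoLFPHCAgent:
    """Minimal tabular WoLF-PHC sketch (illustrative, not the paper's implementation)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.nA = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.dw, self.dl = delta_win, delta_lose          # win/lose step sizes
        self.Q = [[0.0] * n_actions for _ in range(n_states)]
        self.pi = [[1.0 / n_actions] * n_actions for _ in range(n_states)]
        self.avg_pi = [[1.0 / n_actions] * n_actions for _ in range(n_states)]
        self.count = [0] * n_states                       # visits per state

    def act(self, s):
        # Sample an action from the current mixed policy pi(s, .).
        r, cum = random.random(), 0.0
        for a, p in enumerate(self.pi[s]):
            cum += p
            if r <= cum:
                return a
        return self.nA - 1

    def update(self, s, a, reward, s2):
        # Standard Q-learning update.
        self.Q[s][a] += self.alpha * (
            reward + self.gamma * max(self.Q[s2]) - self.Q[s][a])
        # Incrementally track the average policy seen so far in state s.
        self.count[s] += 1
        for b in range(self.nA):
            self.avg_pi[s][b] += (self.pi[s][b] - self.avg_pi[s][b]) / self.count[s]
        # Win or Learn Fast: compare expected value of the current policy
        # against the average policy; learn cautiously when winning,
        # quickly when losing.
        winning = sum(p * q for p, q in zip(self.pi[s], self.Q[s])) >= \
                  sum(p * q for p, q in zip(self.avg_pi[s], self.Q[s]))
        delta = self.dw if winning else self.dl
        # Hill-climb: shift probability mass toward the greedy action.
        best = max(range(self.nA), key=lambda b: self.Q[s][b])
        for b in range(self.nA):
            step = delta if b == best else -delta / (self.nA - 1)
            self.pi[s][b] = min(1.0, max(0.0, self.pi[s][b] + step))
        z = sum(self.pi[s])                               # renormalize
        self.pi[s] = [p / z for p in self.pi[s]]
```

In a foraging setting like the one described, each agent would treat the observable configuration of patches and other agents as the state, so the variable learning rate lets an agent hold a good patch choice steady while quickly abandoning one that stops paying off.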