Conference Paper

Grid Resource Selection Based on Reinforcement Learning

Because Grid computing is enabled by an infrastructure that allows users to locate computing resources and data dynamically during a computation, one of the main challenges in Grid computing is the efficient selection of resources for the tasks submitted by users. To locate resources dynamically in a Grid environment, a Grid application consults a broker or matchmaker agent that uses keywords and ontologies to specify Grid services. Moreover, any successful selection mechanism should be highly distributed and robust to the dynamic nature of the Grid environment. However, we believe that keywords and ontologies cannot be defined or interpreted precisely enough to make matchmaking between agents sufficiently robust in a truly distributed, heterogeneous computing environment. To this end, we examine a simple algorithm for distributed resource selection that meets the above requirements. Our system consists of a large number of heterogeneous reinforcement learning agents that share common resources for their computational needs. There is no explicit communication between the agents: the only information an agent receives is a reward, the expected response time of a job it submitted to a particular resource, which serves as a reinforcement signal for the agent. The experiments suggest that reinforcement learning can indeed be used to achieve load-balanced resource selection in the Grid.
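As an informal illustration of the idea described above (not the paper's actual implementation), the following sketch shows independent agents that each keep a value estimate per resource, choose resources epsilon-greedily, and learn only from the observed response time of their own jobs. All class names, parameters, and the load-dependent cost model are assumptions made for the example.

```python
import random


class ResourceSelectionAgent:
    """Illustrative epsilon-greedy learner: each agent keeps an estimated
    value per resource and learns only from observed job response times."""

    def __init__(self, num_resources, epsilon=0.1, alpha=0.2):
        self.epsilon = epsilon                 # exploration rate
        self.alpha = alpha                     # learning-rate (step size)
        self.values = [0.0] * num_resources    # estimated value of each resource

    def select_resource(self):
        # Explore occasionally; otherwise pick the resource with the best estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda r: self.values[r])

    def update(self, resource, response_time):
        # Shorter response time -> larger reward; no communication between agents.
        reward = -response_time
        self.values[resource] += self.alpha * (reward - self.values[resource])


def simulate(num_agents=50, num_resources=5, steps=200):
    """Toy simulation: a resource's response time grows with its current load,
    so independently learning agents tend to spread out (load balancing)."""
    agents = [ResourceSelectionAgent(num_resources) for _ in range(num_agents)]
    choices = []
    for _ in range(steps):
        choices = [a.select_resource() for a in agents]
        load = [choices.count(r) for r in range(num_resources)]
        for agent, r in zip(agents, choices):
            agent.update(r, response_time=1.0 + load[r])  # hypothetical cost model
    return [choices.count(r) for r in range(num_resources)]


if __name__ == "__main__":
    print("final load per resource:", simulate())
```

In this toy setting the only feedback each agent sees is the response time of its own job, mirroring the reward signal described in the abstract; the per-resource load typically evens out as learning proceeds.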

grid; multi-agent system; reinforcement learning; resource selection

Zhengli Zhai

Computer Engineering Institute, Qingdao Technological University, Qingdao 266520, P.R. China

International Conference

The 2010 International Conference on Computer Application and System Modeling (ICCASM 2010)

Taiyuan

English

644-647

2010-10-22 (date first posted on the Wanfang platform; not necessarily the publication date of the paper)