A deep reinforcement learning paradigm for bridge maintenance policies
Civil infrastructure degrades in performance from the time it is built due to a variety of deteriorating mechanisms, such as ageing, environmental erosion, and natural or man-made hazards. Because the condition of civil infrastructure has a significant impact on national safety, quality of life, and economic health, improving the structural condition of deteriorating infrastructure has been a major concern worldwide. Life-cycle management is widely regarded as a powerful tool for maximizing the cost-effectiveness of maintenance actions. The decision of when and how to conduct maintenance actions is called the maintenance policy. Markov Decision Processes (MDPs) have been employed to derive maintenance policies and have reduced life-cycle costs over the past three to four decades. MDPs can be solved efficiently within the Reinforcement Learning (RL) framework; moreover, deep learning enables RL to scale to decision-making problems that were previously intractable. Deep Reinforcement Learning (DRL) is viewed as an important component in the construction of general Artificial Intelligence systems and has been applied to various engineering tasks involving decision-making or control. However, the effectiveness of DRL has not yet been exploited by the structural maintenance community. Therefore, this study first briefly introduces DRL and then proposes a DRL paradigm for real, complex maintenance tasks, formulating system-element-level bridge maintenance policies.
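To illustrate the paradigm described above, the following is a minimal sketch (not the authors' implementation) of a toy element-level deterioration MDP solved with a small Deep Q-network. All state definitions, transition probabilities, costs, and hyperparameters are hypothetical assumptions chosen only for illustration; the sketch omits refinements such as a separate target network.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_STATES = 5          # condition ratings 0 (new) .. 4 (failed) -- assumed
ACTIONS = [0, 1, 2]   # 0: do nothing, 1: minor repair, 2: replace -- assumed
ACTION_COST = [0.0, 2.0, 10.0]   # hypothetical maintenance costs
FAILURE_PENALTY = 50.0
P_DETERIORATE = 0.3   # assumed per-step probability of dropping one rating

def step(state, action):
    """Stochastic deterioration model; returns (next_state, reward)."""
    if action == 2:                       # replace -> as-new condition
        state = 0
    elif action == 1:                     # minor repair -> improve one rating
        state = max(0, state - 1)
    if random.random() < P_DETERIORATE:   # random deterioration
        state = min(N_STATES - 1, state + 1)
    cost = ACTION_COST[action] + (FAILURE_PENALTY if state == N_STATES - 1 else 0.0)
    return state, -cost                   # reward = negative life-cycle cost

def one_hot(s):
    x = np.zeros(N_STATES, dtype=np.float32)
    x[s] = 1.0
    return x

q_net = nn.Sequential(nn.Linear(N_STATES, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=5000)               # experience replay buffer
gamma, eps = 0.95, 0.1

state = 0
for t in range(20000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.tensor(one_hot(state))).argmax())
    next_state, reward = step(state, action)
    buffer.append((one_hot(state), action, reward, one_hot(next_state)))
    state = next_state

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.tensor(s), torch.tensor(s2)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        q = q_net(s).gather(1, a).squeeze(1)          # Q(s, a)
        with torch.no_grad():
            target = r + gamma * q_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)      # temporal-difference loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Print the greedy maintenance action learned for each condition rating
with torch.no_grad():
    for s in range(N_STATES):
        print(s, int(q_net(torch.tensor(one_hot(s))).argmax()))
```

In this toy setting the learned greedy policy typically prescribes doing nothing for good condition ratings and repairing or replacing as the rating worsens, which is the qualitative behaviour a cost-minimizing maintenance policy is expected to show.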
Structure maintenance policy; random deterioration model; Markov Decision Process (MDP); Deep Reinforcement Learning (DRL); Deep Q-network (DQN)
Shiyin Wei, Hui Li
Key Laboratory of Intelligent Disaster Prevention for Civil Infrastructure, Ministry of Industry and Information Technology, China; School of Civil Engineering, Harbin Institute of Technology, 150090 Harbin, China
International conference
The 7th World Conference on Structural Control and Monitoring (7WCSCM)
Qingdao
English
2306-2311
2018-07-22 (date first posted on the Wanfang platform; does not represent the paper's publication date)