Abstract

Reinforcement learning (RL) has recently been adopted by infrastructure asset management (IAM) researchers to add flexibility under uncertainty to decision-making about preventive actions. However, this relatively recent line of research has not incorporated sources of uncertainty beyond deterioration patterns, such as hazards, nor has it considered managerial aspects of IAM, such as stakeholders' utilities. This paper provides a holistic framework that draws upon recent developments in IAM systems and microworlds, employs RL model training, and treats deterioration, hazards, and cost fluctuations as the main sources of uncertainty while also incorporating managerial aspects into decision-making. Consistent with existing IAM practice, the framework brings flexibility in the face of uncertainty to the IAM decision-making process. Multi-agent RL models based on deep Q-networks and actor-critic models are constructed and trained to take intervention actions on the elements of a real bridge in Indiana over its life cycle. Both models lead to higher expected utilities and lower costs than the optimal maintenance, rehabilitation, and reconstruction (MRR) plans obtained by Monte Carlo simulation and heuristic optimization algorithms. The proposed framework can assist decision-making bodies and managers in the IAM domain in making optimal, updateable, and more realistic decisions based on the updated state of various complex uncertainties in a negligible amount of time.
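To illustrate the kind of decision problem the abstract describes, the sketch below shows a single deep Q-network agent choosing an MRR action for one bridge element under stochastic deterioration and occasional hazard shocks. This is not the authors' implementation: the condition states, transition probabilities, action costs, hazard rates, reward weights, and network sizes are all illustrative assumptions made up for this example.

```python
# Minimal sketch (assumed setup, not the paper's model): DQN agent selecting an
# MRR action each year for one bridge element with a stochastic deterioration
# process and a small random hazard. All numbers below are illustrative.
import random
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["do_nothing", "maintain", "rehabilitate", "reconstruct"]
N_STATES = 5                                   # condition ratings, 0 = new ... 4 = failed (assumed)
ACTION_COST = np.array([0.0, 1.0, 4.0, 10.0])  # illustrative relative costs

def step(state, action, rng):
    """Assumed environment: intervention effect, then deterioration and hazard shocks."""
    if action == 3:                            # reconstruct -> like new
        state = 0
    elif action == 2:                          # rehabilitate -> improve by 2 ratings
        state = max(0, state - 2)
    elif action == 1:                          # maintain -> improve by 1 rating
        state = max(0, state - 1)
    if rng.random() < 0.3:                     # gradual deterioration
        state = min(N_STATES - 1, state + 1)
    if rng.random() < 0.05:                    # hazard event (e.g., flood/earthquake, assumed rate)
        state = min(N_STATES - 1, state + 2)
    reward = -float(ACTION_COST[action] + 3.0 * state)  # assumed utility: cost + condition penalty
    return state, reward

qnet = nn.Sequential(nn.Linear(N_STATES, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
rng = random.Random(0)
gamma, eps = 0.95, 0.2

def encode(s):
    x = torch.zeros(N_STATES)                  # one-hot encoding of the condition state
    x[s] = 1.0
    return x

for episode in range(200):                     # each episode = one 50-year life cycle (assumed)
    s = 0
    for year in range(50):
        if rng.random() < eps:                 # epsilon-greedy exploration
            a = rng.randrange(len(ACTIONS))
        else:
            a = int(qnet(encode(s)).argmax())
        s2, r = step(s, a, rng)
        with torch.no_grad():                  # one-step Q-learning target (no replay buffer, for brevity)
            target = r + gamma * qnet(encode(s2)).max()
        loss = (qnet(encode(s))[a] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        s = s2

print("Greedy action per condition state:",
      {c: ACTIONS[int(qnet(encode(c)).argmax())] for c in range(N_STATES)})
```

In the paper's setting, one such agent would exist per bridge element (multi-agent), with actor-critic models as an alternative to the DQN, and the reward would be built from stakeholders' utilities and fluctuating costs rather than the fixed penalty assumed here.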
