Abstract

Developing optimal rail renewal and maintenance plans that minimize long-term costs and risks of failure is of paramount importance for the railroad industry. However, intrinsic uncertainty, the presence of constraints, and the curse of dimensionality make this a challenging engineering problem. Despite the potential of Deep Reinforcement Learning (DRL), research on employing DRL methods for renewal and maintenance planning remains very limited. Inspired by recent advances in DRL, we develop a DRL-based approach that optimizes renewal and maintenance planning over a planning horizon, balancing cost-effectiveness and risk reduction. We consider both predictive and condition-based maintenance tasks and incorporate time, resource, and related engineering constraints into the model to capture realistic features of the problem. Available historical inspection and maintenance data are used to simulate the rail environment and feed the DRL method. A Double Deep Q-Network (DDQN) is applied to cope with the uncertainty of the environment. In addition, prioritized replay memory is employed, which improves learning by assigning higher weight to important experiences of the agent. The proposed DDQN approach is applied to a Class I railroad network to demonstrate the applicability and efficiency of the approach. Our analyses demonstrate that the proposed approach develops an optimal policy that not only reduces budget consumption but also improves the reliability and safety of the network.
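To make the two components named in the abstract concrete, the sketch below shows the standard Double DQN target (online network selects the action, target network evaluates it) and proportional prioritized sampling with importance-sampling weights, in plain NumPy. This is a minimal illustration under assumed placeholder names (Q_online, Q_target, toy priorities), not the authors' implementation.

# Illustrative sketch of DDQN targets and prioritized replay sampling.
# Q_online / Q_target are stand-ins for the two networks; data are toy values.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 3, 0.99
Q_online = rng.normal(size=(n_states, n_actions))   # online net: selects actions
Q_target = rng.normal(size=(n_states, n_actions))   # target net: evaluates them

def ddqn_target(reward, next_state, done):
    """Double DQN decouples action selection from evaluation to reduce
    overestimation: the online net picks a*, the target net scores it."""
    a_star = int(np.argmax(Q_online[next_state]))
    return reward + (0.0 if done else gamma * Q_target[next_state, a_star])

print(ddqn_target(1.0, 2, False))                    # one-step bootstrapped target

# Prioritized replay: sample transitions in proportion to |TD error|^alpha,
# then correct the induced bias with normalized importance-sampling weights.
priorities = np.abs(rng.normal(size=100)) + 1e-6     # |TD error| per stored transition
alpha, beta = 0.6, 0.4                               # common hyperparameter choices
probs = priorities**alpha / np.sum(priorities**alpha)
idx = rng.choice(len(priorities), size=32, p=probs)  # minibatch of "important" samples
weights = (len(priorities) * probs[idx]) ** (-beta)
weights /= weights.max()                             # weights applied to the TD loss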
