Abstract

Maintenance is critical to the safety and integrity of infrastructure. The optimal maintenance policy sought in this study minimizes system maintenance cost while satisfying system reliability requirements. Stochastic maintenance scheduling over an infinite horizon has not been investigated thoroughly in the literature. In this work, we formulate maintenance optimization under uncertainty as a Markov Decision Process (MDP) and solve it with a modified Reinforcement Learning method. We propose a Linear Programming-enhanced RollouT (LPRT) approach that handles both constrained deterministic and stochastic maintenance scheduling over an infinite horizon. The novelty of the proposed approach is its suitability for online maintenance scheduling, which can accommodate random unexpected maintenance outcomes and system degradation. The proposed method is demonstrated on numerical examples and compared with several existing methods. Results show that LPRT determines the optimal maintenance policy more efficiently than existing methods of similar accuracy. Parametric studies investigate the effect of uncertainty, subproblem size, and the number of stochastic stages on the final maintenance cost. Limitations and future work are discussed based on the proposed study.
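The MDP formulation and rollout idea referenced above can be illustrated with a minimal sketch. Everything here is invented for illustration: a single component with four condition states, assumed degradation probability and cost values, and a run-to-failure base heuristic. A rollout policy improves on the base policy via one-step lookahead; the paper's LPRT additionally incorporates linear programming to enforce reliability constraints, which this sketch omits.

```python
from functools import lru_cache

# Toy model (assumed values, not the paper's): condition degrades
# 0 (new) -> 3 (failed). Actions: 0 = do nothing, 1 = repair.
STATES = (0, 1, 2, 3)
ACTIONS = (0, 1)
DEGRADE_P = 0.3       # chance of degrading one level per step
REPAIR_COST = 5.0
FAILURE_COST = 50.0   # penalty incurred while in the failed state
GAMMA = 0.9           # discount factor for the infinite horizon

def step_cost(s, a):
    """Immediate cost of taking action a in condition state s."""
    return (REPAIR_COST if a == 1 else 0.0) + (FAILURE_COST if s == 3 else 0.0)

def transitions(s, a):
    """(next_state, probability) pairs under action a."""
    if a == 1:            # repair resets the component to new
        return [(0, 1.0)]
    if s == 3:            # failed state is absorbing until repaired
        return [(3, 1.0)]
    return [(s, 1.0 - DEGRADE_P), (s + 1, DEGRADE_P)]

def base_policy(s):
    """Simple heuristic: repair only once the component has failed."""
    return 1 if s == 3 else 0

@lru_cache(maxsize=None)
def base_cost_to_go(s, depth=30):
    """Truncated expected discounted cost of the base policy from s."""
    if depth == 0:
        return 0.0
    a = base_policy(s)
    return step_cost(s, a) + GAMMA * sum(
        p * base_cost_to_go(s2, depth - 1) for s2, p in transitions(s, a))

def rollout_policy(s):
    """One-step lookahead using the base policy's cost-to-go estimate."""
    def q(a):
        return step_cost(s, a) + GAMMA * sum(
            p * base_cost_to_go(s2) for s2, p in transitions(s, a))
    return min(ACTIONS, key=q)
```

For example, `rollout_policy(3)` chooses to repair (paying 5.0 beats remaining in the failed state at 50.0 per step), while `rollout_policy(0)` leaves a new component alone. Because rollout evaluates each action online from the current state, the same machinery accommodates states revealed at run time, which is the sense in which rollout-based scheduling suits online use.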
