Abstract

Scheduling maintenance tasks for a deteriorating system has typically relied on degradation models. In practice, however, the formula governing a system's degradation process is often unknown and difficult to determine. In this study, we develop a model-based reinforcement learning approach for maintenance optimization. The approach determines a maintenance action for each degradation state at each inspection time over a finite planning horizon, whether the degradation formula is known or unknown. At each inspection time, the approach learns an optimal assessment value for each maintenance action that can be performed in each degradation state. The assessment value quantifies the goodness of each state-action pair in terms of minimizing the accumulated maintenance cost over the planning horizon. When a well-defined degradation formula is available, we optimize the assessment values with a customized Q-learning method with model-based acceleration. When the degradation formula is unknown or hard to determine, we develop a Dyna-Q method with maintenance-oriented improvements: an environment model capturing the degradation pattern under different maintenance actions is learned first, and the assessment values are then optimized while accounting for the stochastic behavior of the system degradation. The final maintenance policy is obtained by performing, in each degradation state, the maintenance action with the highest assessment value. Experimental studies are presented to illustrate the applications.
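
The abstract gives no implementation details, but the Dyna-Q idea it builds on can be sketched briefly. The Python sketch below is illustrative only: the number of degradation states, the action set (do nothing / imperfect repair / replace), the cost figures, the transition probability, and all hyperparameters are invented for the example, and the learned environment model is the textbook deterministic one-step Dyna-Q model rather than the paper's maintenance-oriented variant. Here Q tracks expected cost-to-go, so the best action minimizes Q; this corresponds to the abstract's "highest assessment value" with the sign flipped.

```python
import random
from collections import defaultdict

# Hypothetical problem setup (not from the paper): 5 discrete degradation
# states, 3 maintenance actions, and a finite horizon of inspection times.
N_STATES, HORIZON = 5, 12
ACTIONS = [0, 1, 2]            # 0: do nothing, 1: imperfect repair, 2: replace
COST = {0: 0.0, 1: 4.0, 2: 10.0}
FAILURE_COST = 50.0            # penalty when the worst state is reached

def step(s, a):
    """Simulated stochastic degradation dynamics (illustrative only)."""
    if a == 2:                 # replacement restores the system as-good-as-new
        s = 0
    elif a == 1:               # imperfect repair improves the state by one level
        s = max(0, s - 1)
    s2 = min(N_STATES - 1, s + (1 if random.random() < 0.6 else 0))
    cost = COST[a] + (FAILURE_COST if s2 == N_STATES - 1 else 0.0)
    return s2, cost

alpha, eps, n_planning = 0.1, 0.1, 20
Q = defaultdict(float)         # Q[(t, s, a)]: expected cost-to-go (lower is better)
model = {}                     # model[(t, s, a)] -> last observed (s', cost)

for episode in range(2000):
    s = 0
    for t in range(HORIZON):
        # epsilon-greedy over assessment values (minimizing cost-to-go)
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = min(ACTIONS, key=lambda x: Q[(t, s, x)])
        s2, cost = step(s, a)
        target = cost + (0 if t == HORIZON - 1
                         else min(Q[(t + 1, s2, b)] for b in ACTIONS))
        Q[(t, s, a)] += alpha * (target - Q[(t, s, a)])
        model[(t, s, a)] = (s2, cost)   # remember the last observed outcome
        # Dyna-Q planning: replay simulated experience drawn from the model
        for _ in range(n_planning):
            (tp, sp, ap), (sp2, c) = random.choice(list(model.items()))
            tgt = c + (0 if tp == HORIZON - 1
                       else min(Q[(tp + 1, sp2, b)] for b in ACTIONS))
            Q[(tp, sp, ap)] += alpha * (tgt - Q[(tp, sp, ap)])
        s = s2

# Greedy policy: the best-assessed action for every (inspection time, state)
policy = {(t, s): min(ACTIONS, key=lambda a: Q[(t, s, a)])
          for t in range(HORIZON) for s in range(N_STATES)}
```

The planning loop is what distinguishes Dyna-Q from plain Q-learning: each real transition is stored in the model and replayed many times, which is useful when real degradation observations are scarce relative to inspection intervals.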
