Abstract

Inspection and maintenance activities are effective ways to reveal and restore, respectively, the health conditions of many industrial systems. Most extant works on inspection and maintenance optimization have assumed that systems operate under a time-invariant demand. This simplifying assumption is often violated by changeable market environments, seasonal factors, and even unexpected emergencies. In this article, with the aim of minimizing the expected total cost associated with inspections, maintenance, and unsupplied demand, a dynamic inspection and maintenance scheduling model is proposed for Multi-State Systems (MSSs) under a time-varying demand. Non-periodic inspections are performed on the components of an MSS, and imperfect maintenance actions are dynamically scheduled based on the inspection results. By introducing the concept of decision epochs, the resulting inspection and maintenance scheduling problem is formulated as a Markov Decision Process (MDP). A Deep Reinforcement Learning (DRL) method with a Proximal Policy Optimization (PPO) algorithm is customized to cope with the "curse of dimensionality" of the resulting sequential decision problem. As an extra input feature for the agent, the category of decision epochs is formulated to improve the effectiveness of the customized DRL method. A six-component MSS, along with a multi-state coal transportation system, is used to demonstrate the effectiveness of the proposed method.
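The abstract's core ideas (a multi-state system degrading under a time-varying demand, inspection and imperfect-maintenance actions chosen at discrete decision epochs, and the epoch category appended as an extra agent input) can be illustrated with a minimal environment sketch. All concrete numbers below (component count, state levels, costs, demand profile, degradation probability) are illustrative assumptions, not values from the paper, and `MSSMaintenanceEnv` is a hypothetical name.

```python
import random


class MSSMaintenanceEnv:
    """Hypothetical sketch of the MDP described in the abstract: a
    Multi-State System whose components degrade stochastically, facing a
    time-varying demand, with inspection / imperfect-maintenance actions
    chosen at discrete decision epochs. All parameters are assumptions."""

    N_COMPONENTS = 3
    MAX_STATE = 3          # component states: 0 (failed) .. 3 (as-good-as-new)
    DEMAND = [2, 3, 1, 3]  # assumed periodic time-varying demand profile

    C_INSPECT, C_MAINT, C_SHORTAGE = 1.0, 5.0, 20.0  # assumed unit costs

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.states = [self.MAX_STATE] * self.N_COMPONENTS
        self.epoch = 0
        return self._obs()

    def _obs(self):
        # The epoch category (position in the demand cycle) is appended
        # as an extra input feature, mirroring the abstract's idea of
        # feeding the decision-epoch category to the DRL agent.
        category = self.epoch % len(self.DEMAND)
        return tuple(self.states) + (category,)

    def step(self, action):
        """action[i] in {0: do nothing, 1: inspect, 2: inspect + maintain}."""
        cost = 0.0
        for i, a in enumerate(action):
            if a >= 1:
                cost += self.C_INSPECT
            if a == 2 and self.states[i] < self.MAX_STATE:
                cost += self.C_MAINT
                # imperfect maintenance: restores only one state level,
                # not as-good-as-new
                self.states[i] += 1
        # stochastic degradation of each component
        for i in range(self.N_COMPONENTS):
            if self.states[i] > 0 and self.rng.random() < 0.3:
                self.states[i] -= 1
        # system capacity = sum of component states; unsupplied demand
        # is penalized, capturing the time-varying-demand cost term
        demand = self.DEMAND[self.epoch % len(self.DEMAND)]
        shortage = max(0, demand - sum(self.states))
        cost += self.C_SHORTAGE * shortage
        self.epoch += 1
        return self._obs(), -cost  # reward = negative cost for an RL agent
```

A PPO agent (e.g., from a standard DRL library) would then be trained on this environment's observation, which concatenates the component states with the epoch category, as the abstract suggests.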
