Abstract

A condition-based maintenance (CBM) policy can avoid premature or delayed maintenance and reduce system failures and maintenance costs. Most existing CBM studies cannot overcome the curse of dimensionality in multi-component complex systems, and only a few consider maintenance resource constraints when searching for the optimal maintenance policy, which makes them difficult to apply in practice. This paper studies the joint optimization of the CBM policy and spare-component inventory for a multi-component system with large state and action spaces. We model the problem as a Markov decision process and propose an improved deep reinforcement learning algorithm based on a stochastic policy and the actor-critic framework. In this algorithm, factorization decomposes the system action into a linear combination of each component's actions. The experimental results show that the proposed algorithm achieves better time performance and lower system cost than other benchmark algorithms: its training time is only 28.5% and 9.12% of that of the PPO and DQN algorithms, and the corresponding system cost is reduced by 17.39% and 27.95%, respectively. Moreover, the algorithm scales well and is suitable for solving Markov decision problems with large state and action spaces.
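The sketch below is not the authors' implementation; it only illustrates the factorization idea described in the abstract, assuming a PyTorch actor-critic with a shared state encoder and one categorical policy head per component. The component count, state dimension, and per-component action set are placeholder assumptions.

```python
# Hypothetical sketch of a factorized stochastic policy for a multi-component
# system: the joint maintenance action is decomposed into per-component action
# heads that share one state encoder, so the action space grows linearly in the
# number of components instead of exponentially.
import torch
import torch.nn as nn

N_COMPONENTS = 5        # assumed number of components
STATE_DIM = 20          # assumed dimension of the joint condition state
ACTIONS_PER_COMP = 3    # assumed per-component actions, e.g. {keep, repair, replace}

class FactorizedActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder of the system condition state
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU())
        # One small policy head per component: N_COMPONENTS heads of size
        # ACTIONS_PER_COMP replace a joint head of size ACTIONS_PER_COMP ** N_COMPONENTS
        self.actor_heads = nn.ModuleList(
            nn.Linear(128, ACTIONS_PER_COMP) for _ in range(N_COMPONENTS)
        )
        # Single critic estimating the state value of the whole system
        self.critic = nn.Linear(128, 1)

    def forward(self, state):
        h = self.encoder(state)
        # Per-component action distributions; the joint policy is their product
        dists = [torch.distributions.Categorical(logits=head(h))
                 for head in self.actor_heads]
        return dists, self.critic(h)

policy = FactorizedActorCritic()
state = torch.randn(1, STATE_DIM)             # dummy observed condition state
dists, value = policy(state)
actions = [d.sample() for d in dists]         # one maintenance action per component
log_prob = sum(d.log_prob(a) for d, a in zip(dists, actions))  # joint log-probability
```

Under this decomposition, the joint log-probability is the sum of per-component log-probabilities, which is what keeps policy-gradient updates tractable as the number of components grows.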
