Abstract


 
 
 Several machine learning and deep learning frameworks have been proposed in recent years to solve remaining useful life estimation and failure prediction problems. Access to a remaining useful life estimate, or to the likelihood of failure in the near future, helps operators assess operating conditions and therefore make better repair and maintenance decisions. However, many operators regard remaining useful life estimation and failure prediction as incomplete answers to the maintenance challenge: knowing the likelihood of failure in a given time interval, or having an estimate of the remaining useful life, is not enough to make maintenance decisions that minimize cost while keeping operators safe. In this paper, we present a maintenance framework based on offline deep reinforcement learning which, instead of providing information such as the likelihood of failure, suggests actions such as “continue the operation” or “visit a repair shop” to the operator in order to maximize overall profit. Using offline reinforcement learning makes it possible to learn the optimal maintenance policy from historical data without relying on expensive simulators. We demonstrate our solution in a case study using the NASA C-MAPSS dataset.
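 To make the framing concrete, the abstract's decision problem can be sketched as an offline RL task: a logged dataset of (state, action, reward, next state) transitions, two actions (“continue the operation” and “visit a repair shop”), and a policy learned purely from that fixed log. The sketch below is illustrative only; the discretized health states, reward values, and the tabular fitted Q-iteration learner are assumptions for demonstration and are not the paper's actual method, which uses deep reinforcement learning on the C-MAPSS data.

```python
# Hypothetical sketch: maintenance as offline RL on a logged dataset.
# States are coarse health levels (0 = healthy .. 4 = near failure); actions
# are 0 = "continue the operation", 1 = "visit a repair shop". All rewards,
# costs, and transition probabilities are illustrative assumptions.
import random

ACTIONS = (0, 1)   # 0: continue, 1: repair
N_STATES = 5       # discretized health levels; N_STATES - 1 = near failure

def simulate_log(n_episodes=500, horizon=30, seed=0):
    """Generate a fixed log of (s, a, r, s2) from a random behavior policy."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_episodes):
        s = 0
        for _ in range(horizon):
            a = rng.choice(ACTIONS)
            if a == 1:                    # repair: pay a cost, reset health
                r, s2 = -5.0, 0
            elif s == N_STATES - 1:       # run to failure: large penalty
                r, s2 = -50.0, 0
            else:                         # keep operating: earn revenue, may degrade
                r = 1.0
                s2 = min(s + (1 if rng.random() < 0.4 else 0), N_STATES - 1)
            data.append((s, a, r, s2))
            s = s2
    return data

def fitted_q_iteration(data, gamma=0.95, sweeps=100):
    """Tabular fitted Q-iteration over the fixed log (no environment access)."""
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(sweeps):
        new_q = [[0.0, 0.0] for _ in range(N_STATES)]
        counts = [[0, 0] for _ in range(N_STATES)]
        for s, a, r, s2 in data:
            target = r + gamma * max(q[s2])        # empirical Bellman backup
            counts[s][a] += 1
            new_q[s][a] += (target - new_q[s][a]) / counts[s][a]  # running mean
        q = new_q
    return q

q = fitted_q_iteration(simulate_log())
policy = ["repair" if q[s][1] > q[s][0] else "continue" for s in range(N_STATES)]
```

Under these assumed costs, the learned policy keeps operating while the asset is healthy and recommends a repair-shop visit near failure, mirroring the action recommendations the abstract describes.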
 
 
