Abstract

This paper presents a Deep Reinforcement Learning (DRL)-based optimization approach for determining optimal inspection and maintenance planning for a scrap-based steel production line. The DRL-based maintenance optimization recommends suitable times for inspection and maintenance activities based on monitored conditions of the production line, such as machine productivity, buffer level, and production demand. Practical aspects of the system, such as uncertainty in the maintenance duration and the variable production rate of the machines, were considered. The scrap-based steel production line was modeled as a multi-component system that accounts for component dependencies. A simulation model was developed to reproduce the dynamics of the system and support the development of the DRL maintenance approach. The proposed DRL-based maintenance is compared with traditional maintenance policies, such as reactive maintenance, time-based maintenance, and condition-based maintenance. In addition, different DRL algorithms, namely PPO (Proximal Policy Optimization), TRPO (Trust Region Policy Optimization), and DQN (Deep Q-Network), are investigated in the case study scenario. The findings indicate the potential for significant financial savings. The proposed maintenance approach therefore demonstrates system adaptability and has the potential to be a powerful tool for industrial competitiveness.
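To illustrate the kind of setup the abstract describes, the sketch below shows a minimal gym-style environment whose state mirrors the monitored conditions named above (machine productivity, buffer level, production demand) and whose actions are "continue", "inspect", or "maintain". The dynamics, cost values, class name `MaintenanceEnv`, and the use of stable-baselines3 are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a toy maintenance
# environment with stochastic degradation and uncertain maintenance duration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class MaintenanceEnv(gym.Env):
    """Single-machine stand-in for the production-line maintenance problem."""

    def __init__(self):
        super().__init__()
        # State: [productivity, buffer level, production demand], each in [0, 1]
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(3,), dtype=np.float32)
        # Actions: 0 = continue production, 1 = inspect, 2 = maintain
        self.action_space = spaces.Discrete(3)
        self.horizon = 200

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.state = np.array([1.0, 0.5, 0.7], dtype=np.float32)
        return self.state, {}

    def step(self, action):
        productivity, buffer, demand = self.state
        reward = 0.0
        if action == 1:            # inspection: small fixed cost (assumed)
            reward -= 1.0
        elif action == 2:          # maintenance: larger cost with uncertain duration (assumed)
            reward -= 5.0 + self.np_random.exponential(2.0)
            productivity = 1.0     # machine restored
        # Stochastic degradation of the machine
        productivity = max(0.0, productivity - self.np_random.uniform(0.0, 0.05))
        # Buffer fills with production and drains with demand
        buffer = float(np.clip(buffer + 0.1 * productivity - 0.1 * demand, 0.0, 1.0))
        # Revenue for satisfied demand, penalty for starving the line
        reward += 2.0 * min(productivity, demand) - (3.0 if buffer == 0.0 else 0.0)
        self.t += 1
        self.state = np.array([productivity, buffer, demand], dtype=np.float32)
        return self.state, reward, self.t >= self.horizon, False, {}


if __name__ == "__main__":
    # Any of the compared algorithms can be trained on the same environment;
    # PPO and DQN ship with stable-baselines3, TRPO with sb3-contrib.
    from stable_baselines3 import PPO
    model = PPO("MlpPolicy", MaintenanceEnv(), verbose=0)
    model.learn(total_timesteps=50_000)
```

Under this kind of setup, the traditional baselines (reactive, time-based, and condition-based maintenance) can be expressed as fixed rules over the same state vector, which makes the cost comparison with the learned DRL policy straightforward.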
