Abstract

The Operation & Maintenance (O&M) of Cyber-Physical Energy Systems (CPESs) is driven by the need for reliable and safe production and supply, which must be flexible enough to respond to uncertainty in energy demand and in supply, the latter due to the stochasticity of Renewable Energy Sources (RESs); at the same time, accidents with severe consequences must be avoided for safety reasons. In this paper, we consider O&M strategies for reliable and safe CPES production and supply, and develop a Deep Reinforcement Learning (DRL) approach to search for the best strategy, taking into account the health conditions of the system components, their Remaining Useful Life (RUL), and possible accident scenarios. The approach integrates Proximal Policy Optimization (PPO) and Imitation Learning (IL) for training the RL agent, with a CPES model that embeds the components' RUL estimators and their failure process models. The novelty of the work lies in i) incorporating the production plan into O&M decisions, so that maintenance is implemented and the system is operated flexibly; ii) embedding the reliability model into the CPES model, so as to identify safety-related components and set proper maintenance RUL thresholds. An application to the Advanced Lead-cooled Fast Reactor European Demonstrator (ALFRED) is provided. The optimal solution found by DRL is shown to outperform those provided by state-of-the-art O&M policies.
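To make the described setup concrete, the sketch below shows one way such a problem could be framed for PPO training: a toy environment whose observation includes estimated component RULs, whose actions mix operation and maintenance decisions, and whose reward penalizes safety-related failures. Everything here (the environment name, dynamics, thresholds, costs, and the use of Stable-Baselines3) is an illustrative assumption, not the paper's actual CPES model or reward design; the IL warm-start is omitted.

```python
# Hypothetical minimal sketch of an O&M environment with RUL-based state,
# trained with PPO via Stable-Baselines3. All dynamics and parameters are
# assumptions for illustration; they do not come from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyCPESEnv(gym.Env):
    """Toy O&M environment: components degrade stochastically with load;
    the agent chooses to run at full power, derate, or maintain the most
    degraded component. A component failure ends the episode with a large
    penalty, standing in for a safety-related accident scenario."""

    def __init__(self, n_components=3, horizon=200):
        super().__init__()
        self.n = n_components
        self.horizon = horizon
        # Observation: estimated RUL of each component plus current demand.
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(self.n + 1,), dtype=np.float32)
        # Action: 0 = full power, 1 = reduced power, 2 = maintain worst component.
        self.action_space = spaces.Discrete(3)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.rul = self.np_random.uniform(20.0, 60.0, size=self.n)
        self.t = 0
        return self._obs(), {}

    def _obs(self):
        # Toy cyclic demand standing in for the production plan.
        demand = 0.8 + 0.2 * np.sin(2 * np.pi * self.t / 24.0)
        return np.append(self.rul, demand).astype(np.float32)

    def step(self, action):
        self.t += 1
        load = {0: 1.0, 1: 0.6, 2: 0.0}[int(action)]
        # Stochastic degradation: higher load wears components faster.
        wear = self.np_random.gamma(shape=2.0, scale=0.3 * load + 0.05, size=self.n)
        self.rul = np.maximum(self.rul - wear, 0.0)
        if action == 2:  # maintenance restores the most degraded component
            self.rul[np.argmin(self.rul)] = 50.0
        reward = load                               # revenue from production
        reward -= 0.5 if action == 2 else 0.0       # maintenance cost
        terminated = bool(np.any(self.rul <= 0.0))  # safety-related failure
        if terminated:
            reward -= 50.0                          # accident penalty
        truncated = self.t >= self.horizon
        return self._obs(), float(reward), terminated, truncated, {}


if __name__ == "__main__":
    env = ToyCPESEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)
```

In a setup like the one the abstract describes, the learned policy would implicitly trade production revenue against maintenance cost and accident risk, which is where the RUL thresholds for safety-related components enter the problem.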
