Abstract

Dynamic energy dispatch is an integral part of the operation optimization of integrated energy systems (IESs). Most existing dynamic dispatch schemes depend heavily on explicit forecasts or mathematical models of future uncertainties. Because renewable energy generation and energy demands are random, these approaches are limited by the accuracy of the forecasts or models. To address this problem, a novel model-free dynamic dispatch strategy for IESs based on improved deep reinforcement learning (DRL) is proposed. The IES dynamic dispatch problem is formulated as a Markov decision process (MDP) in which the uncertainties of renewable generation, electric load, and heat load are considered. To solve the MDP, an improved deep deterministic policy gradient (DDPG) algorithm using a prioritized experience replay mechanism and L2 regularization is developed, improving the policy quality and learning efficiency of the dispatch strategy. The proposed approach requires no forecast information or distribution knowledge and can adaptively respond to stochastic fluctuations in supply and demand. Simulation results show that the proposed dispatch strategy converges faster and achieves lower operating costs than the original DDPG-based strategy. In addition, the advantages of the proposed approach in terms of cost-effectiveness and adaptation to a stochastic environment are validated.
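The improved DDPG update described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a proportional prioritized replay buffer supplies importance-weighted samples to the critic update, and L2 regularization is applied to the critic weights through the optimizer's weight decay. Only the critic update is shown; the network sizes, hyperparameters, and IES state/action dimensions below are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    class PrioritizedReplay:
        # Proportional prioritized experience replay (simplified list-based
        # version, no sum-tree), standing in for the paper's replay mechanism.
        def __init__(self, capacity, alpha=0.6):
            self.capacity, self.alpha = capacity, alpha
            self.data, self.priorities = [], []

        def add(self, transition):
            # New transitions receive the current maximum priority so they are replayed soon.
            p = max(self.priorities, default=1.0)
            if len(self.data) >= self.capacity:
                self.data.pop(0)
                self.priorities.pop(0)
            self.data.append(transition)
            self.priorities.append(p)

        def sample(self, batch_size, beta=0.4):
            probs = np.array(self.priorities) ** self.alpha
            probs /= probs.sum()
            idx = np.random.choice(len(self.data), batch_size, p=probs)
            # Importance-sampling weights correct the bias from non-uniform sampling.
            weights = (len(self.data) * probs[idx]) ** (-beta)
            weights = torch.as_tensor(weights / weights.max(), dtype=torch.float32)
            return idx, [self.data[i] for i in idx], weights

        def update_priorities(self, idx, td_errors):
            for i, e in zip(idx, td_errors):
                self.priorities[i] = abs(float(e)) + 1e-6

    # Illustrative IES dimensions: an 8-dimensional state (renewable output,
    # electric/heat loads, storage levels, time) and a 3-dimensional dispatch action.
    S_DIM, A_DIM = 8, 3
    critic = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    critic_t = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    actor_t = nn.Sequential(nn.Linear(S_DIM, 64), nn.ReLU(), nn.Linear(64, A_DIM), nn.Tanh())
    critic_t.load_state_dict(critic.state_dict())
    # L2 regularization of the critic weights via the optimizer's weight decay.
    optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3, weight_decay=1e-4)

    def critic_update(buffer, gamma=0.99, batch_size=32):
        idx, batch, w = buffer.sample(batch_size)
        s, a, r, s2 = (torch.stack(x) for x in zip(*batch))  # transitions are (s, a, r, s')
        with torch.no_grad():
            # Target value uses the target actor and target critic, as in standard DDPG.
            y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=-1)).squeeze(-1)
        q = critic(torch.cat([s, a], dim=-1)).squeeze(-1)
        td_error = y - q
        loss = (w * td_error.pow(2)).mean()  # importance-weighted TD loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        buffer.update_priorities(idx, td_error.detach())

In a full training loop, the actor would be updated with the deterministic policy gradient through the critic, and the target networks would be softly updated each step; those parts are omitted here for brevity.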
