Abstract

Optimized scheduling of integrated energy systems is essential for achieving multi-energy complementarity and economical system operation. However, the intermittency of renewable energy sources and the uncertainty of user energy demand cause random fluctuations on both the supply and demand sides, and traditional scheduling methods struggle to adapt accurately to the dynamic changes of the actual operating environment. To address the uncertainty of renewable generation and load in integrated energy systems, an optimal dispatch method based on deep reinforcement learning is proposed. This study first outlines the deep reinforcement learning methodology and then presents an optimal dispatch model built on it, with a carefully designed state space, action space, and reward function. Next, the process of solving the model with the Asynchronous Advantage Actor-Critic (A3C) algorithm is described. Finally, simulation results demonstrate that the proposed method adaptively responds to the uncertainty of energy sources and loads, achieving performance comparable to that of traditional mathematical programming methods.
