Abstract

The large-scale integration of renewable energy has brought new challenges to energy management in modern power systems. Due to the strong randomness and volatility of renewable energy, traditional model-based methods may become insufficient for optimal active power dispatch. To tackle this challenge, this paper proposes an autonomous control method based on soft actor–critic (SAC), a recently developed deep reinforcement learning (DRL) strategy, which provides an optimal solution for active power dispatch without requiring an explicit mathematical model while improving the renewable energy consumption rate under stable operation. A Lagrange multiplier is introduced into SAC (LM-SAC) to improve its performance in optimal active power dispatch. A pre-training scheme based on imitation learning (IL-SAC) is also designed to further improve the training efficiency and robustness of the DRL agent. Simulations on the IEEE 118-bus system using the open-source platform Grid2Op verify that the proposed algorithm achieves a higher renewable energy consumption rate and better robustness than existing DRL algorithms.
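
As a rough illustration of the setup the abstract describes, the sketch below trains a plain off-the-shelf SAC agent (Stable-Baselines3) on a Grid2Op environment converted to continuous redispatch actions. It is not the paper's LM-SAC or IL-SAC implementation; the environment name, the kept observation/action attributes, and the training budget are placeholder assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): vanilla SAC on a
# Grid2Op environment restricted to continuous generator redispatch.
import grid2op
from grid2op.gym_compat import GymEnv, BoxGymObsSpace, BoxGymActSpace
from stable_baselines3 import SAC

# Create a Grid2Op environment. The paper uses an IEEE 118-bus scenario;
# the small sandbox case is used here purely as an illustrative placeholder.
g2op_env = grid2op.make("l2rpn_case14_sandbox")

# Wrap it as a Gym-style environment with continuous (Box) spaces,
# keeping only attributes relevant to active power dispatch.
gym_env = GymEnv(g2op_env)
gym_env.observation_space = BoxGymObsSpace(
    g2op_env.observation_space,
    attr_to_keep=["gen_p", "load_p", "rho"],  # generation, load, line loading
)
gym_env.action_space = BoxGymActSpace(
    g2op_env.action_space,
    attr_to_keep=["redispatch"],  # continuous redispatch of dispatchable units
)

# Train a standard SAC agent. The paper's contributions (the Lagrange
# multiplier and the imitation-learning pre-training) would modify the
# learner's objective and initialization, not this environment setup.
model = SAC("MlpPolicy", gym_env, verbose=1)
model.learn(total_timesteps=10_000)
```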
