Abstract

Due to uncertainties in renewable energy generation and load demand, traditional energy dispatch schemes for an integrated electricity–gas system (IEGS) depend heavily on explicit mathematical forecast models. In this study, a novel data-driven deep reinforcement learning method is applied to solve the IEGS dynamic dispatch problem with the goals of minimizing carbon emissions and operating cost. Moreover, flexible operation of the carbon capture system and power-to-gas facility is proposed to reduce operating costs. The IEGS dynamic dispatch problem is formulated as a Markov game, and a soft actor–critic (SAC) algorithm is applied to learn the optimal dispatch solution. To improve training efficiency and convergence, prioritized experience replay (PER) is employed. In the simulations, the proposed PER–SAC algorithm achieves faster and more stable learning than the deep Q-network and plain SAC. Compared with a modified sequential quadratic programming method based on uncertainty prediction, the proposed method reduces the target cost by 11.62% when the prediction error exceeds 10%. On the same hardware platform, the computational time of the scenario-analysis solution is 4.58 times that of training the PER–SAC method. Finally, the simulation results under different scenarios demonstrate that the PER–SAC-based dispatch strategy has satisfactory generalization and adaptability.
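
For readers unfamiliar with prioritized experience replay, the sketch below shows a minimal proportional PER buffer of the kind that could back a SAC agent. It is an illustrative sketch only: the class name, capacity, and the alpha/beta hyperparameters are assumptions for demonstration and are not taken from the paper's implementation.

import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity=100_000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha                      # how strongly TD error shapes sampling
        self.buffer = []                        # stored transitions
        self.priorities = np.zeros(capacity)    # one priority per slot
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are sampled at least once.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority is the TD-error magnitude plus a small constant to keep it nonzero.
        self.priorities[idx] = np.abs(td_errors) + eps

In a PER–SAC training loop, the agent would call sample() to draw a batch, weight each critic loss term by the returned importance-sampling weights, and then pass the new TD errors back through update_priorities().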
