Abstract

The microgrid is a solution for integrating renewable energy resources into the power system. However, overcoming the randomness of these nature-based resources requires a robust control system. Electricity market participation and ancillary service provision for the utility grid are further aspects, although intensified microgrid penetration makes the microgrid's interactions with its environment more complex. Reinforcement learning is a technique widely applied to such intricate environments. Hence, in this paper, we deployed the deep deterministic policy gradient and soft actor-critic methods to solve the high-dimensional, continuous, and stochastic problem of the microgrid's energy management system and compared the performance of the two methods. Additionally, we modeled the microgrid's interactions with the utility grid as a participant in a system integrity protection scheme, responding promptly to the utility grid's protection requirements based on its reliably available resources. Finally, we applied actual data from the Gasa Island microgrid in Korea to demonstrate the efficiency of the proposed method.
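
To illustrate the kind of formulation the abstract describes, the sketch below casts a microgrid energy management task as a continuous-action control problem and trains it with off-the-shelf DDPG and SAC agents. This is a minimal, hypothetical example, not the authors' implementation: the toy environment (ToyMicrogridEnv), its state variables, reward, and all constants are illustrative assumptions, and the agents come from the Stable-Baselines3 library rather than the paper.

```python
# Hypothetical sketch: a toy battery-dispatch microgrid environment with stochastic
# renewable output and load, trained with DDPG and SAC from Stable-Baselines3.
# All names, reward terms, and constants are illustrative assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DDPG, SAC


class ToyMicrogridEnv(gym.Env):
    """State: [state of charge, renewable output, load, hour].
    Action: battery power in [-1, 1] (fraction of rated power, + = discharge).
    Reward penalizes energy imported from the utility grid."""

    def __init__(self, capacity_kwh=500.0, rated_kw=100.0, price=0.1):
        super().__init__()
        self.capacity, self.rated, self.price = capacity_kwh, rated_kw, price
        self.observation_space = spaces.Box(low=0.0, high=1000.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def _obs(self):
        return np.array([self.soc, self.pv, self.load, self.hour], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.soc, self.hour = 0.5, 0
        self.pv = self.np_random.uniform(0, 80)      # stochastic renewable output (kW)
        self.load = self.np_random.uniform(20, 120)  # stochastic demand (kW)
        return self._obs(), {}

    def step(self, action):
        battery_kw = float(action[0]) * self.rated   # + discharge, - charge
        # Clip to what the state of charge physically allows over a one-hour step.
        battery_kw = float(np.clip(battery_kw,
                                   -(1.0 - self.soc) * self.capacity,
                                   self.soc * self.capacity))
        self.soc -= battery_kw / self.capacity
        grid_import = max(self.load - self.pv - battery_kw, 0.0)
        reward = -self.price * grid_import           # minimize purchased energy cost
        self.hour += 1
        self.pv = self.np_random.uniform(0, 80)
        self.load = self.np_random.uniform(20, 120)
        return self._obs(), reward, self.hour >= 24, False, {}


env = ToyMicrogridEnv()
for algo in (DDPG, SAC):                             # compare both agents on the same toy task
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=5_000)
```

In this toy setup, both algorithms learn a continuous battery-dispatch policy under stochastic generation and demand; in practice the state would also include forecasts, market signals, and the protection-scheme constraints discussed in the paper.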
