Abstract

Multi-microgrid (MMG) systems have attracted increasing attention for their low carbon emissions and operational flexibility. This paper proposes a multi-agent reinforcement learning algorithm for real-time energy management of an MMG connected to a distribution network (DN). The distribution system operator (DSO) and each microgrid (MG) are modeled as autonomous agents, each of which makes decisions in its own interest based on local information. The joint decision-making problem is modeled as a Markov game and solved by the prioritized multi-agent deep deterministic policy gradient (PMADDPG) algorithm: each agent requires only its local observation to make decisions, a centralized training mechanism is used to learn a coordination strategy, and a prioritized experience replay (PER) strategy is adopted to improve learning efficiency. The proposed method handles the non-stationarity that arises in a multi-agent game with partially observable information. In the execution stage, all trained agents are deployed in a distributed manner and make decisions in real time. Simulation results show that the proposed method accelerates the training of the multi-agent game and enables each agent to make optimal decisions using only local information.
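The PER strategy referenced in the abstract can be illustrated with a minimal sketch of a proportional prioritized replay buffer. This is a generic implementation of the standard PER technique, not the paper's exact code; the class and method names are illustrative, and the hyperparameter defaults (`alpha`, `beta`, `eps`) are common values from the PER literature rather than values reported in this paper.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) buffer sketch."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities bias sampling (0 = uniform)
        self.buffer = []        # stored transitions (e.g. (obs, action, reward, next_obs))
        self.priorities = []    # one priority per stored transition
        self.pos = 0            # next write index (ring buffer)

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # Sampling probability is proportional to priority**alpha.
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        # Importance-sampling weights correct the bias that non-uniform
        # sampling introduces into the gradient estimate.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]  # normalize to (0, 1] for stability
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority = |TD error| + eps, so every transition keeps a
        # nonzero chance of being resampled.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a MADDPG-style training loop, each minibatch would be drawn with `sample()`, the critic's TD errors fed back through `update_priorities()`, and the returned importance weights applied to the per-sample loss.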
