Abstract

Traditional centralized control methods struggle to meet the coordination control demands of a multi-microgrid (MMG) because of the conflict between the individual interests of each single microgrid (MG) and the global interests of the MMG. In this study, a distributed coordination control method that integrates a potential game (PG) with reinforcement learning (RL) is proposed to balance the interests within an MMG. The proposed method fully exploits the distributed nature of the PG by treating each MG as an agent, and it establishes a PG-based distributed coordination control structure that maximizes and balances the economy of each single MG and of the overall MMG. It then combines the PG with the RL algorithm through parameter transfer to obtain the optimal Nash equilibrium (NE) solution and to improve the optimization performance of the underlying Q-learning algorithm. Finally, a simulation model built in MATLAB demonstrates the effectiveness and superiority of the proposed control method.
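To illustrate the general idea described above, the following is a minimal sketch (not the paper's implementation) of independent Q-learning agents, one per MG, searching for an approximate NE of a potential game. All names, cost coefficients, and parameters are hypothetical placeholders chosen only to make the example self-contained and runnable.

```python
# Minimal sketch: one Q-learning agent per microgrid (MG) choosing a discrete
# power output in a repeated (stateless) potential game. Each MG's payoff
# change mirrors the change of a common potential, which is the property the
# PG formulation relies on. All quantities below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_MG, N_ACTIONS, EPISODES = 3, 5, 2000
ALPHA, EPS = 0.1, 0.2                       # learning rate, exploration rate

cost_coeff = np.array([0.8, 1.0, 1.2])      # hypothetical quadratic generation costs
actions = np.linspace(0.0, 4.0, N_ACTIONS)  # candidate power outputs (p.u.)
demand = 6.0                                # shared demand target

def potential(choice):
    """Global potential: total generation cost plus a supply-demand imbalance penalty."""
    power = actions[choice]
    return cost_coeff @ (power ** 2) + 5.0 * (power.sum() - demand) ** 2

Q = np.zeros((N_MG, N_ACTIONS))             # one Q-table per MG (single state)
choice = rng.integers(N_ACTIONS, size=N_MG) # each MG's current action index

for _ in range(EPISODES):
    for i in range(N_MG):
        # epsilon-greedy exploration over this MG's candidate outputs
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[i].argmax())
        trial = choice.copy()
        trial[i] = a
        # In a potential game, an agent's unilateral payoff change equals the
        # potential change, so -potential serves as the reward signal here.
        reward = -potential(trial)
        Q[i, a] += ALPHA * (reward - Q[i, a])
        choice[i] = int(Q[i].argmax())      # play the current greedy action

print("Approximate NE schedule:", actions[choice], "total:", actions[choice].sum())
```

In this toy setting the agents settle on a schedule that trades off their individual generation costs against the shared imbalance penalty; the paper's method additionally uses parameter transfer between the PG formulation and the RL stage, which this sketch does not attempt to reproduce.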
