Abstract

In microgrids, policy decisions about electricity trading and scheduling are of paramount importance. These decisions are influenced by various factors, including the microgrid's infrastructure, energy production and demand, and the competitiveness of the energy market. However, due to the intricate nature of these elements, determining the optimal strategy for a microgrid can be challenging. To address this challenge without complex system modeling, we introduce a novel approach using Multi-Agent Reinforcement Learning (MARL) augmented with an attention mechanism. In this approach, each microgrid is treated as a distinct agent, and through interaction with one another, the agents learn how to coordinate the utilization of energy resources and engage in energy trading. To facilitate effective training, we employ an attention mechanism that enables each agent to selectively focus on pertinent background information. Once trained, each agent can make control decisions based solely on its local knowledge. This not only safeguards the privacy of individual microgrids but also reduces communication overhead, making decentralized control feasible. We implement this approach in MATLAB R2020a to create a simulation environment and assess its performance. Our experimental results indicate that the proposed strategy significantly reduces the operational costs of microgrids compared to conventional methods.
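The abstract does not specify the form of the attention mechanism, but a common choice in attention-augmented MARL is scaled dot-product attention, where each agent weights the encoded observations of the other agents by their relevance to its own state. The following sketch illustrates that idea only; all names, dimensions, and the use of NumPy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over attention scores
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(query, keys, values):
    """Scaled dot-product attention: weight each other agent's value
    vector by its relevance to the querying agent's own encoded state."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # one relevance score per agent
    weights = softmax(scores)            # weights sum to 1
    context = weights @ values           # attention-weighted summary
    return context, weights

# Hypothetical example: one focal microgrid attending to 3 others
rng = np.random.default_rng(0)
n_others, d = 3, 8
query = rng.normal(size=d)               # focal agent's encoded state
keys = rng.normal(size=(n_others, d))    # other agents' key vectors
values = rng.normal(size=(n_others, d))  # other agents' value vectors

context, weights = attention_pool(query, keys, values)
```

During training, a centralized critic could consume `context` alongside the agent's local observation; at execution time, each agent acts on its local knowledge alone, consistent with the decentralized control described above.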
