Abstract

In this article, an adaptive active power rolling dispatch strategy based on distributed deep reinforcement learning is proposed to handle the uncertainty introduced by a high proportion of renewable energy. Each agent's network combines recurrent neural network layers and graph attention layers, which improves the agents' generalization ability in active power flow control. Furthermore, a regional graph attention network algorithm is proposed to strengthen the agents' ability to capture information by aggregating regional information from their neighborhoods. We adopt a 'centralized training, distributed execution' structure so that the proposed methods remain effective in dynamic environments. Case studies demonstrate that the proposed algorithm enables multiple agents to learn effective active power control strategies, and that each agent generalizes well across time granularities and network topologies. We expect this approach to improve the practicability and adaptability of distributed AI methods for power system control problems.
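To make the neighborhood-aggregation idea concrete, the following is a minimal sketch of a single-head graph attention aggregation layer in PyTorch. It is an illustrative assumption, not the paper's implementation: the class name, layer sizes, and the toy bus topology below are invented for the example, and the paper's regional variant and recurrent layers are omitted.

```python
# Minimal sketch of a graph-attention aggregation over a node's neighborhood,
# assuming a PyTorch setting. All names and sizes here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionalGraphAttention(nn.Module):
    """Aggregates features of a node's regional neighborhood with attention weights."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)   # shared linear projection W
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer a(.)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        h = self.proj(x)                                      # (N, out_dim)
        N = h.size(0)
        # Pairwise concatenation [h_i || h_j] for every candidate edge
        pairs = torch.cat(
            [h.unsqueeze(1).expand(N, N, -1), h.unsqueeze(0).expand(N, N, -1)], dim=-1
        )
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))   # (N, N) raw attention scores
        scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only real neighbors
        alpha = torch.softmax(scores, dim=-1)                 # normalized attention weights
        return F.elu(alpha @ h)                               # weighted neighborhood sum


if __name__ == "__main__":
    # Toy example: 4 buses with 8-dimensional observations on a small ring topology.
    x = torch.randn(4, 8)
    adj = torch.tensor(
        [[1, 1, 0, 1],
         [1, 1, 1, 0],
         [0, 1, 1, 1],
         [1, 0, 1, 1]], dtype=torch.float32
    )
    layer = RegionalGraphAttention(in_dim=8, out_dim=16)
    print(layer(x, adj).shape)  # torch.Size([4, 16])
```

Under a 'centralized training, distributed execution' scheme, a layer like this would run locally in each agent at execution time, since it only needs observations from the agent's own neighborhood.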
