Abstract

Coordinating the charging of a large population of electric vehicles (EVs) is a promising way to increase power grid flexibility on the demand side, but it requires highly scalable control protocols. In contrast to classical decentralized-optimization methods, which rely on approximate distribution network models, this paper casts the EV charging control problem as a multi-agent reinforcement learning (MARL) problem. The MARL framework is trained with an actor-critic network and adopts a centralized-training, decentralized-execution structure with partial observations. Compared with model-based approaches, the developed MARL approach better captures the characteristics of the distribution network, improves grid-level service performance, enforces network constraints more effectively, reduces the communication load, and responds faster. The efficacy and efficiency of the developed method are verified by simulations on the IEEE 13-bus test feeder.
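
To make the centralized-training, decentralized-execution (CTDE) structure concrete, the sketch below shows one common way such an actor-critic setup is organized: each EV agent has a small actor that acts on its own partial observation, while a centralized critic sees the joint observations and actions during training. All network sizes, observation contents, and the MADDPG-style joint critic input are illustrative assumptions, not the paper's exact design.

```python
# A minimal CTDE actor-critic sketch in PyTorch. Architecture details
# (layer sizes, observation/action dimensions, critic input) are assumed
# for illustration and do not reproduce the paper's implementation.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Decentralized actor: maps one EV agent's partial observation
    (e.g., local voltage, state of charge) to a charging action."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # charging rate in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class CentralCritic(nn.Module):
    """Centralized critic: during training it conditions on the joint
    observations and actions of all agents, capturing network-wide
    coupling (e.g., shared feeder limits) that no single agent observes."""

    def __init__(self, n_agents: int, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q-value for the joint state-action
        )

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))


# Execution is decentralized: each EV agent acts on its own observation only.
n_agents, obs_dim, act_dim = 3, 8, 1
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, act_dim)

obs = torch.randn(n_agents, obs_dim)                 # one partial observation per EV
acts = torch.stack([a(o) for a, o in zip(actors, obs)])
q = critic(obs.reshape(1, -1), acts.reshape(1, -1))  # centralized value estimate
```

Because the critic is used only during training, deployment requires just the lightweight per-agent actors, which is what keeps communication load and response time low at execution time.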
