Abstract

This article proposes an adaptive control algorithm for plug-in electric vehicle charging that avoids straining the power system. The control algorithm is decentralized and relies only on congestion signals generated by sensors deployed across the network, e.g., distribution-level phasor measurement units. To dynamically adjust the parameter of this congestion control algorithm, we cast the problem as multi-agent reinforcement learning in which each charging point is an independent agent that learns this parameter using an off-policy actor-critic deep reinforcement learning algorithm. Simulation results on a test distribution network with 33 primary distribution nodes, 1760 low-voltage end nodes, and 500 electric vehicles corroborate that the proposed algorithm tracks the available capacity of the network in real time, prevents transformer overloading and voltage limit violations over an extended period, and outperforms other decentralized feedback control algorithms proposed in the literature. These results also verify that our control method can adapt to changes in the distribution network such as transformer tap changes and feeder reconfiguration.
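As a rough illustration of the decentralized structure summarized above, and not the paper's implementation, the sketch below shows a single charging-point agent that observes a local congestion signal and nudges its own congestion-control parameter. A toy actor-critic with linear function approximation stands in for the off-policy deep reinforcement learning algorithm; the class name, reward shaping, and the 0.6 congestion threshold are hypothetical choices made for this example.

```python
# Illustrative sketch only: each charging point acts as an independent agent
# that adjusts its congestion-control parameter from a locally observed
# congestion signal. A simple actor-critic with linear function approximation
# is used here in place of the paper's off-policy deep RL learner.
import numpy as np

class ChargingPointAgent:
    def __init__(self, lr_actor=0.01, lr_critic=0.05, gamma=0.95):
        self.theta = np.zeros(2)   # actor weights: mean of a Gaussian policy over parameter adjustments
        self.w = np.zeros(2)       # critic weights: linear state-value estimate
        self.lr_a, self.lr_c, self.gamma = lr_actor, lr_critic, gamma
        self.sigma = 0.1           # fixed exploration noise

    def features(self, congestion_signal):
        # State features: the local congestion signal plus a bias term
        return np.array([congestion_signal, 1.0])

    def act(self, congestion_signal):
        # Sample an adjustment to the charging point's control parameter
        x = self.features(congestion_signal)
        mu = self.theta @ x
        return float(np.random.normal(mu, self.sigma))

    def update(self, signal, action, reward, next_signal):
        # One actor-critic step driven by the temporal-difference error
        x, x_next = self.features(signal), self.features(next_signal)
        td_error = reward + self.gamma * (self.w @ x_next) - (self.w @ x)
        self.w += self.lr_c * td_error * x
        mu = self.theta @ x
        # Policy-gradient step for the Gaussian actor
        self.theta += self.lr_a * td_error * (action - mu) / self.sigma**2 * x

# Toy usage: reward favors a higher charging rate while penalizing congestion.
agent = ChargingPointAgent()
param = 0.5                                                     # hypothetical control parameter
for step in range(1000):
    signal = max(0.0, param - 0.6) + 0.05 * np.random.rand()    # stand-in congestion signal
    adj = agent.act(signal)
    new_param = float(np.clip(param + adj, 0.0, 1.0))
    new_signal = max(0.0, new_param - 0.6)
    reward = new_param - 5.0 * new_signal                       # charge fast, avoid congestion
    agent.update(signal, adj, reward, new_signal)
    param = new_param
```

In this toy loop the learned parameter settles near the point where the congestion signal vanishes, which mirrors, in miniature, how the proposed decentralized controllers track the network's available capacity without any central coordinator.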
