Abstract

In response to low-carbon requirements, large amounts of renewable energy sources (RESs) have been deployed in power systems; nevertheless, the intermittency of RESs increases system vulnerability and can even cause severe damage under extreme events. Electric vehicles (EVs), owing to their mobility and flexibility, can provide various ancillary services while enhancing system resilience. The distributed control of EVs under such scenarios in coupled power-transportation networks is a complex decision-making problem with substantial dynamics and uncertainties. To this end, a multiagent reinforcement learning method is proposed that computes discrete and continuous actions simultaneously, which aligns with the nature of EV routing and scheduling problems. Furthermore, the proposed method enhances learning stability and scalability with privacy preservation in the multiagent setting. Simulation results based on IEEE 6- and 33-bus power networks integrated with transportation systems validate its effectiveness in providing resilience and carbon-intensity services.
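The hybrid discrete-continuous action structure mentioned above can be illustrated with a toy parameterized-action policy: each EV agent picks a route (discrete) and a charging/discharging power set-point (continuous). This is a minimal sketch under assumed names and dimensions, not the paper's actual architecture; the linear weight matrices stand in for neural network layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class HybridPolicy:
    """Toy policy head for a parameterized (discrete + continuous) action
    space: a discrete route choice paired with a continuous power set-point.
    All names and dimensions here are illustrative assumptions."""

    def __init__(self, obs_dim, n_routes, power_limit_kw):
        # Random linear "layers" as placeholders for learned networks.
        self.W_route = rng.normal(size=(n_routes, obs_dim)) * 0.1
        self.W_power = rng.normal(size=(n_routes, obs_dim)) * 0.1
        self.power_limit_kw = power_limit_kw

    def act(self, obs):
        # Discrete head: categorical distribution over candidate routes.
        probs = softmax(self.W_route @ obs)
        route = rng.choice(len(probs), p=probs)
        # Continuous head: one power set-point per route, squashed by tanh
        # so it stays within the charger limit (negative = discharge to grid).
        power = self.power_limit_kw * np.tanh(self.W_power[route] @ obs)
        return route, power

policy = HybridPolicy(obs_dim=4, n_routes=3, power_limit_kw=50.0)
route, power = policy.act(np.array([0.2, -0.1, 0.5, 0.3]))
```

Sampling both heads jointly is what lets a single policy handle routing (where to go) and scheduling (how much to charge) in one decision, rather than treating them as separate control problems.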
