Abstract

This paper addresses the individual EV charging scheduling problem under dynamic user behavior and electricity prices. The uncertainty of EV charging demand is characterized by several factors reflecting realistic scenarios, including the driver's experience, charging preference, and charging location. An aggregate anxiety concept is introduced to capture both the driver's anxiety about the EV's range and uncertain events, and a mathematical model is provided to quantify this anxiety. The problem is formulated as a Markov Decision Process (MDP) with an unknown state transition function, with the objective of finding optimal sequential charging decisions that balance the charging cost against the driver's anxiety. A model-free deep reinforcement learning (DRL) approach is developed to learn the optimal charging control strategy by interacting with the dynamic environment. The learning method is designed within the continuous soft actor-critic (SAC) framework and comprises a supervised learning (SL) stage and a reinforcement learning (RL) stage. Finally, simulation studies verify the effectiveness of the proposed approach under dynamic user behaviors at different charging locations.
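To make the MDP formulation concrete, the following is a minimal sketch of an EV charging environment whose reward trades off electricity cost against a range-anxiety penalty. All class names, tariff values, the quadratic anxiety penalty, and parameters (e.g. `comfort_soc`, `anxiety_weight`) are illustrative assumptions, not the paper's actual model.

```python
class EVChargingEnv:
    """Hypothetical EV charging MDP sketch (all names and numbers are assumptions).

    State: (hour, state of charge).  Action: charging power in kW for a 1-hour slot.
    Reward: -(electricity cost + weighted anxiety penalty), where anxiety grows
    quadratically as the SoC falls below a driver-specific comfort level.
    """

    def __init__(self, capacity_kwh=60.0, comfort_soc=0.6, anxiety_weight=2.0):
        self.capacity = capacity_kwh
        self.comfort_soc = comfort_soc
        self.anxiety_weight = anxiety_weight
        self.reset()

    def reset(self):
        self.hour = 0
        self.soc = 0.3  # initial state of charge (fraction of capacity)
        return (self.hour, self.soc)

    def price(self, hour):
        # Illustrative time-of-use tariff in $/kWh (peak 17:00-21:00).
        return 0.30 if 17 <= hour < 21 else 0.12

    def anxiety(self, soc):
        # Quadratic penalty below the driver's comfort SoC, zero above it.
        return max(0.0, self.comfort_soc - soc) ** 2

    def step(self, power_kw):
        # Energy delivered is capped by the remaining battery headroom.
        energy = min(power_kw, (1.0 - self.soc) * self.capacity)
        cost = energy * self.price(self.hour)
        self.soc = min(1.0, self.soc + energy / self.capacity)
        reward = -(cost + self.anxiety_weight * self.anxiety(self.soc))
        self.hour = (self.hour + 1) % 24
        done = self.hour == 0  # one-day episode
        return (self.hour, self.soc), reward, done
```

A learning agent (e.g. SAC, as in the paper) would interact with `step` to estimate the unknown transition dynamics implicitly; here the policy could simply be evaluated by rolling out 24 hourly charging decisions and summing rewards.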
