Grid security is threatened by the uncontrolled, large-scale access of electric vehicles (EVs) to the grid. To develop an efficient charging and discharging scheduling strategy, the EV charging and discharging scheduling problem is formulated as a Markov decision process (MDP), and a model-free deep reinforcement learning (DRL) method is proposed to solve it. The proposed method aims to increase EV charging and discharging profits while reducing drivers' electricity anxiety and ensuring grid security. Drivers' electricity anxiety is modeled with fuzzy mathematical theory, and the effects of the EV's current battery level and remaining charging time on it are analyzed. Variable electricity pricing is calculated from the real-time residential load. A dynamic charging environment is constructed that accounts for the stochasticity of electricity prices, drivers' behavior, and residential load. A soft actor-critic (SAC) framework is used to train the agent, which learns the optimal charging and discharging scheduling strategy by interacting with the dynamic charging environment. Finally, simulations with real-world data verify that the proposed approach reduces drivers' charging costs and electricity anxiety while avoiding transformer overload.
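To make the MDP framing concrete, the following is a minimal, hypothetical sketch of such a charging environment. The state, action bounds, price schedule, and anxiety penalty are all illustrative assumptions, not the paper's actual model: the state is (state of charge, electricity price, hour), the action is a continuous charge/discharge rate in [-1, 1], and the reward trades off electricity cost against a crude anxiety proxy.

```python
import random

class EVChargingEnv:
    """Hypothetical EV charging/discharging MDP (illustrative only).

    State: (state of charge in [0, 1], current price, hour of day).
    Action: scalar in [-1, 1]; negative = discharge to grid, positive = charge.
    Reward: -(electricity cost) - (anxiety penalty for a low battery).
    """

    def __init__(self, capacity_kwh=60.0, max_rate_kw=7.0, horizon=24):
        self.capacity = capacity_kwh   # battery capacity (kWh)
        self.max_rate = max_rate_kw    # max charge/discharge power (kW)
        self.horizon = horizon         # episode length in hours
        self.reset()

    def _price(self, t):
        # Stand-in for variable pricing driven by residential load:
        # a fixed base price plus an evening-peak surcharge.
        return 0.10 + (0.25 if 17 <= t % 24 < 21 else 0.0)

    def _obs(self):
        return (self.soc, self._price(self.t), self.t)

    def reset(self):
        self.t = 0
        self.soc = random.uniform(0.2, 0.5)  # random initial state of charge
        return self._obs()

    def step(self, action):
        action = max(-1.0, min(1.0, action))       # clip to feasible rate
        energy = action * self.max_rate            # kWh over one hour
        self.soc = min(1.0, max(0.0, self.soc + energy / self.capacity))
        cost = self._price(self.t) * energy        # negative when discharging
        anxiety = max(0.0, 0.8 - self.soc) ** 2    # crude range-anxiety proxy
        reward = -cost - anxiety
        self.t += 1
        done = self.t >= self.horizon
        return self._obs(), reward, done
```

An SAC agent would then be trained against `step()`/`reset()` in the usual way; the sketch only shows the environment side of that interaction.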