Abstract
Grid security is threatened by the uncontrolled access of large-scale electric vehicle (EV) fleets to the grid. The EV charging and discharging scheduling problem is formulated as a Markov decision process (MDP) to develop an efficient scheduling strategy, and a model-free deep reinforcement learning (DRL) method is proposed to solve it. The proposed method aims to increase EV charging and discharging profits while reducing drivers' electricity anxiety and ensuring grid security. Drivers' electricity anxiety is modeled with fuzzy mathematical theory, and the effects of the EV's current battery level and remaining charging time on that anxiety are analyzed. Variable electricity prices are computed from the real-time residential load. A dynamic charging environment is constructed that accounts for the stochasticity of electricity prices, driver behavior, and residential load. A soft actor-critic (SAC) framework is used to train the agent, which learns optimal charging and discharging scheduling strategies by interacting with this dynamic environment. Finally, simulations with real-world data verify that the proposed approach reduces drivers' charging costs and electricity anxiety while avoiding transformer overload.
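The scheduling problem described above can be sketched as a small MDP environment. The sketch below is purely illustrative: the battery capacity, price model, fuzzy-anxiety membership, and penalty weights are hypothetical placeholders, not values from the paper, and the full method would train a SAC agent against such an environment rather than apply random actions.

```python
import random

class EVChargingEnv:
    """Illustrative MDP for EV charging/discharging scheduling.

    All parameter values (battery size, price model, anxiety membership,
    penalty weights) are hypothetical placeholders, not from the paper.
    """

    def __init__(self, capacity_kwh=60.0, horizon=24, transformer_limit_kw=15.0):
        self.capacity = capacity_kwh        # battery capacity (assumed)
        self.horizon = horizon              # remaining charging time, in steps
        self.limit = transformer_limit_kw   # transformer rating (assumed)
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.3      # state of charge at arrival (assumed)
        self.load = 5.0     # residential load in kW (assumed)
        return self._state()

    def _price(self):
        # Variable price driven by a stochastic residential load (assumption):
        # higher real-time load -> higher electricity price.
        self.load = 5.0 + 4.0 * random.random()
        return 0.05 + 0.01 * self.load

    def _anxiety(self):
        # Fuzzy-style membership stand-in: anxiety grows as the state of
        # charge falls and the remaining charging time shrinks.
        urgency = 1.0 - (self.horizon - self.t) / self.horizon
        return max(0.0, 0.8 - self.soc) * urgency

    def step(self, power_kw):
        """power_kw > 0 charges the EV; power_kw < 0 discharges to the grid."""
        price = self._price()
        self.soc = min(1.0, max(0.0, self.soc + power_kw / self.capacity))
        cost = price * power_kw                      # pay to charge, earn to discharge
        overload = max(0.0, self.load + power_kw - self.limit)
        # Reward trades off cost, driver anxiety, and transformer overload.
        reward = -cost - self._anxiety() - 10.0 * overload
        self.t += 1
        done = self.t >= self.horizon
        return self._state(), reward, done

    def _state(self):
        return (self.soc, self.t)
```

A (non-learning) agent would interact with this environment in the usual MDP loop, e.g. `state, reward, done = env.step(action_kw)` per time step; in the paper's setting, a SAC policy would select `action_kw` to maximize the cumulative reward.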