Abstract

Electric vehicles (EVs) have become one of the most critical components of the smart grid with the application of Internet-of-Things (IoT) technologies. Real-time charging control is pivotal to the efficient operation of EVs, but its performance is limited by the uncertainty of the environment. Moreover, it is challenging to determine a charging control strategy that optimizes multiple objectives simultaneously. In this article, we formulate EV charging control as a Markov decision process (MDP) by constructing its state, action, transition function, and reward. We then propose a deep-reinforcement-learning-based approach, charging control deep deterministic policy gradient (CDDPG), to learn an optimal charging control strategy that satisfies the user's battery-energy requirement while minimizing the user's charging expense. A long short-term memory (LSTM) network extracts information from past energy prices to inform the current charging decision. Moreover, Gaussian noise is added to the output of the actor network to prevent the agent from converging to a suboptimal strategy. In addition, we address the limitation of sparse rewards by using two replay buffers: one stores rewards collected during the charging phase, and the other stores rewards received after charging is completed. The simulation results show that the CDDPG-based approach outperforms the deep-Q-learning (DQL) and deep-deterministic-policy-gradient (DDPG) approaches in satisfying the user's battery-energy requirement and reducing the charging cost.
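The Gaussian exploration noise and the two-replay-buffer mechanism described above can be sketched as follows. This is a minimal illustration in Python/NumPy, not the paper's implementation; the names (`DualReplayBuffer`, `explore`, `sigma`, the half-and-half sampling split) are illustrative assumptions.

```python
import random
from collections import deque

import numpy as np


class DualReplayBuffer:
    """Two buffers, as in the sparse-reward scheme above: one holds
    transitions collected during the charging phase, the other holds
    terminal transitions recorded after charging completes."""

    def __init__(self, capacity=10_000):
        self.charging = deque(maxlen=capacity)  # in-charging transitions
        self.terminal = deque(maxlen=capacity)  # post-charging transitions

    def add(self, transition, done):
        # Route the transition by whether the charging episode has ended.
        (self.terminal if done else self.charging).append(transition)

    def sample(self, batch_size):
        # Draw roughly half the batch from each buffer so the rare
        # terminal rewards are replayed often enough to shape learning
        # (the 50/50 split here is an assumed choice, not the paper's).
        half = batch_size // 2
        batch = random.sample(self.charging, min(half, len(self.charging)))
        batch += random.sample(self.terminal,
                               min(batch_size - half, len(self.terminal)))
        return batch


def explore(action, sigma=0.1, lo=-1.0, hi=1.0):
    """Add zero-mean Gaussian noise to the actor's output and clip the
    result to the valid charging-action range [lo, hi]."""
    return float(np.clip(action + np.random.normal(0.0, sigma), lo, hi))
```

A typical training loop would call `explore` on the actor's output to pick a charging action, store the resulting transition via `add`, and draw mixed minibatches via `sample` when updating the critic.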
