Abstract

This study aims to determine the optimal deployment plan for electric vehicle (EV) fast charging stations in a transportation network with a limited budget. The objective of the deployment problem is to maximize the quality of service (QoS) perceived by EV customers with respect to both waiting time and range anxiety. As EV market penetration grows rapidly, the dimension of the deployment problem grows with it, and state-of-the-art algorithms based on mathematical programming cannot handle such high-dimensional optimization problems adequately. Unlike previous studies, we make the first attempt to formulate the fast charging station deployment problem (FCSDP) as a finite discrete Markov decision process (MDP) within a novel reinforcement learning (RL) framework to alleviate the curse of dimensionality. Since creating a supervised training dataset is impractical due to the high computational complexity of the FCSDP, we propose a recurrent neural network (RNN) with an attention mechanism that learns the model parameters and determines the optimal policy in a completely unsupervised manner. Finally, numerical experiments are conducted on multiple problem sizes to evaluate the performance of the RNN-based RL framework. Simulation results show that the proposed approach outperforms the competing algorithms in terms of both solution quality and computation time.
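To make the abstract's approach concrete, the sketch below illustrates one plausible way such an attention-based RNN policy could select station sites under a budget and be trained without labels via a REINFORCE-style policy gradient. It is an illustrative assumption, not the authors' implementation: the class `StationPicker`, the function `reinforce_step`, and the `qos_reward` callback are hypothetical names introduced here, and the QoS signal (waiting time / range anxiety) is treated as a black-box simulator.

```python
# Hypothetical sketch (not the paper's code): a pointer-style RNN decoder with
# attention that sequentially picks candidate charging-station sites, trained
# unsupervised with a REINFORCE-style policy gradient on a simulated QoS reward.
import torch
import torch.nn as nn


class StationPicker(nn.Module):
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, sites, budget):
        # sites: (n_sites, feat_dim) features of candidate locations (assumed input)
        emb = self.embed(sites)                       # (n_sites, hidden)
        h = emb.mean(dim=0)                           # initial decoder state
        chosen, log_probs = [], []
        mask = torch.zeros(sites.size(0), dtype=torch.bool)
        for _ in range(budget):                       # budget = number of stations to place
            scores = emb @ self.attn(h)               # attention scores over candidate sites
            scores = scores.masked_fill(mask, float("-inf"))
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()
            log_probs.append(dist.log_prob(idx))
            chosen.append(idx.item())
            mask[idx] = True                          # never pick the same site twice
            h = self.rnn(emb[idx], h)                 # update decoder state with the choice
        return chosen, torch.stack(log_probs).sum()


def reinforce_step(model, optimizer, sites, budget, qos_reward, baseline=0.0):
    """One unsupervised policy-gradient update; qos_reward(chosen) stands in for
    the simulated QoS signal (waiting time and range anxiety) of a deployment."""
    chosen, log_prob = model(sites, budget)
    reward = qos_reward(chosen)
    loss = -(reward - baseline) * log_prob            # REINFORCE with a constant baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return chosen, reward
```

Under these assumptions, the masked attention over candidate sites plays the role of the policy over MDP actions, and the per-episode QoS reward replaces any supervised target, matching the abstract's claim that no labeled training set is required.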
