Abstract

A real-time, data-driven electric vehicle (EV) routing optimization framework that minimizes energy consumption is proposed in this work. The proposed framework uses a Double Deep Q-Network (DDQN) to learn the optimal travel policy of the EV, modeled as an agent. The policy model is trained to estimate the agent's optimal action from the received reward signals and the Q-values that represent the feasible routing options. The agent's on-road energy requirement is assessed with a Markov Chain Model (MCM), in which each Markov unit step represents the average energy consumption, accounting for different driving patterns, the agent's surrounding environment, road conditions, and applicable restrictions. The framework offers an improved exploration strategy, continuous learning ability, and support for individual routing preferences. A real-time simulation is performed in a Python environment using real-life driving data from Google's API platform. Results for two geographically distinct drives show that the proposed framework reduced the energy the EV used to reach its intended destination by 5.89% and 11.82%, respectively, compared with the routes originally proposed by Google. Both drives started at 4:30 PM on April 25th, 2019, in Los Angeles, California, and Miami, Florida, toward EV charging stations located six miles from the respective starting locations.
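To make the learning rule concrete, below is a minimal sketch of the Double DQN target computation the abstract refers to, in which the online network selects the next action and the target network evaluates it. The names, sizes, and values used here (q_online, q_target, gamma, the reward scale) are illustrative assumptions, not details taken from the paper.

import numpy as np

# Illustrative Double DQN target rule. q_online and q_target stand in for
# the online and target Q-networks; here they are random lookup tables of
# shape (n_states, n_actions), where each action is one feasible routing
# option. All sizes and values are assumed for demonstration only.
rng = np.random.default_rng(0)
n_states, n_actions = 10, 4
q_online = rng.normal(size=(n_states, n_actions))
q_target = rng.normal(size=(n_states, n_actions))
gamma = 0.99  # discount factor (assumed)

def ddqn_target(reward, next_state, done):
    """Double DQN target: the online network selects the next action and
    the target network evaluates it, reducing Q-value overestimation."""
    if done:
        return reward
    best_action = int(np.argmax(q_online[next_state]))          # selection
    return reward + gamma * q_target[next_state, best_action]   # evaluation

# Example: with the reward taken as the negative energy consumed on a road
# segment, maximizing the return minimizes total energy to the destination.
y = ddqn_target(reward=-0.42, next_state=3, done=False)
print(f"training target for Q(s, a): {y:.3f}")

Decoupling action selection from action evaluation is what distinguishes Double DQN from the original DQN and is the main reason it produces more stable Q-value estimates.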
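Likewise, the sketch below shows one way a Markov Chain Model can yield the expected energy of a road segment when the unit step is the average per-step energy consumption. The driving states, transition matrix P, and energy vector e are hypothetical placeholders; the paper's actual states and parameters are not reproduced here.

import numpy as np

# Hypothetical Markov Chain energy model. States represent driving patterns
# (accelerating, cruising, braking); P and e are illustrative values only.
P = np.array([[0.6, 0.3, 0.1],   # accelerating -> ...
              [0.2, 0.7, 0.1],   # cruising     -> ...
              [0.3, 0.4, 0.3]])  # braking      -> ...
e = np.array([0.9, 0.5, 0.1])    # avg energy per unit step in each state

def expected_segment_energy(p0, n_steps):
    """Expected energy over a segment of n_steps Markov unit steps,
    starting from the driving-state distribution p0."""
    p, total = np.asarray(p0, dtype=float), 0.0
    for _ in range(n_steps):
        total += p @ e   # expected energy of the current unit step
        p = p @ P        # evolve the driving-state distribution
    return total

# Example: a segment of eight unit steps entered while accelerating.
print(expected_segment_energy([1.0, 0.0, 0.0], n_steps=8))

Summing such per-segment expectations along a candidate route is one way an energy-based reward signal could be constructed.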
