Abstract

Automated self-driving vehicles allow not only improved energy saving but also better traffic flow. In particular, with the rapid advance of autonomous driving technology, efficient energy consumption is becoming progressively important for fully electric vehicles (EVs) because of their limited battery capacity and power. This article proposes a hybrid approach combining Deep Reinforcement Learning (DRL) and Model Predictive Control (MPC) to improve the energy economy of EVs. In this study, the MPC algorithm solves the optimization problem of EV speed control to minimize energy consumption over a receding horizon, and the cost value of the horizon is fed into the DRL networks as the observed state. The terminal cost of a state is thus learned by nearest neighbors, and the terminal condition is constrained to a previously observed state. The proposed scheme was tested in the high-fidelity vehicle simulator CarSim, with the remaining battery power sensed in real time while tracking the given track. Two algorithms were quantitatively explored and compared in terms of battery energy saving. The simulation results show that the eco-driving strategy for EV energy saving can be applied to on-road driving in real time, with average energy savings of about 1% and 17% under a simple uphill climb and under multiple ascents and descents, respectively. These results may serve as guidance and a significant reference for energy-efficient planning in future autonomous, intelligent eco-driving of EVs.
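The receding-horizon idea summarized above can be sketched as follows. This is a purely illustrative toy, not the paper's implementation: an MPC loop enumerates candidate accelerations, scores each by stage energy plus a speed-tracking penalty, and adds a terminal cost estimated by nearest-neighbor lookup over previously observed (state, cost) pairs, standing in for the DRL-learned value. The vehicle model, the cost weights, and all parameters below are assumptions made for illustration.

```python
import math

def energy_step(v, a, grade=0.0, dt=1.0, mass=1500.0):
    """Crude one-step traction energy (J): aero drag + rolling
    resistance + grade + inertial force, clipped at zero (no regen)."""
    drag = 0.5 * 1.2 * 0.3 * 2.2 * v * v      # 0.5 * rho * Cd * A * v^2
    roll = 0.01 * mass * 9.81                 # rolling-resistance force
    slope = mass * 9.81 * math.sin(grade)     # grade force
    force = drag + roll + slope + mass * a
    return max(force * v * dt, 0.0)

def terminal_cost(v, memory, k=3):
    """k-nearest-neighbor estimate of the cost-to-go from stored
    (speed, cost) pairs observed in earlier horizons."""
    if not memory:
        return 0.0
    nearest = sorted(memory, key=lambda m: abs(m[0] - v))[:k]
    return sum(c for _, c in nearest) / len(nearest)

def mpc_step(v, memory, horizon=5, accels=(-1.0, 0.0, 1.0),
             v_ref=15.0, dt=1.0):
    """Pick the constant acceleration that minimizes horizon cost
    (stage energy + tracking penalty) plus the learned terminal cost."""
    best_a, best_cost = 0.0, float("inf")
    for a in accels:
        cost, vi = 0.0, v
        for _ in range(horizon):
            cost += energy_step(vi, a, dt=dt) + 1000.0 * (vi - v_ref) ** 2
            vi = max(vi + a * dt, 0.0)
        cost += terminal_cost(vi, memory)
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a, best_cost

# Closed loop: apply the first action of each horizon and store the
# (state, horizon cost) pair so later terminal-cost queries can reuse
# previously observed states, as the abstract describes.
memory, v = [], 5.0
for _ in range(30):
    a, cost = mpc_step(v, memory)
    memory.append((v, cost))
    v = max(v + a * 1.0, 0.0)
```

With these made-up weights the closed-loop speed rises from its initial value and settles below the reference, at the point where the tracking penalty no longer outweighs the traction energy of accelerating further; a real controller would tune this trade-off and learn the terminal value with a trained network rather than a plain nearest-neighbor table.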
