To address air pollution and reduce greenhouse gas (GHG) emissions, plug-in hybrid electric vehicles (PHEVs) have been developed to achieve higher fuel efficiency. The energy management system (EMS) is a critical component of a PHEV for achieving good fuel economy, and it remains a very active research area. Most existing EMS strategies either follow predefined rules that do not adapt to changing driving conditions, or rely heavily on accurate prediction of future traffic conditions. In recent years, deep learning algorithms have been successfully applied to many complex problems and have even outperformed humans in some tasks (e.g., playing chess), demonstrating the great potential of such methods for practical engineering problems. In this study, a deep reinforcement learning (Deep Q-network, or DQN) based PHEV energy management system is designed to autonomously learn the optimal fuel/electricity split from interactions between the vehicle and the traffic environment. It is a fully data-driven, self-learning model that does not rely on any prediction, predefined rules, or prior human knowledge. The experimental results show that the proposed model achieves 16.3% energy savings (with the designed PHEV simulation model) on a typical commute trip, compared with conventional binary control strategies. In addition, a Deep Q-network with a dueling structure (Dueling DQN) is implemented and compared with the single DQN, in particular with respect to the convergence rate during training.
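To make the control scheme concrete, the sketch below shows the core DQN loop for a fuel/electricity split in miniature. Everything here is an assumption for illustration: the state vector (battery SOC, speed, power demand), the discrete split levels, the toy environment dynamics, and the tiny NumPy Q-network are hypothetical stand-ins for the paper's actual vehicle simulation, and standard DQN refinements (replay minibatches, a separate target network, the dueling head) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5   # discrete engine-power split levels (0% .. 100% from engine)
STATE_DIM = 3   # hypothetical state: [battery SOC, vehicle speed, power demand]
HIDDEN, GAMMA, LR = 32, 0.95, 1e-3

# One-hidden-layer Q-network: Q(s) = W2 @ relu(W1 @ s + b1) + b2
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

def q_values(s):
    h = np.maximum(0.0, W1 @ s + b1)          # ReLU hidden layer
    return W2 @ h + b2, h

def step_env(s, a):
    """Toy surrogate environment: the engine split `a` burns fuel but helps
    sustain SOC; reward is negative total energy cost. Purely illustrative."""
    split = a / (N_ACTIONS - 1)               # fraction of demand from engine
    soc, speed, demand = s
    soc = float(np.clip(soc - (1 - split) * demand * 0.05 + split * 0.02, 0, 1))
    fuel_cost = split * demand
    elec_cost = (1 - split) * demand * (1.5 if soc < 0.2 else 0.5)
    s_next = np.array([soc, rng.uniform(0, 1), rng.uniform(0, 1)])
    return s_next, -(fuel_cost + elec_cost)

replay = []
s = np.array([0.8, 0.3, 0.5])
for t in range(2000):
    eps = max(0.05, 1.0 - t / 1000)           # decaying epsilon-greedy policy
    q, _ = q_values(s)
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q))
    s_next, r = step_env(s, a)
    replay.append((s, a, r, s_next))
    s = s_next

    # One SGD step on a sampled transition; TD target: r + gamma * max_a' Q(s', a')
    sb, ab, rb, snb = replay[rng.integers(len(replay))]
    qb, h = q_values(sb)
    td_err = qb[ab] - (rb + GAMMA * np.max(q_values(snb)[0]))
    gW2 = np.zeros_like(W2); gW2[ab] = td_err * h     # gradient for chosen action
    gb2 = np.zeros_like(b2); gb2[ab] = td_err
    dh = td_err * W2[ab] * (h > 0)                    # backprop through ReLU
    W2 -= LR * gW2; b2 -= LR * gb2
    W1 -= LR * np.outer(dh, sb); b1 -= LR * dh

print("greedy split index at high SOC:",
      int(np.argmax(q_values(np.array([0.9, 0.5, 0.5]))[0])))
```

The same loop, with a larger network and a dueling value/advantage decomposition of the Q-head, corresponds to the Dueling DQN variant compared in the study.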