Abstract

Reinforcement learning (RL) is a promising approach for hybrid electric vehicle (HEV) energy management strategies (EMS). However, conventional deep reinforcement learning (DRL) suffers from inefficient and unstable random exploration of the action space, motivating the modeling of advanced driver experience knowledge and its combination with DRL. Herein, an advanced driver experience (DE) model of traffic congestion level and power matching is constructed based on fuzzy clustering and embedded into DRL. The results show that DE embedding improves the training convergence efficiency of DRL on a power‐split HEV model, accelerating convergence of the deep deterministic policy gradient (DDPG) algorithm by 46.2%. Because the DE model better adjusts engine operating points and vehicle drive modes across various driving cycles, it enables DDPG to improve fuel economy by ≈6.29% while maintaining the terminal state of charge. This study aims to improve the efficiency of action-space exploration and optimize the DRL learning strategy, thereby providing a theoretical basis for the design and development of EMS.
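To make the embedding idea concrete, the sketch below illustrates one way a fuzzy driver-experience prior could guide DDPG exploration. This is a minimal hypothetical illustration, not the paper's implementation: the triangular membership functions, the speed thresholds, the `expert_action` power-matching rule, and the linear decay schedule in `guided_action` are all assumed for the example.

```python
import numpy as np

def fuzzy_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def congestion_level(avg_speed_kmh):
    """Fuzzy congestion grade in [0, 1]: 1 = heavy congestion, 0 = free flow.
    Speed breakpoints are illustrative assumptions, not from the paper."""
    heavy = fuzzy_membership(avg_speed_kmh, -1.0, 0.0, 30.0)
    medium = fuzzy_membership(avg_speed_kmh, 15.0, 40.0, 70.0)
    free = fuzzy_membership(avg_speed_kmh, 50.0, 90.0, 121.0)
    total = heavy + medium + free + 1e-9
    # Defuzzify with grade weights heavy=1.0, medium=0.5, free=0.0.
    return (1.0 * heavy + 0.5 * medium) / total

def expert_action(avg_speed_kmh, power_demand_kw, engine_max_kw=60.0):
    """Driver-experience prior: in congestion favor electric drive (low engine
    power); in free flow let the engine cover more of the demand."""
    c = congestion_level(avg_speed_kmh)
    engine_kw = (1.0 - c) * min(power_demand_kw, engine_max_kw)
    return float(np.clip(engine_kw / engine_max_kw, 0.0, 1.0))

def guided_action(policy_action, avg_speed_kmh, power_demand_kw, episode,
                  guide_episodes=200):
    """Blend the DDPG actor output with the expert prior; the expert weight
    decays linearly so the learned policy takes over after guide_episodes."""
    w = max(0.0, 1.0 - episode / guide_episodes)
    a_exp = expert_action(avg_speed_kmh, power_demand_kw)
    return w * a_exp + (1.0 - w) * policy_action
```

Early in training the expert prior dominates, so exploration starts from power splits a human driver would choose; as the weight decays, DDPG is free to refine beyond the rule-based prior. This decaying-guidance scheme is one common way to inject expert knowledge into off-policy actor-critic exploration.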
