Abstract

While deep reinforcement learning (DRL)-based energy management strategies (EMSs) have shown potential for optimizing energy utilization in recent years, challenges such as convergence difficulties and suboptimal control persist. In this research, a novel DRL algorithm, an improved soft actor-critic (ISAC) algorithm, is applied to the EMS of a heavy-duty hybrid electric vehicle (HDHEV) with dual auxiliary power units (APUs), in which prioritized experience replay (PER), emphasizing recent experience (ERE), and Munchausen reinforcement learning (MRL) are adopted to improve convergence performance and HDHEV fuel economy. Simultaneously, a bus voltage calculation model suitable for dual APUs is proposed and validated with real-world data to ensure the precision of the HDHEV model. Results indicate that the proposed EMS reduces HDHEV fuel consumption by 4.59% and 2.50% compared to deep deterministic policy gradient (DDPG)- and twin delayed deep deterministic policy gradient (TD3)-based EMSs, respectively, narrowing the gap to the dynamic programming-based EMS to 7.94%. The proposed EMS also exhibits superior training performance, with a 91.28% increase in convergence speed over the other DRL-based EMSs. Furthermore, ablation experiments validate the effectiveness of each method incorporated into the SAC algorithm, further demonstrating the superiority of the proposed EMS.
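
For context on the three SAC modifications named above, the sketch below gives their standard formulations from the DRL literature (PER, ERE, and Munchausen RL), shown as they typically enter the SAC critic update. This is an illustrative summary, not necessarily the exact variant used in the paper; the symbols (alpha_m, tau, eta, omega, beta) are illustrative and may differ from the paper's notation.

% Munchausen RL: augment the reward with a clipped, scaled log-policy term.
\tilde{r}_t = r_t + \alpha_m \left[ \tau \ln \pi_\theta(a_t \mid s_t) \right]_{l_0}^{0}

% Resulting soft (clipped double-Q) critic target in SAC:
y_t = \tilde{r}_t + \gamma \Big( \min_{i=1,2} Q_{\bar{\phi}_i}(s_{t+1}, a_{t+1})
      - \tau \ln \pi_\theta(a_{t+1} \mid s_{t+1}) \Big),
\quad a_{t+1} \sim \pi_\theta(\cdot \mid s_{t+1})

% ERE: the k-th of K minibatches in an update phase is drawn only from the
% most recent c_k transitions of a replay buffer holding N transitions:
c_k = \max\!\left( N \cdot \eta^{\, k \cdot 1000 / K},\; c_{\min} \right), \quad \eta \in (0, 1)

% PER: transitions are sampled in proportion to TD-error magnitude, with
% importance-sampling weights correcting the induced bias:
P(i) = \frac{p_i^{\omega}}{\sum_j p_j^{\omega}}, \qquad
p_i = |\delta_i| + \epsilon, \qquad
w_i = \big( N \cdot P(i) \big)^{-\beta}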
