Abstract

Fuel cell hybrid electric vehicles offer a promising solution for sustainable and environmentally friendly transportation, but they require efficient energy management strategies (EMSs) to optimize their fuel economy. However, designing an optimal learning-based EMS becomes challenging in the presence of limited training data. This paper presents a deep stochastic reinforcement learning approach to address this issue of epistemic uncertainty in a midsize fuel cell hybrid electric vehicle. The approach introduces a deep REINFORCE framework with a deep neural network baseline and entropy regularization to develop a stochastic EMS policy. The performance of the proposed approach is benchmarked against three EMSs: i) a state-of-the-art deep deterministic reinforcement learning technique, the Double Deep Q-Network (DDQN); ii) a Power Follower Controller (PFC); and iii) a Fuzzy Logic Controller (FLC). Using the New York City cycle as a validation drive cycle, the deep REINFORCE approach improves fuel economy by 7.68%, 13.53%, and 10% compared to DDQN, PFC, and FLC, respectively. Under a second validation cycle, the Amman cycle, it improves fuel economy by 5.31%, 9.78%, and 9.93% compared to DDQN, PFC, and FLC, respectively. Moreover, the training results show that the proposed algorithm reduces training time by 38% compared to the DDQN approach. The proposed deep REINFORCE-based EMS is superior not only in terms of fuel economy but also in handling epistemic uncertainty.
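To make the algorithmic ingredients concrete, the sketch below shows a minimal REINFORCE update with a neural-network baseline and entropy regularization, the combination the abstract describes. It assumes a generic discrete-action formulation; the EMS state and action definitions, network sizes, and hyperparameters (`hidden`, `gamma`, `entropy_coef`) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: REINFORCE with a neural-network baseline and entropy
# regularization. State/action dimensions and hyperparameters are
# illustrative assumptions, not the paper's reported setup.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Stochastic policy: maps a state to a categorical action distribution."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, s):
        return torch.distributions.Categorical(logits=self.net(s))


class BaselineNet(nn.Module):
    """State-value baseline used to reduce policy-gradient variance."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s):
        return self.net(s).squeeze(-1)


def reinforce_update(policy, baseline, opt_p, opt_b,
                     states, actions, rewards,
                     gamma=0.99, entropy_coef=0.01):
    """One policy-gradient step over a complete episode."""
    # Discounted Monte Carlo returns G_t, computed backwards.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(list(reversed(returns)), dtype=torch.float32)

    states = torch.as_tensor(states, dtype=torch.float32)
    actions = torch.as_tensor(actions)

    dist = policy(states)
    values = baseline(states)
    # Baseline subtraction; detached so the policy loss does not
    # backpropagate through the baseline network.
    advantages = returns - values.detach()

    # Policy loss: log-prob weighted by advantage, plus an entropy
    # bonus that keeps the policy stochastic during training.
    policy_loss = (-(dist.log_prob(actions) * advantages).mean()
                   - entropy_coef * dist.entropy().mean())
    opt_p.zero_grad()
    policy_loss.backward()
    opt_p.step()

    # Baseline regression toward the Monte Carlo returns.
    value_loss = nn.functional.mse_loss(values, returns)
    opt_b.zero_grad()
    value_loss.backward()
    opt_b.step()
```

In a hypothetical EMS setting, the state would encode quantities such as battery state of charge and power demand, and the discrete actions would select fuel cell power levels; the entropy term then prevents premature collapse to a deterministic policy when training data are limited.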
