Abstract
Reinforcement learning-based (RL-based) energy management strategies (EMSs) are considered a promising solution for the energy management of electric vehicles with multiple power sources. Research on and applications of reinforcement learning and deep reinforcement learning in energy management are emerging. However, previous studies have not systematically examined the essential elements of RL-based EMS. This paper presents a performance analysis of RL-based EMS in a Plug-in Hybrid Electric Vehicle (PHEV) and a Fuel Cell Electric Vehicle (FCEV). The analysis covers four aspects: algorithm, perception and decision granularity, hyperparameters, and reward function. The results show that the off-policy algorithm develops a more fuel-efficient solution over the complete driving cycle than the other algorithms. Refining the perception and decision granularity reduces the frequency of tabular policy updates but yields a better balance between battery power and fuel consumption. Setting a high initial State of Charge (SOC) in training effectively improves the performance of the RL-based EMS. Constructing an equivalent energy loss reward function from the instantaneous SOC variation should be approached with caution: it is highly sensitive to its parameters and more likely to violate the SOC constraints. In contrast, an equivalent energy reward function based on the overall SOC variation is a safer alternative.
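To make the contrast between the two reward formulations concrete, the sketch below shows one plausible way to write them as per-step rewards. The function names, the variables (fuel_rate, soc_t, soc_prev, soc_init), and the linear equivalence factor alpha are illustrative assumptions, not the paper's exact definitions.

```python
def reward_instantaneous(fuel_rate, soc_t, soc_prev, alpha):
    """Equivalent energy loss based on instantaneous SOC variation.

    Penalizes fuel use plus an alpha-weighted step-wise SOC drop.
    Per the abstract, this form is highly sensitive to alpha and
    more prone to violating the SOC constraints.
    """
    return -(fuel_rate + alpha * (soc_prev - soc_t))


def reward_overall(fuel_rate, soc_t, soc_init, alpha, terminal=False):
    """Equivalent energy based on overall SOC variation.

    Charges the SOC deviation once, against the initial SOC, at the
    end of the driving cycle -- the safer alternative per the abstract.
    """
    r = -fuel_rate
    if terminal:
        r -= alpha * (soc_init - soc_t)
    return r
```

Under this reading, the instantaneous form injects the SOC penalty into every step, so a poorly chosen alpha distorts every update, whereas the overall form only corrects for the net SOC drift over the whole cycle.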