Abstract

Distributed drive electric vehicles are regarded as a promising means of transportation owing to their convenience and maneuverability. However, allocating the torque demand to the four wheels reasonably and efficiently is a challenging task. In this paper, a deep reinforcement learning-based torque distribution strategy is proposed to guarantee active safety and energy conservation. The torque distribution task is explicitly formulated as a Markov decision process, in which the vehicle's dynamic characteristics can be approximated. Actor-critic networks are used to approximate the action-value and policy functions for better control performance. To guarantee continuous torque output and further stabilize the learning process, the twin delayed deep deterministic policy gradient (TD3) algorithm is deployed. Motor efficiency is incorporated into the cumulative reward to reduce energy consumption. The results of a double lane change test demonstrate that the proposed strategy yields better handling stability. In addition, it improves the vehicle's transient response and eliminates the static deviation in the step steering maneuver test. For typical steering maneuvers, the proposed direct torque distribution strategy significantly improves the average motor efficiency and reduces energy loss by 5.25%–10.51%. Finally, a hardware-in-the-loop experiment was implemented to validate the real-time executability of the proposed torque distribution strategy. This study provides a foundation for the practical application of intelligent safety control algorithms in future vehicles.
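The abstract names the twin delayed deep deterministic policy gradient (TD3) algorithm as the learning backbone. As a point of reference, the two mechanisms that distinguish TD3 from plain DDPG — taking the minimum over twin critics to curb overestimation, and smoothing the target action with clipped noise — can be sketched as below. This is a minimal illustrative sketch of the general TD3 update rules, not the authors' implementation; all function names are hypothetical.

```python
import random


def td3_target(reward, next_q1, next_q2, gamma=0.99, done=False):
    """Bellman target in TD3: take the minimum of the twin critics'
    estimates of the next state-action value, which curbs the
    overestimation bias of a single learned critic."""
    if done:
        return reward
    return reward + gamma * min(next_q1, next_q2)


def clipped_noise(sigma=0.2, clip=0.5):
    """Target policy smoothing: Gaussian exploration noise clipped
    to [-clip, clip], added to the target policy's action before the
    twin critics evaluate it."""
    return max(-clip, min(clip, random.gauss(0.0, sigma)))


def should_update_actor(step, delay=2):
    """Delayed policy updates: the actor (and target networks) are
    updated only every `delay` critic updates."""
    return step % delay == 0
```

In a torque-distribution setting, the action would be the continuous four-wheel torque vector and the reward would combine handling-stability and motor-efficiency terms, as the abstract describes; the update rules themselves are unchanged.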
