Abstract

Deep reinforcement learning (DRL) based car-following control (CFC) models are widely applied to the longitudinal motion control of automated vehicles because they can self-learn an optimal control policy. However, DRL algorithms easily produce unsafe commands and exhibit low robustness, especially in complex car-following scenarios. To improve the DRL-based CFC model, this paper combines a deep deterministic policy gradient (DDPG) based CFC model with a deep optical flow estimation (DOFE) based CFC model that compensates for the shortcomings of the DDPG-based one; the combined model is denoted the cooperative car-following model (DDPGoF). The DDPG-based CFC model utilizes prioritized experience replay, which accelerates learning. Meanwhile, the proposed DOFE-based CFC model employs the recurrent all-pairs field transforms (RAFT) algorithm and EfficientNet to perceive the motion of surrounding vehicles, motorcycles, and other road users. Real vehicle driving data sets are used to calibrate and validate the proposed DDPGoF-based CFC model, and several assessment criteria are established to evaluate its overall performance. The results show that the DDPGoF-based CFC model is superior to the DDPG-based one in avoiding crashes and in improving car-following stability, riding comfort, and the fuel economy of hybrid electric vehicles (HEVs).
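
To illustrate the prioritized experience replay mechanism the abstract attributes to the DDPG-based CFC model, the sketch below implements proportional prioritization (Schaul et al., 2016) in plain NumPy. This is a minimal sketch, not the authors' implementation: the class name, hyperparameters (alpha, beta, eps), and buffer layout are assumptions for illustration.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch).

    Transitions are sampled with probability p_i^alpha / sum_j p_j^alpha,
    and importance-sampling weights (N * P(i))^-beta correct the sampling bias.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data = [None] * capacity
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos, self.size = 0, 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_p = self.priorities[: self.size].max() if self.size else 1.0
        self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        # Sample indices in proportion to priority^alpha.
        probs = self.priorities[: self.size] ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(self.size, batch_size, p=probs)
        # Importance-sampling weights, normalized for stability.
        weights = (self.size * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Priority is |TD error| plus a small epsilon so every transition
        # retains a nonzero chance of being revisited.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```

In a DDPG training loop, `sample` would supply the minibatch and weights for the critic update, and `update_priorities` would be called with the freshly computed TD errors after each gradient step, so that high-error transitions are replayed more often.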
