Abstract

Ride comfort plays an important role in determining the public acceptance of autonomous vehicles (AVs). Many factors, such as road profile, driving speed, and suspension system, influence the ride comfort of AVs. This study proposes a hierarchical framework for improving ride comfort by integrating speed planning and suspension control in a vehicle‐to‐everything environment. Based on safe, comfortable, and efficient speed planning via dynamic programming, a deep reinforcement learning‐based suspension control is proposed to adapt to changing pavement conditions. Specifically, a deep deterministic policy gradient with external knowledge (EK‐DDPG) algorithm is designed for the efficient self‐adaptation of suspension control strategies. The external knowledge of action selection and value estimation from other AVs is combined into the loss functions of the DDPG algorithm. In numerical experiments, real‐world pavements detected in 11 districts of Shanghai, China, are applied to verify the proposed method. Experimental results demonstrate that the EK‐DDPG‐based suspension control improves ride comfort on untrained rough pavements by 27.95% and 3.32%, compared to a model predictive control (MPC) baseline and a DDPG baseline, respectively. Meanwhile, the EK‐DDPG‐based suspension control improves computational efficiency by 22.97% compared to the MPC baseline, and performs at the same level as the DDPG baseline. This study provides a generalized and computationally efficient approach for improving the ride comfort of AVs.
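
The abstract only outlines the shape of the EK‐DDPG update. Below is a minimal, hypothetical PyTorch sketch of how externally shared action choices and value estimates from other AVs might be folded into DDPG's actor and critic losses; the network sizes, the weighting factors `lambda_a` and `lambda_q`, and the `ek_ddpg_losses` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a DDPG loss computation augmented with
# "external knowledge" terms, assumed here to be reference actions (ext_action)
# and value estimates (ext_q) shared by other vehicles.

import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ek_ddpg_losses(actor, critic, target_actor, target_critic, batch,
                   ext_action, ext_q, gamma=0.99, lambda_a=0.1, lambda_q=0.1):
    """Standard DDPG losses plus penalty terms pulling the learner toward the
    externally supplied actions and value estimates (weights are assumed)."""
    s, a, r, s_next, done = batch

    # Critic: TD target from target networks, plus an external-value penalty.
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * target_critic(s_next, target_actor(s_next))
    q = critic(s, a)
    critic_loss = nn.functional.mse_loss(q, q_target) \
        + lambda_q * nn.functional.mse_loss(q, ext_q)

    # Actor: maximize Q, plus an external-action imitation penalty.
    pi = actor(s)
    actor_loss = -critic(s, pi).mean() \
        + lambda_a * nn.functional.mse_loss(pi, ext_action)
    return actor_loss, critic_loss

# Usage with random placeholder data (state_dim=6, action_dim=1).
if __name__ == "__main__":
    torch.manual_seed(0)
    actor, critic = Actor(6, 1), Critic(6, 1)
    t_actor, t_critic = Actor(6, 1), Critic(6, 1)
    batch = (torch.randn(32, 6), torch.rand(32, 1) * 2 - 1,
             torch.randn(32, 1), torch.randn(32, 6), torch.zeros(32, 1))
    ext_action, ext_q = torch.rand(32, 1) * 2 - 1, torch.randn(32, 1)
    a_loss, c_loss = ek_ddpg_losses(actor, critic, t_actor, t_critic,
                                    batch, ext_action, ext_q)
    print(a_loss.item(), c_loss.item())
```

In this sketch the external terms act as regularizers: when other AVs' experience on similar pavements is informative, the penalties bias the actor and critic toward it, which is one plausible reading of how shared knowledge could speed adaptation to unseen road profiles.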
