Abstract

In this article, we propose learning and control strategies for a semiactive suspension system of a full car using soft actor-critic (SAC) models on real roads, where road profiles with widely varying disturbance power exist (e.g., speed bumps and general roads). To this end, we propose a technique that enables deep reinforcement learning to cover different domains with largely different reward functions. The concept was first realized in a simulation environment. Our switching learning system continuously identifies two different road disturbance profiles in real time so that the appropriately designed SAC model can be learned and applied accordingly. The results of the proposed switching SAC algorithm were compared against those of advanced and conventional benchmark suspension systems; the proposed algorithm achieved smaller root-mean-square values of the $z$-directional acceleration and pitch at the center of the body mass. Finally, we also present our SAC training system successfully implemented in a real car on real roads. The trained SAC model outperforms conventional controllers, reducing the $z$-directional acceleration and pitch in agreement with the simulation results; these quantities are closely related to ride comfort and vehicle maneuverability.
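
The abstract describes a switching scheme that identifies the current road disturbance profile online and applies the SAC policy trained for that domain. The following is a minimal sketch of that switching idea, not the authors' implementation: the policy objects, the observation layout, the use of mean-squared vertical acceleration as the disturbance-power estimate, and the threshold value are all illustrative assumptions.

```python
# Minimal sketch of profile-based policy switching (illustrative only).
import numpy as np
from collections import deque


class RoadProfileSwitcher:
    """Selects which of two SAC-style policies to apply based on the
    estimated power of the recent road disturbance."""

    def __init__(self, policy_bump, policy_road, window=100, power_threshold=1.0):
        # policy_bump / policy_road: callables mapping observation -> action
        # (stand-ins for pre-trained SAC actors in this sketch).
        self.policies = {"bump": policy_bump, "road": policy_road}
        self.accel_buffer = deque(maxlen=window)   # recent z-acceleration samples
        self.power_threshold = power_threshold     # assumed tuning parameter

    def act(self, observation, z_accel):
        self.accel_buffer.append(z_accel)
        # Rough disturbance-power estimate: mean squared vertical acceleration
        # over the sliding window.
        power = float(np.mean(np.square(np.asarray(self.accel_buffer))))
        profile = "bump" if power > self.power_threshold else "road"
        return self.policies[profile](observation), profile


# Usage with placeholder policies (real SAC actors would replace the lambdas):
if __name__ == "__main__":
    switcher = RoadProfileSwitcher(
        policy_bump=lambda obs: np.zeros(4),   # placeholder damper commands
        policy_road=lambda obs: np.zeros(4),
        power_threshold=2.5,
    )
    action, active_profile = switcher.act(observation=np.zeros(12), z_accel=0.3)
    print(active_profile, action)
```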
