Abstract
A well-controlled active suspension system can significantly improve ride comfort. Owing to its powerful feature-extraction and nonlinear-generalization capabilities, deep reinforcement learning (DRL), exemplified by the deep deterministic policy gradient (DDPG) algorithm, has shown great potential for adaptive and intelligent decision-making in active suspension control. However, DDPG suffers from low training efficiency because a high proportion of the explored strategies are infeasible. This paper proposes a novel DDPG controller for a nonlinear, uncertain active suspension system by combining DRL with expert demonstrations. Specifically, an improved training method that integrates a pre-training mechanism based on PID expert samples with an adaptive experience replay mechanism is put forward, enabling the DDPG agent both to imitate the expert and to train more efficiently. Moreover, taking ride comfort and state constraints as objectives, a mixed reward function is designed to guide the RL agent toward learning effective actions. It is shown that the proposed training method effectively accelerates the convergence of the DDPG. Furthermore, comparison experiments demonstrate that the proposed controller provides strong vibration attenuation and adapts better to various working conditions and parametric uncertainty.
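The pre-training mechanism described in the abstract can be illustrated with a minimal behaviour-cloning sketch: an actor network is first fitted, in a supervised fashion, to state–action pairs produced by a PID expert, before any actor–critic updates take place. The PID gains, the three-dimensional state (error, its integral, and its derivative), and the network sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pid_expert(error, integral, derivative, kp=1.5, ki=0.1, kd=0.5):
    """Illustrative PID expert: maps tracking-error terms to a control force.
    The gains are hypothetical placeholders, not the paper's tuned values."""
    return kp * error + ki * integral + kd * derivative

# Collect expert demonstrations: states are (error, integral, derivative).
states = rng.normal(size=(500, 3))
actions = np.array([pid_expert(*s) for s in states]).reshape(-1, 1)

# A small two-layer actor (tanh hidden layer, linear output), standing in
# for the DDPG actor network during the pre-training phase.
W1 = rng.normal(scale=0.1, size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1)); b2 = np.zeros(1)

def actor(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

pred, _ = actor(states)
loss_before = float(np.mean((pred - actions) ** 2))

# Behaviour cloning: full-batch gradient descent on the mean-squared
# imitation error between the actor's output and the PID expert's action.
lr = 0.05
for _ in range(2000):
    pred, h = actor(states)
    g = 2.0 * (pred - actions) / len(states)   # dL/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)           # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (states.T @ gh); b1 -= lr * gh.sum(axis=0)

pred, _ = actor(states)
loss_after = float(np.mean((pred - actions) ** 2))
print(f"imitation loss: {loss_before:.4f} -> {loss_after:.4f}")
```

After this supervised warm start, the actor already produces expert-like actions, so subsequent DDPG exploration begins from feasible behaviour rather than random policies, which is the mechanism the paper credits for the improved training efficiency.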
Published in: Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering