Abstract

This paper investigates a parameter-free H∞ differential game approach for nonlinear active vehicle suspensions. The study accounts for the geometric nonlinearity of the half-car active suspension and the cubic nonlinearity of the damping elements. The nonlinear H∞ control problem is reformulated as a zero-sum game between two players, leading to the establishment of the Hamilton–Jacobi–Isaacs (HJI) equation with a Nash equilibrium solution. To minimize reliance on model parameters during the solution process, an actor–critic framework employing neural networks is utilized to approximate the control policy and value function. An off-policy reinforcement learning method is implemented to iteratively solve the HJI equation. In this approach, the disturbance policy is derived directly from the value function, so only a limited amount of driving data is needed to approximate the HJI equation’s solution. The primary innovation of this method lies in its capacity to handle system nonlinearities without requiring model parameters, making it particularly advantageous for practical engineering applications. Numerical simulations confirm the method’s effectiveness and its range of applicability. The off-policy reinforcement learning approach ensures the safety of the design process. For low-frequency road disturbances, the designed H∞ control policy enhances both ride comfort and stability.
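To illustrate the game-theoretic structure behind the HJI equation, the following is a minimal sketch on a scalar linear-quadratic stand-in, where the HJI equation reduces to a scalar game Riccati equation and can be solved by alternating policy evaluation and policy improvement. This is a simplified, model-based illustration only: the paper's actual method is model-free, data-driven, and uses neural-network actor–critic approximators on a nonlinear half-car model. All system and weighting values below are illustrative assumptions, not taken from the paper.

```python
import math

# Toy zero-sum game: x' = a*x + b*u + d*w, with control u (minimizer)
# and disturbance w (maximizer), and cost integrand
#   q*x^2 + r*u^2 - gamma^2*w^2.
# For a quadratic value function V(x) = p*x^2, the HJI equation reduces
# to a scalar game Riccati equation in p.
# NOTE: all numerical values here are illustrative assumptions.
a, b, d = -1.0, 1.0, 1.0
q, r, gamma = 1.0, 1.0, 2.0

def policy_iteration(p=0.0, iters=50):
    """Alternate policy improvement and policy evaluation on the value
    coefficient p, mimicking the iterative solution of the HJI equation."""
    for _ in range(iters):
        ku = -(b / r) * p           # control policy u = ku*x (minimizing player)
        kw = (d / gamma**2) * p     # disturbance policy w = kw*x (maximizing player)
        # Policy evaluation: solve the scalar Lyapunov-type equation
        #   2*(a + b*ku + d*kw)*p + q + r*ku**2 - gamma**2*kw**2 = 0  for p.
        p = -(q + r * ku**2 - gamma**2 * kw**2) / (2.0 * (a + b * ku + d * kw))
    return p

p_star = policy_iteration()
# With these values the game Riccati equation is 3p^2 + 8p - 4 = 0,
# whose positive root is the Nash-equilibrium value coefficient.
p_exact = (-8.0 + math.sqrt(112.0)) / 6.0
print(abs(p_star - p_exact) < 1e-8)  # iteration converges to the HJI solution
```

In the paper's setting, the closed-form policy-evaluation step is replaced by a least-squares fit of critic network weights from measured trajectory data collected under a separate behavior policy, which is what makes the scheme off-policy and parameter-free.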
