Abstract
A data-driven learning solution for optimal robust control is proposed for an active suspension system. The problem is formulated as a zero-sum two-player differential game (ZS-TP-DG), in which the optimal control law and the worst-case disturbance law must be found simultaneously. The distinctive features of the proposed solution are: a Q-learning-like, data-driven, model-free algorithm (requiring no knowledge of the process dynamics) that relies on input-state data collected from the process; neural networks used as generic function approximators; and validation on an active suspension system that readily admits artificial road-profile disturbance generation. The superiority of the ZS-TP-DG controller over an optimal controller learned in a disturbance-free setting is demonstrated.
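The saddle-point structure of the ZS-TP-DG can be illustrated with a minimal model-based sketch. Note that the paper's algorithm is model-free; the sketch below instead assumes a known, purely hypothetical linear model and solves the corresponding linear-quadratic zero-sum game by iterating a game Riccati recursion, yielding both the optimal control gain and the worst-case disturbance gain. All matrices and the attenuation level `gamma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 2-state linear process (illustration only; the paper's method is model-free).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])      # control input channel
D = np.array([[0.05],
              [0.0]])      # disturbance input channel
Q = np.eye(2)              # state weighting
R = np.array([[1.0]])      # control weighting
gamma = 5.0                # assumed disturbance attenuation level

# Value iteration on the game Riccati recursion for the cost
#   sum_k  x'Qx + u'Ru - gamma^2 w'w,
# whose saddle point gives u = -K x (minimizer) and w = -L x (maximizer).
P = np.zeros((2, 2))
for _ in range(500):
    M = np.block([[R + B.T @ P @ B,        B.T @ P @ D],
                  [D.T @ P @ B,  D.T @ P @ D - gamma**2 * np.eye(1)]])
    N = np.vstack([B.T @ P @ A, D.T @ P @ A])
    P_next = Q + A.T @ P @ A - N.T @ np.linalg.solve(M, N)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# Recompute the stacked gains [K; L] at the converged P.
M = np.block([[R + B.T @ P @ B,        B.T @ P @ D],
              [D.T @ P @ B,  D.T @ P @ D - gamma**2 * np.eye(1)]])
N = np.vstack([B.T @ P @ A, D.T @ P @ A])
gains = np.linalg.solve(M, N)
K, L = gains[:1], gains[1:]
A_cl = A - B @ K - D @ L   # closed loop under both saddle-point policies
```

A Q-learning variant of this recursion, as in the paper, estimates the quadratic Q-function kernel directly from input-state data, so the matrices `A`, `B`, `D` never need to be identified.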