Abstract

In this article, a self‐learning disturbance compensation control method is developed that enables unknown discrete‐time (DT) systems to achieve performance optimization in the presence of disturbances. Unlike traditional model‐based and data‐driven state feedback control methods, the developed off‐policy Q‐learning algorithm updates the state feedback controller parameters and the compensator parameters by actively interacting with the unknown environment, so that approximately optimal tracking can be realized using data alone. First, an optimal tracking problem for a linear DT system with disturbance is formulated. Then, the controller design is cast as a zero‐sum game problem, leading to an off‐policy disturbance compensation Q‐learning algorithm with only a critic structure, which uses data to update the disturbance compensation controller gains without knowledge of the system dynamics. Finally, the effectiveness of the proposed method is verified by simulations.
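
The abstract does not give the algorithmic details, but the formulation it names (a zero‐sum game between the control input and the disturbance, solved by a critic‐only, off‐policy Q‐learning update from measured data) follows a standard pattern. The sketch below illustrates that style of update on a hypothetical second‐order plant with regulation rather than tracking, for brevity. All numerical values, the plant matrices A, B, E, the weights Qx and R, and the attenuation level gamma are assumptions for illustration only, not taken from the paper; the true system matrices are used solely to generate data and never appear inside the learning loop.

```python
import numpy as np

# Hypothetical 2nd-order linear DT plant with a disturbance channel (illustrative
# values, not from the paper); the learning loop itself never uses A, B, E.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.1], [0.0]])
Qx, R, gamma = np.eye(2), np.eye(1), 1.5           # stage weights and attenuation level (assumed)

n, m, q = 2, 1, 1                                  # dims of state x, control u, disturbance w
nz = n + m + q                                     # dim of augmented vector z = [x; u; w]

# --- collect one batch of exploratory (behavior) data, reused at every iteration ---
rng = np.random.default_rng(0)
N = 200
X, U, W, Xn = [], [], [], []
x = np.array([1.0, -1.0])
for _ in range(N):
    u = 0.5 * rng.standard_normal(m)               # exploratory control input
    w = 0.2 * rng.standard_normal(q)               # exploratory disturbance input
    xn = A @ x + B @ u + E @ w
    X.append(x); U.append(u); W.append(w); Xn.append(xn)
    x = xn

def stage_cost(x, u, w):
    # zero-sum game stage cost: control minimizes, disturbance maximizes
    return x @ Qx @ x + u @ R @ u - gamma**2 * (w @ w)

# --- off-policy Q-learning iterations (critic-only least squares) ---
K = np.zeros((m, n))                               # control gain, u = -K x
L = np.zeros((q, n))                               # disturbance (compensator) gain, w = -L x
for it in range(30):
    Phi, rvec = [], []
    for x, u, w, xn in zip(X, U, W, Xn):
        z = np.concatenate([x, u, w])
        zn = np.concatenate([xn, -K @ xn, -L @ xn])    # target-policy actions at x_{k+1}
        Phi.append(np.kron(z, z) - np.kron(zn, zn))    # Bellman regressor for the Q-kernel
        rvec.append(stage_cost(x, u, w))
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(rvec), rcond=None)
    H = h.reshape(nz, nz); H = 0.5 * (H + H.T)         # symmetric Q-function kernel

    # partition H and update both players' gains from the stationarity conditions
    Hux, Huu, Huw = H[n:n+m, :n], H[n:n+m, n:n+m], H[n:n+m, n+m:]
    Hwx, Hwu, Hww = H[n+m:, :n], H[n+m:, n:n+m], H[n+m:, n+m:]
    K = np.linalg.solve(Huu - Huw @ np.linalg.solve(Hww, Hwu),
                        Hux - Huw @ np.linalg.solve(Hww, Hwx))
    L = np.linalg.solve(Hww - Hwu @ np.linalg.solve(Huu, Huw),
                        Hwx - Hwu @ np.linalg.solve(Huu, Hux))

print("learned control gain K:", K)
print("learned disturbance compensation gain L:", L)
```

The key feature this sketch shares with the abstract's description is that a single batch of exploratory data is reused across iterations (off‐policy), and only a critic quantity, the quadratic Q‐function kernel H, is estimated by least squares; both gains are then read off from H without any model identification step.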
