This article studies the trajectory-imitation control problem for linear systems subject to external disturbances and develops a data-driven inverse reinforcement learning (RL) approach based on static output feedback (OPFB) control. An Expert-Learner structure is considered, in which the learner aims to imitate the expert's trajectory. Using only measured input and output data from the expert and from itself, the learner reconstructs the expert's unknown value-function weights, thereby computing the expert's policy and imitating its optimally operating trajectory. Three static OPFB inverse RL algorithms are proposed: the first is a model-based scheme that serves as a basis; the second is a data-driven method using input-state data; the third is a data-driven method using only input-output data. The stability, convergence, optimality, and robustness of the algorithms are analyzed. Finally, simulation experiments verify the proposed algorithms.
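Since the abstract only summarizes the algorithms, the following is a minimal numerical sketch of the underlying inverse-RL idea in its simplest setting: a disturbance-free, state-feedback linear-quadratic special case, not the paper's OPFB algorithms. All quantities here (the matrices A, B, the hidden weight Q_true, the known input weight R, and the least-squares identification step) are illustrative assumptions. The sketch shows the core mechanism the abstract describes: from measured expert data, the learner reconstructs value-function weights that rationalize the expert's behavior, and the policy recomputed from those weights imitates the expert's.

```python
# Minimal sketch (assumptions, not the paper's method): a learner reconstructs
# value-function weights that explain an expert's LQR policy, then recovers
# that policy from the reconstructed weights.
import numpy as np

rng = np.random.default_rng(0)

# Expert side: a discrete-time linear system with a cost hidden from the learner.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q_true = np.diag([2.0, 1.0])   # hidden state-cost weights the learner must explain
R = np.eye(1)                  # input weight, assumed known to the learner

def lqr_gain(A, B, Q, R, iters=500):
    """Optimal gain via fixed-point iteration on the discrete Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K_exp = lqr_gain(A, B, Q_true, R)   # expert's optimal policy u = -K_exp @ x

# Learner step 1: identify the expert's gain from measured trajectory data.
X = rng.standard_normal((2, 200))                       # measured expert states
U = -K_exp @ X + 1e-3 * rng.standard_normal((1, 200))   # measured inputs (noisy)
K_hat = np.linalg.lstsq(X.T, -U.T, rcond=None)[0].T

# Learner step 2: reconstruct value-function weights P (then Q) that
# rationalize K_hat. Optimality of K_hat requires
#     B' P (A - B K_hat) = R K_hat,
# which is linear in the symmetric matrix P, so it can be solved by least
# squares over a symmetric basis. (A full treatment would also enforce
# positive semidefiniteness; inverse RL generally admits many equivalent
# weights, and any solution here reproduces the same policy.)
n = A.shape[0]
basis = []
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n)); E[i, j] = E[j, i] = 1.0
        basis.append(E)
M = A - B @ K_hat
Phi = np.column_stack([(B.T @ E @ M).ravel() for E in basis])
theta = np.linalg.lstsq(Phi, (R @ K_hat).ravel(), rcond=None)[0]
P_hat = sum(t * E for t, E in zip(theta, basis))
Q_hat = P_hat - A.T @ P_hat @ M    # Riccati residual gives an equivalent state weight

# Learner step 3: the policy recomputed from the reconstructed weights
# imitates the expert's, up to identification error.
K_learner = np.linalg.solve(R + B.T @ P_hat @ B, B.T @ P_hat @ A)
print("expert gain :", K_exp)
print("learner gain:", K_learner)
```

The recovered Q_hat need not equal Q_true: as in the inverse-RL literature, many cost weights explain the same behavior, and what matters for imitation is that the learner's gain matches the expert's. The paper's actual algorithms extend this idea to static OPFB, external disturbances, and fully data-driven (input-output) implementations.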