Abstract

In this paper, we present a Q-learning framework for solving finite-horizon zero-sum game problems arising in the H∞ control of linear systems with unknown dynamics. Prior research has mainly focused on infinite-horizon problems with fully measurable state. In practical engineering, however, the system state is not always directly accessible, and the time-varying Riccati equation associated with the finite-horizon setting is difficult to solve directly. The main contribution of the proposed model-free algorithm is to determine the optimal output-feedback policies in the finite-horizon setting without measuring the state. To achieve this goal, we first describe the Q-function arising in the finite-horizon problem in the context of state feedback, and then parameterize the Q-function in terms of input–output data vectors. Finally, numerical examples on aircraft dynamics demonstrate the algorithm's efficiency.
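As background for the abstract's finite-horizon zero-sum setting, the model-based baseline that the model-free algorithm aims to recover is the backward time-varying Riccati recursion of the linear-quadratic zero-sum game. The sketch below is illustrative only: the system matrices, horizon, and attenuation level γ are assumed example values, not the paper's aircraft model, and the paper's actual contribution is to avoid needing A, B, E at all.

```python
import numpy as np

def zero_sum_riccati(A, B, E, Q, R, gamma, QN, N):
    """Backward recursion for the finite-horizon zero-sum LQ game.

    Dynamics: x_{k+1} = A x_k + B u_k + E w_k  (u: control, w: disturbance)
    Cost:     sum_k x'Qx + u'Ru - gamma^2 w'w, terminal weight QN.
    Returns the initial-time value matrix P_0 and the stacked
    control/disturbance gains [K_u; K_w] at each step.
    """
    m, d = B.shape[1], E.shape[1]
    BE = np.hstack([B, E])
    # Block input weight: R for the minimizer, -gamma^2 I for the maximizer.
    W = np.block([
        [R, np.zeros((m, d))],
        [np.zeros((d, m)), -gamma**2 * np.eye(d)],
    ])
    P = QN.copy()
    gains = []
    for _ in range(N):
        M = W + BE.T @ P @ BE
        K = np.linalg.solve(M, BE.T @ P @ A)   # stacked [K_u; K_w]
        P = Q + A.T @ P @ A - A.T @ P @ BE @ K
        P = 0.5 * (P + P.T)  # symmetrize for numerical hygiene
        gains.append(K)
    return P, gains

# Illustrative (assumed) second-order system, not from the paper.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
E = np.array([[0.05], [0.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P0, gains = zero_sum_riccati(A, B, E, Q, R, gamma=5.0, QN=np.eye(2), N=30)
```

Solving this recursion requires the dynamics (A, B, E); the abstract's Q-learning scheme instead estimates the equivalent time-varying Q-functions from measured input–output data.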
