Abstract

Although optimal control with full state feedback has been well studied, solving the output-feedback optimal control problem online is difficult, particularly when learning the online Nash equilibrium solution of continuous-time (CT) two-player zero-sum differential games. To this end, we propose an adaptive learning algorithm to address this challenging problem. A modified game algebraic Riccati equation (MGARE) is derived by tailoring its state-feedback control counterpart. An adaptive online learning method is proposed to approximate the solution to the MGARE from online data, where two operations (i.e., vectorization and the Kronecker product) are adopted to reconstruct the MGARE. Only system output information is needed to implement the developed learning algorithm. Simulation results are presented to exemplify the proposed control and learning method.
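For context, reconstruction via vectorization and the Kronecker product typically rests on standard identities; the following is a minimal sketch in generic notation (an unknown symmetric matrix P and measured data x), not the paper's exact MGARE:

\[
\operatorname{vec}(AXB) = \bigl(B^{\top} \otimes A\bigr)\operatorname{vec}(X),
\qquad
x^{\top} P x = \bigl(x^{\top} \otimes x^{\top}\bigr)\operatorname{vec}(P).
\]

Identities of this form turn a matrix equation in P into a linear system in vec(P), so equations built from collected online data can be solved for the unknown, for example by least squares.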
