Abstract

This paper presents an approximate optimal event-triggered control scheme for an N-player multi-input multi-output nonlinear system. The distributed minimizing control policy of each player is obtained cooperatively by introducing a novel performance index such that the Nash equilibrium is attained. The aperiodic control execution instants are optimized for each player by limiting the control policy error, i.e., the error between the continuous and sampled policies, to a worst-case threshold computed by solving the corresponding Hamilton-Jacobi (HJ) equation. The HJ equation is solved approximately using approximate dynamic programming (ADP). A critic neural network is employed at each player to approximate the solution, i.e., the optimal value function, with aperiodically available feedback information. An impulsive weight-update scheme with event-based Bellman error is proposed to guarantee convergence to a near-optimal solution and closed-loop stability of the event-triggered system. Finally, an analysis of the Zeno-free behavior of the system is included along with numerical simulation results.
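The core triggering idea in the abstract, holding the sampled control between events and re-sampling only when the policy error exceeds a threshold, can be sketched as follows. This is a minimal illustration under assumed simplifications: a scalar linear plant, a fixed linear feedback policy, and a constant threshold, whereas in the paper the policy is the approximate optimal one and the threshold is the state-dependent worst-case bound derived from the HJ equation.

```python
def policy(x, K=1.5):
    # Stand-in for the (approximate) optimal feedback policy; here simply u = -K x.
    return -K * x

def simulate(x0=2.0, dt=0.001, T=5.0, threshold=0.05):
    # Event-triggered simulation: the control input u_held is frozen between
    # events and updated only when the gap between the continuous policy u(x(t))
    # and the last sampled policy exceeds the (assumed constant) threshold.
    x, u_held = x0, policy(x0)
    events, t = 1, 0.0
    while t < T:
        # Illustrative open-loop-unstable scalar plant: dx/dt = x + u.
        x += dt * (x + u_held)
        # Event condition: control policy error exceeds the threshold.
        if abs(policy(x) - u_held) > threshold:
            u_held = policy(x)
            events += 1
        t += dt
    return x, events

x_final, n_events = simulate()
print(x_final, n_events)
```

With these illustrative values the state is regulated to a small neighborhood of the origin using far fewer control updates than the 5000 steps a periodic implementation with the same step size would require, which is the practical benefit of aperiodic execution the abstract describes.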
