Abstract

This paper presents an event-triggered adaptive dynamic programming (ETADP) algorithm for the optimal decentralized control problem of interconnected nonlinear systems subject to stochastic dynamics. By developing a performance index function for each augmented auxiliary subsystem, the original control problem is converted into deriving an array of optimal control policies that are sampled in an aperiodic pattern. An ETADP algorithm is then introduced under an identifier-actor-critic network framework, in which the identifier estimates the stochastic dynamics, the critic assesses the system performance, and the actor implements the control action. A notable feature is that the actor-critic updating laws are constructed by applying the negative gradient method to a positive function designed from the partial derivative of the Hamilton-Jacobi-Bellman (HJB) equation. Under the proposed weight tuning rule, the traditional ADP algorithm is significantly simplified. The stability of the closed-loop system is verified through the direct Lyapunov method, and a numerical simulation confirms the effectiveness of the optimal control scheme.
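
As a rough sketch of the gradient-based tuning described above (the notation here is assumed for illustration and is not taken from the paper): if the critic of subsystem i approximates the value function as $\hat{V}_i(x_i) = \hat{W}_{ci}^{\top}\phi_i(x_i)$ and $e_i$ denotes the residual obtained from the partial derivative of the HJB equation, then a positive function $E_i = \tfrac{1}{2} e_i^{2}$ can be minimized by negative-gradient updates of the form

$$\dot{\hat{W}}_{ci} = -\alpha_{ci}\,\frac{\partial E_i}{\partial \hat{W}_{ci}}, \qquad \dot{\hat{W}}_{ai} = -\alpha_{ai}\,\frac{\partial E_i}{\partial \hat{W}_{ai}},$$

where $\alpha_{ci}, \alpha_{ai} > 0$ are learning rates for the critic and actor networks, respectively.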
