Abstract
This paper presents an event-triggered adaptive dynamic programming (ETADP) algorithm for the optimal decentralized control problem of interconnected nonlinear systems subject to stochastic dynamics. By developing a performance index function for each augmented auxiliary subsystem, the original control problem is converted into deriving an array of optimal control policies sampled in an aperiodic pattern. An ETADP algorithm is then introduced under an identifier-actor-critic network framework, in which the identifier estimates the stochastic dynamics, the critic assesses the system performance, and the actor implements the control action. A remarkable feature is that the actor-critic updating laws are constructed by applying the negative gradient method to a positive function designed from the partial derivative of the Hamilton-Jacobi-Bellman (HJB) equation. Under the provided weight-tuning rule, the traditional ADP algorithm is significantly simplified. The stability of the closed-loop system is verified through direct Lyapunov theory, and a numerical simulation is given to confirm the effectiveness of the optimal control scheme.
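To make the weight-tuning idea concrete, the following is a minimal sketch rather than the authors' algorithm: it assumes a hypothetical quadratic basis, toy single-input subsystem dynamics, and a static triggering threshold, and it updates critic and actor weights by descending the gradient of a positive function built from the squared HJB residual, with the control resampled only at aperiodic triggering instants.

import numpy as np

def phi(x):
    # Hypothetical quadratic basis for a two-state subsystem (an assumption,
    # not the basis used in the paper).
    x1, x2 = x
    return np.array([x1 ** 2, x1 * x2, x2 ** 2])

def dphi_dx(x):
    # Jacobian of the basis with respect to the state.
    x1, x2 = x
    return np.array([[2 * x1, 0.0],
                     [x2, x1],
                     [0.0, 2 * x2]])

def hjb_residual(x, u, Wc, f, g, Q, R):
    # Approximate HJB residual e = (dV/dx)^T (f + g u) + x^T Q x + u^T R u,
    # with the critic value estimate V(x) ~ Wc^T phi(x).
    dV = dphi_dx(x).T @ Wc
    return float(dV @ (f(x) + g(x) @ u) + x @ Q @ x + u @ R @ u)

def actor_critic_step(x, Wc, Wa, f, g, Q, R, lr_c=0.05, lr_a=0.05):
    # One negative-gradient step on the positive function E = 0.5 * e^2.
    # Actor output in the standard ADP form u ~ -(1/2) R^{-1} g^T (dphi/dx)^T Wa.
    u = -0.5 * np.linalg.solve(R, g(x).T @ (dphi_dx(x).T @ Wa))
    e = hjb_residual(x, u, Wc, f, g, Q, R)
    # Analytic gradient of E with respect to the critic weights.
    grad_c = e * (dphi_dx(x) @ (f(x) + g(x) @ u))
    # Finite-difference gradient for the actor weights (illustration only).
    eps, grad_a = 1e-6, np.zeros_like(Wa)
    for i in range(len(Wa)):
        Wp = Wa.copy()
        Wp[i] += eps
        up = -0.5 * np.linalg.solve(R, g(x).T @ (dphi_dx(x).T @ Wp))
        grad_a[i] = (0.5 * hjb_residual(x, up, Wc, f, g, Q, R) ** 2 - 0.5 * e ** 2) / eps
    return Wc - lr_c * grad_c, Wa - lr_a * grad_a, u

def event_triggered(x, x_sampled, threshold=0.1):
    # Illustrative static triggering rule: update the control only when the
    # gap between the current and last-sampled state exceeds a threshold.
    return np.linalg.norm(x - x_sampled) > threshold

if __name__ == "__main__":
    # Toy subsystem dynamics and cost weights, chosen purely for the demo.
    f = lambda x: np.array([-x[0] + x[1], -0.5 * (x[0] + x[1])])
    g = lambda x: np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.eye(1)
    Wc, Wa = np.ones(3), np.ones(3)
    x = np.array([1.0, -0.5])
    x_s = x.copy()
    u = np.zeros(1)
    for _ in range(500):
        if event_triggered(x, x_s):
            x_s = x.copy()                       # aperiodic sampling instant
            Wc, Wa, u = actor_critic_step(x_s, Wc, Wa, f, g, Q, R)
        x = x + 0.01 * (f(x) + g(x) @ u)         # Euler step of the plant
    print("final state:", x, "critic weights:", Wc)

In the paper the positive function, the triggering condition, and the identifier for the stochastic dynamics are constructed analytically; the finite-difference actor gradient above is used only to keep the illustration short.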