Abstract
This article investigates the event-triggered optimized tracking control problem for stochastic nonlinear systems based on reinforcement learning (RL). Using the backstepping strategy, an adaptive RL algorithm is developed under the identifier-critic-actor architecture to achieve event-triggered optimized control (ETOC). Moreover, a novel dynamically adjustable event-triggered mechanism is designed, which adjusts the triggering threshold online to save communication resources and reduce the computational burden. To overcome the difficulty that the virtual control signals are discontinuous under state-triggering, the virtual controllers are designed with continuously sampled state signals, and the actual optimal controller is redesigned in the last step using the event-triggered states. Furthermore, the proposed ETOC offers significant savings in network resources because the event-triggered mechanism is employed in the sensor-to-controller channel and the event-sampled states directly activate the control actions. It is guaranteed that all signals of the stochastic system are bounded under the presented ETOC method. Finally, a simulation example illustrates the effectiveness of the proposed ETOC algorithm.
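The abstract does not state the explicit triggering law, so the following Python snippet is a purely illustrative sketch of a dynamically adjusted, state-dependent event-triggered mechanism of the general kind described: the controller only receives the state at triggering instants, and the threshold is adapted online. The function name `simulate_event_triggering`, the parameters `delta0`, `kappa`, `rho`, and the tanh-based adjustment rule are all assumptions for illustration, not the paper's design.

```python
import numpy as np

def simulate_event_triggering(x_traj, dt=0.01, delta0=0.05, kappa=0.5, rho=2.0):
    """Replay a state trajectory and record triggering instants under a
    dynamically adjusted threshold (hypothetical rule, not the paper's)."""
    x_hat = x_traj[0]       # last transmitted (event-sampled) state
    threshold = delta0      # triggering threshold, adapted online
    trigger_times = []
    for k, x in enumerate(x_traj):
        e = np.linalg.norm(x - x_hat)   # measurement error since last event
        if e >= threshold:              # triggering condition met
            x_hat = x.copy()            # transmit current state over the
            trigger_times.append(k * dt)  # sensor-to-controller channel
            # Assumed online adjustment: tighten the threshold near the
            # origin, relax it for large states to save communication.
            threshold = delta0 + kappa * np.tanh(rho * np.linalg.norm(x))
    return trigger_times

# Toy usage: a smooth 2-state trajectory triggers far fewer transmissions
# than the number of time samples, illustrating the resource savings.
t = np.arange(0.0, 5.0, 0.01)
x_traj = np.stack([np.sin(t), np.cos(2 * t)], axis=1)
print(len(simulate_event_triggering(x_traj)), "events for", len(t), "samples")
```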