Abstract

In this paper, an event-triggered reinforcement learning-based method is developed for model-based optimal synchronization control of multiple Euler-Lagrange systems (MELSs) under a directed graph. The event-triggered optimal control strategy is derived by establishing the Hamilton-Jacobi-Bellman (HJB) equation, and a triggering condition is then proposed. An event-triggered policy iteration (PI) algorithm, borrowed from reinforcement learning, is employed to find the optimal solution. A single neural network is used to represent the value function and approximate the solution of the event-triggered HJB equation, with its weights updated aperiodically. It is proved that both the synchronization error and the weight estimation error are uniformly ultimately bounded (UUB), and that Zeno behavior is excluded. Finally, an example with multiple 2-DOF prototype manipulators validates the effectiveness of the method.
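The core event-triggered idea summarized above (hold the controller between events, and resample the state only when a triggering condition fires) can be illustrated with a toy sketch. This is not the paper's HJB-derived condition or its neural-network critic: the scalar plant, the gain `k`, and the relative threshold `sigma` are all illustrative assumptions chosen to show aperiodic updates and a nonzero minimum inter-event time.

```python
def simulate_event_triggered(steps=400, dt=0.01, sigma=0.05, k=2.0):
    """Toy event-triggered regulation of the scalar plant x' = x + u.

    The control u = -k * x_hat is held between events and uses the last
    sampled state x_hat; a new sample (an "event") is taken only when
    the gap |x - x_hat| exceeds sigma*|x| + 1e-4 -- a simplified
    relative threshold, not the paper's HJB-derived condition.
    """
    x = x_hat = 1.0
    events = 0
    last_event_step = 0
    gaps = []                              # inter-event intervals (in steps)
    for t in range(1, steps + 1):
        u = -k * x_hat                     # controller frozen between events
        x = x + dt * (x + u)               # forward-Euler step of x' = x + u
        if abs(x - x_hat) > sigma * abs(x) + 1e-4:  # triggering condition
            x_hat = x                      # event: resample the state
            gaps.append(t - last_event_step)
            last_event_step = t
            events += 1
    return x, events, steps, gaps

x_final, events, steps, gaps = simulate_event_triggered()
```

In this sketch the state still converges toward the origin, yet the controller is updated at far fewer instants than there are simulation steps, and consecutive events are always separated by at least one step, mirroring (in a much simpler setting) the aperiodic updates and Zeno exclusion claimed in the abstract.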
