Abstract

<p style='text-indent:20px;'>In this paper, an event-triggered reinforcement learning-based method is developed for model-based optimal synchronization control of multiple Euler-Lagrange systems (MELSs) under a directed graph. An event-triggered optimal control strategy is derived from the Hamilton-Jacobi-Bellman (HJB) equation, and the corresponding triggering condition is proposed. An event-triggered policy iteration (PI) algorithm, borrowed from reinforcement learning, is then used to find the optimal solution. A single neural network represents the value function and approximates the solution of the event-triggered HJB equation; its weights are updated aperiodically. It is proved that both the synchronization error and the weight estimation error are uniformly ultimately bounded (UUB), and Zeno behavior is excluded. Finally, an example with multiple 2-DOF prototype manipulators validates the effectiveness of the method.</p>
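As an illustration of the general pattern the abstract describes (not the paper's actual algorithm), an event-triggered controller holds its control input constant between triggering instants and recomputes it only when a state-dependent condition fires. The sketch below uses a simple scalar system, a hypothetical linear feedback law, and an assumed relative-error triggering rule purely for demonstration:

```python
def simulate_event_triggered(x0, steps=200, dt=0.01, sigma=0.5):
    """Illustrative event-triggered control of the scalar system
    x' = x + u with feedback u = -2*x, held constant between events.
    Assumed trigger rule: update when |x - x_k| > sigma * |x|,
    where x_k is the state sampled at the last trigger instant."""
    x = x0
    x_k = x0          # state at the most recent trigger event
    u = -2.0 * x_k    # control computed at that event
    events = 0
    for _ in range(steps):
        # triggering condition: measurement error vs. current state norm
        if abs(x - x_k) > sigma * abs(x):
            x_k = x
            u = -2.0 * x_k   # recompute control only at events
            events += 1
        x = x + dt * (x + u)  # forward-Euler step of x' = x + u
    return x, events

x_final, n_events = simulate_event_triggered(1.0)
```

Because the control is only refreshed at triggering instants, the number of events is far smaller than the number of simulation steps, which is the communication/computation saving that motivates event-triggered designs; excluding Zeno behavior amounts to guaranteeing a positive lower bound on the time between consecutive events.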
