Abstract

In this paper, an event-triggered Reinforcement Learning (RL) method is proposed for the optimal attitude consensus of multiple rigid body networks with unknown dynamics. Firstly, the consensus error is constructed from the attitude dynamics. According to the Bellman optimality principle, the implicit form of the optimal controller and the corresponding Hamilton-Jacobi-Bellman (HJB) equations are obtained. Owing to the augmented system formulation, the optimal controller can be obtained directly without relying on the system dynamics. Secondly, a self-triggered mechanism is applied to reduce the computation and communication burden of updating the controller. To address the fact that the HJB equations are difficult to solve analytically, an RL method is proposed that requires only measurement data at the event-triggered instants. For each agent, a single neural network is designed to approximate the optimal value function, and it is updated only at the event-triggered instants. Meanwhile, uniform ultimate boundedness (UUB) of the closed-loop system is established, and Zeno behavior is excluded. Finally, simulation results on a multiple rigid body network demonstrate the effectiveness of the proposed method.
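
As a point of reference only (the abstract does not give the paper's exact attitude-consensus formulation or augmented-system construction), a generic continuous-time optimal-control setup of the kind referred to above can be sketched as follows, where $e$ denotes a consensus error, $Q$ and $R$ are assumed weighting matrices, and $\hat{e} = e(t_k)$ is the state held at the latest triggering instant:

\begin{align}
V^*(e) &= \min_{u} \int_{t}^{\infty} \big( e^\top Q e + u^\top R u \big)\, d\tau, \\
0 &= \min_{u} \Big[ e^\top Q e + u^\top R u + \big(\nabla V^*(e)\big)^\top \big( f(e) + g(e)u \big) \Big] \quad \text{(HJB equation)}, \\
u^*(\hat{e}) &= -\tfrac{1}{2} R^{-1} g(\hat{e})^\top \nabla V^*(\hat{e}) \quad \text{(event-triggered implicit optimal controller)}.
\end{align}

In the event-triggered RL literature, the value function $V^*$ is typically approximated by a single critic neural network per agent, with weights tuned from measurement data collected at the triggering instants; the specific tuning law and triggering condition used in this paper are detailed in the full text.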
