Abstract

This paper investigates the resource allocation problem for networked Euler-Lagrange agents (NELAs) subject to quantized-data interactions and input saturation, within the framework of reinforcement learning (RL). We propose a hierarchical control strategy consisting of a distributed resource allocation estimator (DRAE) and a local RL linear sliding mode controller (RL-LSMC). Specifically, the DRAE, built on gradient descent and state feedback, achieves optimal resource allocation through the estimated states. The local RL-LSMC is designed by exploiting the feedback of a critic neural network and the approximation capability of an actor neural network, driving the states of the NELAs to track the optimal estimated states. Several sufficient conditions are established via Lyapunov stability arguments. Finally, the effectiveness of the proposed hierarchical control algorithm is verified by two simulation examples.
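The DRAE layer described above combines gradient descent with information exchanged over the network to steer the agents' estimated states toward the optimal allocation. As a hypothetical illustration (not the paper's DRAE, which additionally handles quantized data and Euler-Lagrange dynamics), the following sketch shows a standard center-free gradient-consensus update for the underlying allocation problem min Σᵢ fᵢ(xᵢ) subject to Σᵢ xᵢ = D, using assumed quadratic costs on a ring of four agents:

```python
import numpy as np

# Hypothetical illustration: center-free gradient-consensus allocation.
# Assumed quadratic costs f_i(x) = 0.5*a_i*x^2 + b_i*x; all parameter
# values below are invented for the sketch.
a = np.array([1.0, 2.0, 1.5, 3.0])   # cost curvatures (assumed)
b = np.array([0.5, -1.0, 0.2, 0.0])  # cost slopes (assumed)
D = 10.0                              # total resource to allocate

# Ring topology: each agent exchanges gradient values with two neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

grad = lambda x: a * x + b
x = np.full(4, D / 4)                 # feasible start: sum(x) == D
alpha = 0.05                          # step size, small enough for stability

for _ in range(2000):
    g = grad(x)
    # Each agent moves toward equalizing marginal costs with its neighbors;
    # every edge's contribution cancels pairwise, so sum(x) stays equal to D.
    x = x + alpha * np.array(
        [sum(g[j] - g[i] for j in neighbors[i]) for i in range(4)]
    )

# At the optimum all marginal costs equal one multiplier lam
# (KKT condition for the equality-constrained problem).
lam = (D + np.sum(b / a)) / np.sum(1.0 / a)
print(np.allclose(grad(x), lam, atol=1e-4))
```

The update is sum-preserving by construction, so the coupling constraint holds at every iteration while the gradients are driven to consensus, which is the optimality condition for this problem class.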
