In this article, an optimized leader-follower consensus control is proposed for a class of second-order multiagent systems with unknown nonlinear dynamics. Unlike first-order multiagent consensus, the second-order case must achieve agreement not only on position but also on velocity, which makes the optimized control problem more challenging and interesting. Reinforcement learning (RL) is a natural tool for deriving such a control because it circumvents the difficulty of solving the Hamilton–Jacobi–Bellman (HJB) equation directly. Implementing RL requires training adaptive critic and actor networks that iteratively update each other. However, if the critic and actor adaptive laws were derived, as in most existing optimal control methods, from the negative gradient of the square of the approximated HJB equation, the resulting algorithm would be very intricate, because the HJB equation associated with a second-order nonlinear multiagent system is highly complex owing to strong state coupling and nonlinearity. In this work, the two RL adaptive laws are instead derived by applying gradient descent to a simple positive function constructed from a partial derivative of the HJB equation, so the proposed optimized control is significantly simpler. Moreover, it not only avoids the need for knowledge of the system dynamics but also relaxes the persistent excitation condition that most RL optimization methods demand for sufficiently training the adaptive parameters. Finally, the proposed control is validated by both theoretical analysis and computer simulation.
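As a hedged sketch of the update structure described above (all symbols here, the basis $\varphi$, weights $\hat{W}_c$, $\hat{W}_a$, learning rates $\gamma_c$, $\gamma_a$, and the positive function $P$, are illustrative assumptions rather than the paper's notation): with a critic approximation $\hat{V}(s) = \hat{W}_c^{\top}\varphi(s)$ and an actor built from $\hat{W}_a^{\top}\varphi(s)$, gradient-descent adaptive laws of the simplified kind take the form

\[
\dot{\hat{W}}_c = -\gamma_c \, \frac{\partial P(\hat{W}_c)}{\partial \hat{W}_c},
\qquad
\dot{\hat{W}}_a = -\gamma_a \, \frac{\partial P(\hat{W}_a)}{\partial \hat{W}_a},
\]

where $P \ge 0$ is the simple positive function obtained from a partial derivative of the HJB equation. This contrasts with the squared HJB residual $E = \tfrac{1}{2}\, H\big(s, \hat{u}, \nabla \hat{V}\big)^{2}$, whose negative gradient drives the critic and actor updates in many existing optimal schemes and which becomes unwieldy under the strong state coupling of second-order nonlinear multiagent dynamics.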