In this article, an adaptive optimized consensus tracking control problem is studied for nonlinear strict-feedback multi-agent systems (MASs) in the presence of both unmeasurable system states and time-varying bias faults. By utilizing the backstepping technique, we develop an adaptive reinforcement learning (RL) algorithm within an observer-critic-actor architecture, designed to compensate for the missing state information and to derive the control inputs, thereby achieving approximately optimal control. Moreover, an event-triggered mechanism is introduced in the sensor-to-controller channel, which adjusts the triggering threshold dynamically online and employs event-sampled states to initiate control actions. To address the discontinuities caused by state triggering, we construct virtual controllers that continuously sample the state signals and reconfigure the actual controller based on the previously triggered states. The outputs of the MASs are shown to track the desired reference signals accurately while all closed-loop signals remain bounded. In addition, the proposed controller is proved to be free of Zeno behavior. Finally, the effectiveness of the proposed control methodology is demonstrated through a numerical simulation.
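To make the event-triggered sampling idea concrete, the following is a minimal sketch of a state-error triggering rule with an online-adjusted threshold. It is not the paper's actual design: the scalar dynamics `f`, the feedback gain, and the threshold update law are all hypothetical placeholders chosen for illustration only.

```python
# Minimal sketch of event-triggered control with an online-adjusted threshold.
# Assumptions (not from the paper): a single scalar agent, hypothetical
# dynamics f, a simple proportional controller, and an illustrative
# threshold-adaptation rule.
import numpy as np

def f(x, u):
    # Hypothetical scalar dynamics, for demonstration only.
    return -x + 0.5 * np.sin(x) + u

dt, T = 1e-3, 5.0
x, x_hat = 1.0, 1.0      # true state and last event-sampled (triggered) state
delta = 0.2              # initial triggering threshold
num_events = 1           # initial sample counts as the first event

for k in range(int(T / dt)):
    u = -2.0 * x_hat     # controller acts only on the last triggered state
    x += dt * f(x, u)    # integrate the plant (forward Euler)
    # Event condition: re-sample when the sampling error exceeds the threshold.
    if abs(x - x_hat) >= delta:
        x_hat = x
        num_events += 1
        # Illustrative online threshold adaptation: tighten near the origin,
        # relax away from it, keeping delta within fixed bounds.
        delta = float(np.clip(0.1 + 0.1 * abs(x), 0.05, 0.5))

print(f"events: {num_events} vs. {int(T / dt)} periodic samples over {T} s")
```

Under these assumptions, the controller input is updated only at event instants, so communication over the sensor-to-controller channel is reduced relative to periodic sampling; the lower bound on the threshold plays the role of ruling out Zeno behavior in this toy setting.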