This article investigates a methodology for enhancing the adaptability of reinforcement learning (RL) techniques to environmental changes while remaining data efficient, by which a joint control protocol for multiagent systems (MASs) is learned using only data. As a result, all followers are able to synchronize with the leader while minimizing their individual performance indices. To this end, an optimal synchronization problem for heterogeneous MASs is first formulated, and an arbitration RL mechanism is then developed to address two key challenges faced by current RL techniques: insufficient data and environmental changes. In the developed mechanism, an improved Q-function with an arbitration factor is designed to reflect the fact that control protocols tend to be shaped by both historical experience and instinctive decision-making, so that the degree of control over the agents' behaviors can be adaptively allocated between on-policy and off-policy RL techniques for the optimal multiagent synchronization problem. Finally, an arbitration RL algorithm with critic-only neural networks is proposed, and theoretical analysis and proofs of synchronization and performance optimality are provided. Simulation results verify the effectiveness of the proposed method.
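The core idea of the arbitration factor can be illustrated with a minimal, hypothetical sketch: a single critic update whose bootstrapped target is a weighted blend of an on-policy (SARSA-style) value and an off-policy (greedy, Q-learning-style) value. The function name, the tabular representation, and the specific blending rule below are illustrative assumptions for a generic discrete setting, not the paper's actual continuous-time, critic-only neural-network formulation for heterogeneous MASs.

```python
import numpy as np

def arbitrated_td_update(Q, s, a, r, s_next, a_next, alpha_arb, lr=0.1, gamma=0.99):
    """One critic update whose target is an arbitration-weighted mix of
    on-policy and off-policy bootstrapped values (illustrative only)."""
    on_policy_target = r + gamma * Q[s_next, a_next]   # bootstraps on the behavior action
    off_policy_target = r + gamma * np.max(Q[s_next])  # bootstraps on the greedy action
    # alpha_arb plays the role of an arbitration factor, allocating the degree
    # of control between the on-policy and off-policy learning signals.
    target = alpha_arb * on_policy_target + (1.0 - alpha_arb) * off_policy_target
    Q[s, a] += lr * (target - Q[s, a])
    return Q

# Example: a single update on a toy 4-state, 2-action value table.
Q = np.zeros((4, 2))
Q = arbitrated_td_update(Q, s=0, a=1, r=1.0, s_next=2, a_next=0, alpha_arb=0.5)
```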