Abstract

This article investigates a data-efficient methodology for enhancing the adaptability of reinforcement learning (RL) techniques to environmental changes, by which a joint control protocol for multiagent systems (MASs) is learned using only data. As a result, all followers are able to synchronize with the leader while minimizing their individual performance indices. To this end, an optimal synchronization problem for heterogeneous MASs is first formulated, and an arbitration RL mechanism is then developed to address two key challenges faced by current RL techniques, namely insufficient data and environmental changes. In the developed mechanism, an improved Q-function with an arbitration factor is designed to reflect the fact that control protocols tend to be shaped by both historical experience and instinctive decision-making, so that the degree of control over the agents' behaviors can be adaptively allocated between on-policy and off-policy RL techniques for the optimal multiagent synchronization problem. Finally, an arbitration RL algorithm with critic-only neural networks is proposed, together with theoretical analysis and proofs of synchronization and performance optimality. Simulation results verify the effectiveness of the proposed method.
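The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the general arbitration idea it describes: blending an on-policy and an off-policy bootstrap target through an arbitration factor in a critic-only update. The names (arbitration_td_target, critic_only_update, alpha) and the tabular setting are assumptions made here for illustration; the paper itself treats heterogeneous MASs with neural-network critics.

```python
# Illustrative sketch only, not the authors' algorithm: a critic-only
# temporal-difference update whose target is an arbitrated mix of an
# off-policy (greedy) estimate and an on-policy (behavior-action) estimate.
import numpy as np

def arbitration_td_target(q_table, s_next, a_next, reward, gamma, alpha):
    """Blend off-policy and on-policy bootstrap targets.

    alpha = 1.0 recovers a purely off-policy (Q-learning-style) target,
    alpha = 0.0 a purely on-policy (SARSA-style) one.
    """
    off_policy = reward + gamma * np.max(q_table[s_next])       # greedy bootstrap
    on_policy = reward + gamma * q_table[s_next, a_next]        # behavior-action bootstrap
    return alpha * off_policy + (1.0 - alpha) * on_policy

def critic_only_update(q_table, s, a, r, s_next, a_next,
                       gamma=0.95, lr=0.1, alpha=0.5):
    """One critic update toward the arbitrated target; no separate actor is kept."""
    target = arbitration_td_target(q_table, s_next, a_next, r, gamma, alpha)
    q_table[s, a] += lr * (target - q_table[s, a])
    return q_table

# Toy usage: a 4-state, 2-action critic updated from a single transition.
if __name__ == "__main__":
    Q = np.zeros((4, 2))
    Q = critic_only_update(Q, s=0, a=1, r=1.0, s_next=2, a_next=0)
    print(Q)
```

In this sketch the arbitration factor alpha plays the role the abstract assigns to its arbitration mechanism: it adaptively weights how much the learned control relies on off-policy versus on-policy information.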
