Abstract

In this paper, a model-free reinforcement learning (RL) based distributed control protocol for leader-follower multi-agent systems is presented. Although RL has been successfully used to learn optimal control protocols for multi-agent systems, the effects of adversarial inputs are neglected in existing results. The susceptibility of the standard synchronization control protocol to adversarial inputs is first shown. Then, an RL-based distributed control framework is developed for multi-agent systems to prevent the corrupted data of a compromised agent from propagating across the network. To this end, only the leader communicates its actual sensory information; every other agent estimates the leader state using a distributed observer and communicates this estimate to its neighbors to reach consensus on the leader state. Because the observer dynamics cannot be physically affected by any adversarial input, all intact agents are guaranteed to synchronize to the leader trajectory despite the presence of a compromised agent. A distributed control protocol further enhances resiliency by attenuating the effect of adversarial inputs on the compromised agent itself. Finally, an off-policy RL algorithm is developed to solve the output synchronization control problem online, using only measured data along the system trajectories.
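
To make the estimation mechanism above concrete, the following is a minimal numerical sketch of one common form of distributed leader-state observer; the leader dynamics S, graph topology, pinning gains, and coupling gain c below are illustrative assumptions rather than the paper's specific design. Only the agent pinned to the leader uses the leader's transmitted state; every other agent updates its estimate from its neighbors' estimates alone, so corrupted sensory data from a compromised follower never enters the estimation layer.

import numpy as np

# Minimal sketch (illustrative assumptions): each follower i integrates
#   xhat_i' = S xhat_i + c * ( sum_j a_ij (xhat_j - xhat_i) + g_i (x0 - xhat_i) )
# Only pinned agents (g_i > 0) use the leader's true state x0; all others rely
# solely on neighboring estimates xhat_j.

S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # assumed leader dynamics x0' = S x0 (harmonic oscillator)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)    # assumed follower communication graph (a path)
g = np.array([1.0, 0.0, 0.0, 0.0])     # pinning gains: only follower 0 receives the leader state
c = 10.0                               # coupling gain, chosen large enough for convergence

dt, steps = 0.001, 10_000
x0 = np.array([1.0, 0.0])              # leader state
xhat = np.random.randn(4, 2)           # followers' initial estimates of the leader state

for _ in range(steps):
    x0 = x0 + dt * (S @ x0)            # leader evolves autonomously
    new = np.empty_like(xhat)
    for i in range(4):
        consensus = sum(A[i, j] * (xhat[j] - xhat[i]) for j in range(4))
        pinning = g[i] * (x0 - xhat[i])
        new[i] = xhat[i] + dt * (S @ xhat[i] + c * (consensus + pinning))
    xhat = new

print("leader state     :", np.round(x0, 3))
print("estimation errors:", np.round(np.linalg.norm(xhat - x0, axis=1), 5))

In this sketch the printed estimation errors decay toward zero, illustrating how agents can agree on the leader trajectory while exchanging only observer estimates rather than their own (possibly corrupted) sensory data.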
