Abstract

This paper addresses the resilient consensus problem in the presence of faulty nodes whose state updating is randomly unreliable. Unlike existing approaches, which either eliminate the extreme states of neighbor nodes during updates or evaluate neighbors' trustworthiness from historical information, this paper presents a novel multi-armed-bandit-based algorithm. The idea is to raise the selection probability of the so-called healthy running subsets over that of the unhealthy running subsets, according to evaluations of the reward and credibility functions. As a result, the normal nodes in a network can achieve consensus while the influence of the faulty nodes is mitigated. The algorithm can also be applied in a social network with antagonistic weights.
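To illustrate the general idea, the following is a minimal sketch, not the paper's algorithm: each normal node treats candidate neighbor subsets as bandit arms, scores each arm with a credibility value driven by a reward (here, hypothetically, the negative disagreement between the subset average and the node's own state), and greedily favors the arm with the highest credibility. The network, subsets, reward, and update rule are all illustrative assumptions.

```python
def subset_mean(states, subset):
    """Average of the states of the nodes in `subset`."""
    return sum(states[j] for j in subset) / len(subset)

def resilient_consensus(n_rounds=200):
    # Hypothetical 4-node network: nodes 0-2 are normal,
    # node 3 is faulty and broadcasts an erratic state.
    states = [0.0, 1.0, 2.0, 0.0]
    normal = [0, 1, 2]
    # Candidate neighbor subsets ("arms") per normal node; arm 0 is the
    # healthy subset (faulty node 3 excluded), arms 1-2 are unhealthy.
    subsets = {
        0: [(1, 2), (1, 3), (2, 3)],
        1: [(0, 2), (0, 3), (2, 3)],
        2: [(0, 1), (0, 3), (1, 3)],
    }
    cred = {i: [None, None, None] for i in normal}  # credibility per arm
    for t in range(n_rounds):
        states[3] = 10.0 if t % 2 == 0 else -10.0  # faulty, unreliable state
        new = {}
        for i in normal:
            if t < 3:
                k = t  # try each arm once to initialize its credibility
            else:
                k = max(range(3), key=lambda a: cred[i][a])  # greedy pick
            m = subset_mean(states, subsets[i][k])
            # Reward: small disagreement suggests a healthy subset.
            reward = -abs(m - states[i])
            if cred[i][k] is None:
                cred[i][k] = reward
            else:
                cred[i][k] += 0.1 * (reward - cred[i][k])  # EMA update
            # Consensus update over the chosen subset.
            new[i] = 0.5 * states[i] + 0.5 * m
        for i in normal:
            states[i] = new[i]
    return [states[i] for i in normal], cred
```

In this toy run, the unhealthy arms quickly accumulate low credibility because the faulty node's erratic state produces large disagreement, so the greedy selection settles on the healthy subset and the normal nodes converge to a common value.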
