Abstract

We propose a non-Bayesian social learning update rule for agents in a network, which minimizes the sum of the Kullback-Leibler divergence between the true distribution generating the agents' local observations and the agents' beliefs (parameterized by a hypothesis set), and a weighted varentropy-related term. The varentropy-related term allows us to control the convergence rate of our update rule, which also reuses some of the most recent observations of each agent to speed up convergence. Under mild technical conditions, we show that the belief of each agent concentrates on the optimal hypothesis set, and we derive a bound on the convergence rate. Furthermore, to overcome the performance degradation caused by misinforming agents, who use corrupted likelihood functions in their belief updates, we propose to run multiple social networks that update their beliefs independently and to apply a convex combination mechanism to the beliefs of all the networks. Simulations with applications to location identification and group recommendation demonstrate that our proposed methods offer improvements over two state-of-the-art non-Bayesian social learning algorithms.
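
To make the multi-network defense against misinforming agents concrete, the sketch below illustrates the generic convex-combination step the abstract refers to: each of K independently updated networks holds a belief vector (a probability distribution over the hypothesis set), and the final belief is a weighted average of these vectors. The abstract does not specify how the combination weights are chosen, so the function name `combine_network_beliefs` and the example weights are purely illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def combine_network_beliefs(beliefs, weights):
    """Convex combination of belief vectors held by independently updated networks.

    beliefs : (K, M) array -- K networks, each a probability vector over M hypotheses.
    weights : (K,) array   -- nonnegative combination weights summing to 1
                              (how they are chosen is not specified here; e.g., they
                              could down-weight networks suspected of corruption).

    Returns a probability vector over the M hypotheses (a convex combination of
    probability vectors is itself a probability vector).
    """
    beliefs = np.asarray(beliefs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    assert np.allclose(beliefs.sum(axis=1), 1.0)
    return weights @ beliefs


# Hypothetical example: three networks, four hypotheses; the third network is
# down-weighted because it may contain misinforming agents.
if __name__ == "__main__":
    beliefs = [
        [0.70, 0.20, 0.05, 0.05],
        [0.60, 0.30, 0.05, 0.05],
        [0.05, 0.05, 0.10, 0.80],
    ]
    weights = [0.45, 0.45, 0.10]
    print(combine_network_beliefs(beliefs, weights))
```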
