Abstract

Social learning is a collective approach to decentralised decision-making that comprises two processes: evidence updating and belief fusion. In this paper we propose a social learning model in which agents’ beliefs are represented by a set of possible states, and where the evidence collected can vary in its level of imprecision. We investigate this model using multi-agent and multi-robot simulations and demonstrate that it is robust to imprecise evidence. Our results also show that certain kinds of imprecise evidence can enhance the efficacy of the learning process in the presence of sensor errors.
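
The two processes named above can be illustrated with a minimal sketch, assuming beliefs are sets of candidate states, evidence updating restricts a belief to states consistent with an (imprecise, possibly erroneous) evidence set, and pairwise fusion takes the intersection when agents share candidate states and the union otherwise. This is only an illustrative reading of the abstract; the paper's actual operators and parameter names may differ, and all identifiers below are hypothetical.

```python
import random

STATES = set(range(10))   # possible world states
TRUE_STATE = 3            # the state agents are trying to learn

def collect_evidence(imprecision: int, error_rate: float) -> set:
    """Return an imprecise evidence set: a handful of states that usually,
    but not always (sensor error), includes the true state."""
    centre = TRUE_STATE if random.random() > error_rate else random.choice(sorted(STATES))
    noise = random.sample(sorted(STATES - {centre}), imprecision)
    return {centre, *noise}

def update(belief: set, evidence: set) -> set:
    """Evidence updating: keep only states consistent with the evidence,
    retaining the old belief if the two are wholly inconsistent."""
    restricted = belief & evidence
    return restricted if restricted else belief

def fuse(b1: set, b2: set) -> set:
    """Belief fusion: agree on common states when possible, otherwise pool."""
    common = b1 & b2
    return common if common else b1 | b2

# One run of decentralised learning for a small population.
agents = [set(STATES) for _ in range(20)]   # start fully uncertain
for step in range(50):
    agents = [update(b, collect_evidence(imprecision=2, error_rate=0.1))
              for b in agents]
    i, j = random.sample(range(len(agents)), 2)   # random pairwise fusion
    fused = fuse(agents[i], agents[j])
    agents[i] = agents[j] = fused

converged = sum(1 for b in agents if b == {TRUE_STATE})
print(f"{converged}/{len(agents)} agents converged on the true state")
```

Varying the hypothetical `imprecision` and `error_rate` parameters in such a sketch is one way to explore the abstract's claim that broader (more imprecise) evidence sets can buffer against sensor errors during fusion.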
