Abstract
We discuss the collective decision-making and learning capabilities of social networks in the presence of uncertainty. We present a discrete-time decision-making model for a network of agents in an uncertain environment, wherein no agent has a model of the environment's evolution. The environment's impact on the agent network is captured through a sequence of cost functions, where the costs are revealed to the agents only after they commit to their decisions. The costs comprise individual agent costs and local-interaction costs incurred by each agent and its neighbors in the social network. In this model, each agent has a default mixed strategy that stays fixed regardless of the state of the environment, and the agent must expend effort to deviate from this strategy in order to mitigate the impact of the uncertain costs imposed by the environment. We construct decentralized agent strategies whereby each agent selects its strategy based only on its own costs and the decisions of its neighbors in the network. In this setting, we quantify social learning in terms of regret, defined as the difference between the realized network performance over a given time horizon and the best performance that could have been achieved in hindsight by a fictitious centralized entity with full knowledge of the environment's evolution.
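For concreteness, one standard way to write such a regret notion is sketched below; the symbols (the joint decision $x_t$, the per-round network cost $f_t$, the feasible set $\mathcal{X}$, and the horizon $T$) are our own notation for illustration and are not taken from the abstract.

```latex
% Sketch of a cumulative-regret definition consistent with the abstract
% (notation assumed, not taken from the paper):
%   x_t = (x_{1,t}, ..., x_{n,t})  -- the agents' mixed strategies at round t
%   f_t                            -- the network cost (individual + local-interaction
%                                     terms) revealed only after the round-t decisions
%   \mathcal{X}                    -- joint strategies available to the fictitious
%                                     centralized benchmark entity
\[
  R_T \;=\; \sum_{t=1}^{T} f_t(x_t)
  \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x).
\]
```

Under this convention, regret that grows sublinearly in $T$ means the network's time-averaged performance approaches that of the hindsight-optimal centralized benchmark, which is the usual sense in which such a model exhibits learning.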