Abstract

This work proposes a novel strategy for social learning by introducing the critical feature of adaptation. In social learning, several distributed agents continually update their belief about a phenomenon of interest through: i) direct observation of streaming data that they gather locally; and ii) diffusion of their beliefs through local cooperation with their neighbors. Traditional social learning implementations are known to learn the underlying hypothesis well (meaning that the belief of every individual agent peaks at the true hypothesis), achieving steady improvement in learning accuracy under stationary conditions. However, these algorithms do not perform well under the nonstationary conditions commonly encountered in online learning, exhibiting significant inertia in tracking drifts in the streaming data. To address this gap, we propose an Adaptive Social Learning (ASL) strategy, which relies on a small step-size parameter to tune the degree of adaptation. First, we provide a detailed characterization of the learning performance by means of a steady-state analysis. Focusing on the small step-size regime, we establish that the ASL strategy achieves consistent learning under standard global identifiability assumptions. We derive reliable Gaussian approximations for the probability of error (i.e., of choosing a wrong hypothesis) at each individual agent. We then carry out a large deviations analysis revealing the universal behavior of adaptive social learning: the error probabilities decrease exponentially fast with the inverse of the step-size, and we characterize the resulting exponential learning rate. Second, we characterize the adaptation performance by means of a detailed transient analysis, which yields useful analytical formulas relating the adaptation time to the step-size. The revealed dependence of the adaptation time and the error probabilities on the step-size highlights the fundamental trade-off between adaptation and learning in adaptive social learning.
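The abstract does not spell out the ASL recursion itself. As a rough sketch, one common form of an adaptive social learning iteration combines a step-size-discounted Bayesian update of each agent's belief with geometric averaging of the neighbors' intermediate beliefs. The function name, the NumPy formulation, and the combination-matrix convention below are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def asl_step(mu_prev, likelihoods, A, delta):
    """One adaptive social learning iteration (illustrative sketch).

    mu_prev     : (N, H) array, previous beliefs of N agents over H hypotheses
    likelihoods : (N, H) array, likelihood of each agent's fresh observation
                  under each hypothesis
    A           : (N, N) left-stochastic combination matrix; A[l, k] is the
                  weight agent k assigns to neighbor l
    delta       : small step size in (0, 1] tuning the degree of adaptation
    """
    # Adaptive (discounted) Bayesian update: the step size limits the memory
    # of past beliefs, so agents can track drifts in the streaming data.
    log_psi = (1 - delta) * np.log(mu_prev) + delta * np.log(likelihoods)

    # Diffusion: geometric averaging of intermediate beliefs over neighbors.
    log_mu = A.T @ log_psi

    # Normalize each agent's belief back to a probability vector.
    log_mu -= log_mu.max(axis=1, keepdims=True)
    mu = np.exp(log_mu)
    return mu / mu.sum(axis=1, keepdims=True)
```

In this reading, the step size plays exactly the role described above: a smaller delta averages over a longer effective window of observations, driving the error probabilities down exponentially in the inverse of the step-size, while a larger delta shortens the adaptation time after a drift.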

Highlights

  • Social learning is a collective process whereby some agents form their opinions about a phenomenon of interest through the local exchange of information [2]–[12]

  • Since the inter-agent dependence is usually not known to the agents, the focus is on marginal distributions, i.e., on the distribution pertaining to any individual agent

  • Social learning is a relevant inferential paradigm lying at the core of many multi-agent systems

Summary

INTRODUCTION

Social learning is a collective process whereby some agents form their opinions about a phenomenon of interest through the local exchange of information [2]–[12]. In the introductory example, the underlying state of nature changes, and the traditional social learning algorithm reacts to this change with considerable delay: it only perceives that something has changed at instant i ≈ 350, yet still fails to detect the true state, because each agent gives maximum credibility to the wrong intermediate hypothesis “cloudy.” Only after a prohibitive number of iterations, at i ≈ 550, do the agents manage to overcome their stubbornness and opt for the correct hypothesis “rainy”; in the meantime, they maintain some skepticism regarding the true hypothesis, as illustrated in the belief curves of Fig. 2. This behavior can be problematic for an online algorithm continuously fed by streaming data since, in many practical scenarios, the system operating conditions (e.g., the underlying state of nature as in the introductory example, the network topology, the quality of data, the statistical models, ...) are reasonably expected to undergo some changes over time.
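To make this inertia concrete, a minimal simulation can contrast a small and a large step size when the underlying state switches mid-stream. The three weather hypotheses, the Gaussian likelihoods, the ring network, and every numerical value below are assumptions chosen purely for illustration (they do not reproduce the experiment of Fig. 2); the sketch reuses the asl_step() function from the code above.

```python
# Hypothetical drift-tracking experiment: the true state switches from
# "sunny" to "rainy" at iteration 300, and we record when all agents'
# beliefs recover. Assumes asl_step() from the earlier sketch.
import numpy as np

rng = np.random.default_rng(0)
N, H = 10, 3                        # 10 agents; hypotheses: sunny/cloudy/rainy
means = np.array([0.0, 1.0, 2.0])   # hypothesis-dependent means of the data

# Doubly stochastic combination matrix for a ring network (assumed topology).
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[(k - 1) % N, k] = 0.25
    A[(k + 1) % N, k] = 0.25

for delta in (0.01, 0.1):
    mu = np.full((N, H), 1.0 / H)    # uniform initial beliefs
    recovered_at = None
    for i in range(600):
        true_h = 0 if i < 300 else 2                 # abrupt change at i = 300
        x = means[true_h] + rng.standard_normal(N)   # one observation per agent
        lik = np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2)
        mu = asl_step(mu, lik, A, delta)
        if i >= 300 and recovered_at is None and (mu.argmax(axis=1) == 2).all():
            recovered_at = i
    print(f"delta={delta}: all agents pick the new state at i={recovered_at}")
```

With the larger step size the agents discard stale evidence faster and recover the new hypothesis sooner, at the price of noisier steady-state beliefs; this is the adaptation/learning trade-off that the paper quantifies.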

BACKGROUND
Adaptive Social Learning
ASL–Stochastic Gradient Interpretation
ASL–Bayesian Update Interpretation
STATISTICAL DESCRIPTORS OF THE LEARNING PERFORMANCE
Log-Belief Ratios and Error Probabilities
Log-Likelihood Ratios
STEADY-STATE ANALYSIS
Steady-State Log-Belief Ratios
SMALL-δ ANALYSIS
Consistent Social Learning
Normal Approximation for Small δ
Large Deviations for Small δ
Qualitative Description of the Transient Phase
Quantitative Description of the Transient Phase
ILLUSTRATIVE EXAMPLES
Consistency
Error Exponents
Asymptotic Normality
EVOLUTION OVER SUCCESSIVE LEARNING CYCLES
CONCLUDING REMARKS
