Abstract

We consider learning an environment by a team of spatially distributed agents, each taking ongoing local measurements, with the team linked by a sparse directed communication network. Agents exploit their local measurements and exchange messages with their neighbors in the communication network to quickly learn which environment—from a given finite set of presumed ones—is active. Prior work on this distributed learning setup assumed that the local measurements taken by the agents are uncorrelated over time, a convenient assumption that nonetheless excludes many practical setups. In this letter, by modifying a recent distributed learning algorithm, we extend distributed learning to environments in which the time correlation of the local measurements is arbitrary. Since correlated measurements invalidate the previous proofs, we provide a new proof guaranteeing that the modified distributed learning algorithm succeeds. Our new proof technique is simple, relies only on basic tools such as the Perron–Frobenius theorem and the ergodic theorem for Markov chains, and even covers the case of mismatched assumptions—the active environment is not in the finite set of presumed ones. A numerical example confirms the novel theoretical findings.
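As a rough illustration of the kind of setup the abstract describes—not the paper's exact algorithm, whose details are not given here—the following sketch runs a standard log-linear consensus-plus-innovation update for distributed hypothesis testing. Three agents on a sparse directed ring each draw Bernoulli observations and mix log-beliefs with a neighbor; the network weights `A`, the hypothesis set `P`, and all parameter values are illustrative assumptions.

```python
import math, random

random.seed(0)

# Hypothetical illustration (assumed setup, not the letter's algorithm):
# distributed binary hypothesis testing via log-linear consensus + innovation.
P = [0.3, 0.7]          # P[k] = Bernoulli success probability under hypothesis k
TRUE = 1                # environment 1 is the active one
# Row-stochastic weights of a sparse directed ring: each agent averages
# its own log-belief with that of one in-neighbor.
A = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]]

n, K = 3, 2
lam = [[0.0] * K for _ in range(n)]   # log-beliefs, one row per agent

def loglik(x, k):
    """Log-likelihood of observation x under hypothesis k."""
    p = P[k]
    return math.log(p if x == 1 else 1.0 - p)

for t in range(300):
    obs = [1 if random.random() < P[TRUE] else 0 for _ in range(n)]  # local data
    # Consensus step (mix neighbors' log-beliefs) + innovation step (new data).
    lam = [[sum(A[i][j] * lam[j][k] for j in range(n)) + loglik(obs[i], k)
            for k in range(K)] for i in range(n)]

decisions = [max(range(K), key=lambda k: lam[i][k]) for i in range(n)]
print(decisions)  # each agent should identify the active environment
```

Under i.i.d. observations the log-belief gap between hypotheses drifts at the rate of a network-averaged KL divergence, so every agent eventually selects the true environment; handling arbitrary time correlation, as the letter does, requires the different proof machinery described above.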
