Abstract

This paper considers the problem of distributed active hypothesis testing. At every time instant, individual nodes in the network adaptively choose a sensing action and receive noisy local (private) observations as sensing outcomes. The distribution of observations is parameterized by a discrete parameter (the hypothesis). The marginals of the joint observation distribution conditioned on each hypothesis and action are known locally at the nodes, but the true parameter/hypothesis is not known. An update rule is analyzed in which nodes first choose a possibly randomized action as a function of their past observations and actions. Nodes then perform a Bayesian update of their belief (distribution estimate) over the hypotheses based on their current local observations. Each node communicates these updated beliefs to its neighbors, and then performs a “non-Bayesian” linear consensus using the log-beliefs of its neighbors. Under mild assumptions and for a general class of action selection strategies, we show that the belief of any node on a wrong hypothesis converges to zero exponentially fast, and that the exponential rate of learning is characterized by the nodes' influence in the network and the average distinguishability between the observation distributions under the (randomized) actions and the true hypothesis.
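To make the update rule concrete, the following is a minimal Python sketch of a single time step as described above: a local Bayesian update of the belief, followed by a log-linear ("non-Bayesian") consensus over the beliefs received from neighbors. The function names, the toy likelihood values, and the uniform consensus weights are illustrative assumptions, not the paper's notation or results.

```python
import numpy as np

def local_bayes_update(prior, likelihoods):
    """Bayesian update of a node's belief over the hypotheses, given the
    likelihood of its current private observation under each hypothesis
    and the chosen sensing action."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

def log_linear_consensus(neighbor_beliefs, weights):
    """Consensus step: weighted averaging of the log-beliefs received from
    neighbors (including the node's own updated belief), then renormalization."""
    log_mix = weights @ np.log(neighbor_beliefs)   # weighted sum of log-beliefs per hypothesis
    belief = np.exp(log_mix - log_mix.max())       # subtract max for numerical stability
    return belief / belief.sum()

# Toy example (assumed values): two hypotheses, one node with two neighbors.
prior = np.array([0.5, 0.5])
likelihoods = np.array([0.8, 0.3])                 # P(observation | hypothesis, action)
updated = local_bayes_update(prior, likelihoods)

neighbor_beliefs = np.vstack([updated,             # own updated belief
                              [0.6, 0.4],          # beliefs received from neighbor 1
                              [0.7, 0.3]])         # beliefs received from neighbor 2
weights = np.array([1/3, 1/3, 1/3])                # one row of a stochastic consensus matrix
new_belief = log_linear_consensus(neighbor_beliefs, weights)
print(new_belief)                                  # next-step belief over the hypotheses
```

Repeating this step, with actions drawn as a function of past observations and actions, yields the belief sequence whose convergence rate the paper characterizes.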

