Abstract
In this paper, we study almost sure convergence of a dynamic average consensus algorithm that enables distributed computation of the product of n time-varying conditional probability density functions. These density functions (often referred to as “belief functions”) correspond to the conditional probability of observations given the state of an underlying Markov chain, which is observed by n different nodes within a sensor network. The topology of the sensor network is modeled as an undirected graph. The average consensus algorithm is used to obtain a distributed state estimation scheme for a hidden Markov model (HMM). We use the ordinary differential equation (ODE) technique to analyze the convergence of a stochastic approximation type algorithm for achieving average consensus with a constant step size. It is shown that, for a connected graph, under mild assumptions on the first and second moments of the observation probability densities and a geometric ergodicity condition on an extended Markov chain, the consensus filter state of each individual sensor converges almost surely to the true average of the logarithm of the belief functions of all the sensors. Convergence is proved using a perturbed stochastic Lyapunov function technique. Numerical results suggest that the distributed estimates of the Markov chain state obtained at the individual sensor nodes based on this consensus algorithm track the centralized state estimate (computed with access to the observations of all the nodes) quite well, while more formal results on convergence of the distributed HMM filter to the centralized one are currently under investigation.
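To make the scheme concrete, the following is a minimal sketch of one standard instantiation of dynamic average consensus with a constant step size, applied to tracking the average of local log-beliefs; the graph, step size, Gaussian observation model, and the exact recursion are illustrative assumptions, not the paper's specific algorithm. Because the graph Laplacian annihilates the all-ones vector, the network-wide sum of the consensus states is preserved, so each node's state tracks the true average; multiplying by n recovers the sum of the log-beliefs, i.e. the log of the product of the belief functions.

```python
# Illustrative sketch (assumed setup): dynamic average consensus on a
# ring of n sensors tracking the average log-belief for an HMM state.
import numpy as np

rng = np.random.default_rng(0)

n = 6                                # number of sensor nodes (assumed)
# Undirected ring graph: adjacency A and graph Laplacian L = D - A.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Two-state Markov chain with transition matrix P (assumed values).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
means = np.array([0.0, 2.0])         # observation mean under each state
sigma = 1.0                          # per-sensor observation noise std

def log_belief(y, state_hypothesis):
    """log p(y_i | state) under an assumed Gaussian observation model."""
    return (-0.5 * ((y - means[state_hypothesis]) / sigma) ** 2
            - 0.5 * np.log(2 * np.pi * sigma ** 2))

eps = 0.2                            # constant consensus step size (assumed)
T = 200
state = 0
x = np.zeros(n)                      # consensus filter state at each node
r_prev = np.zeros(n)                 # previous local log-belief ("reference")

for k in range(T):
    # Markov chain transition and a noisy observation at each sensor.
    state = rng.choice(2, p=P[state])
    y = means[state] + sigma * rng.standard_normal(n)
    r = log_belief(y, state_hypothesis=1)   # track log p(y_i | state = 1)

    # Dynamic average consensus, constant step size: each node mixes
    # with its neighbors and injects the increment of its local
    # log-belief so that x_i tracks (1/n) * sum_j r_j over time.
    x = x - eps * (L @ x) + (r - r_prev)
    r_prev = r

print("node estimates :", np.round(x, 3))
print("true average   :", round(r.mean(), 3))
# n * x_i then approximates sum_j log p(y_j | state), the logarithm of
# the product of the n belief functions, available locally at node i.
```

In this sketch the step size must satisfy eps < 2 / lambda_max(L) for the consensus dynamics to be stable; the tracking error at each node stays bounded in terms of how fast the local log-beliefs vary between steps.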