Abstract

The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication. It introduces separably estimable observation models, which generalize the observability condition of linear centralized estimation to nonlinear distributed estimation. For separably estimable models, it studies two distributed estimation algorithms: NU, with its linear counterpart LU, and NLU. Their update rule combines a consensus step, in which each sensor updates its state by weighted averaging with its neighbors' states, and an innovation step, in which each sensor processes its current local observation. This makes the three algorithms of the consensus + innovations type, very different from traditional consensus. The paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For LU and NU, it proves asymptotic normality and provides convergence rate guarantees. All three algorithms are characterized by appropriately chosen decaying weight sequences. LU and NU are analyzed in the framework of stochastic approximation theory; NLU exhibits mixed time-scale behavior and biased perturbations, so its analysis requires a different approach, which the paper develops.
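
To make the update rule concrete, here is a minimal Python sketch of a consensus + innovations recursion for the linear case, in the spirit of LU: each sensor i updates x_i(t+1) = x_i(t) - β(t) Σ_{j ∈ Ω_i} (x_i(t) - x_j(t)) + α(t) H_i^T (z_i(t) - H_i x_i(t)). The network, the observation matrices H_i, the noise level, and the particular weight sequences α(t), β(t) are illustrative assumptions; the paper prescribes conditions on the weight sequences rather than these specific values, and its LU also handles quantized and noisy inter-sensor links, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): 4 sensors, 2-D parameter.
theta_true = np.array([1.0, -2.0])             # unknown static parameter
H = [np.array([[1.0, 0.0]]),                   # each sensor observes only part of theta;
     np.array([[0.0, 1.0]]),                   # jointly, sum_i H_i^T H_i is invertible,
     np.array([[1.0, 1.0]]),                   # i.e., the model is centrally observable
     np.array([[1.0, -1.0]])]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # ring graph

x = [np.zeros(2) for _ in range(4)]            # local estimates, one per sensor

for t in range(20000):
    alpha = 0.5 / (t + 1)                      # innovation weight: decays as 1/t
    beta = 0.3 / (t + 1) ** 0.6                # consensus weight: decays more slowly
    z = [H[i] @ theta_true + 0.5 * rng.standard_normal(1) for i in range(4)]
    x_next = []
    for i in range(4):
        consensus = sum(x[i] - x[j] for j in neighbors[i])  # disagreement with neighbors
        innovation = H[i].T @ (z[i] - H[i] @ x[i])          # local observation residual
        x_next.append(x[i] - beta * consensus + alpha * innovation)
    x = x_next                                 # synchronous update across sensors

print(np.round(x, 3))                          # all four estimates approach theta_true
```

Note that no single sensor can estimate theta on its own here (each H_i is rank one); the consensus term is what lets locally unobservable sensors benefit from the network's joint observability.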

Highlights

  • We present three distributed consensus + innovations inference algorithms: LU for linear observation models and two algorithms, NU and NLU, for nonlinear observation models. The paper introduces the conditions on the sensor observation model and on the communication network under which the distributed estimates converge

  • We show consistency for NLU under weaker assumptions than those required for NU (Lipschitz continuity plus growth conditions). On the other hand, when these more stringent conditions do hold, NU provides convergence rate guarantees and asymptotic normality; these follow from standard stochastic approximation results that apply to NU but not to NLU

  • We study their asymptotic properties, namely consistency, asymptotic unbiasedness, and, for the LU and NU algorithms, asymptotic normality


Summary

Background and Motivation

The paper studies distributed inference, in particular distributed estimation, through consensus + innovations algorithms that generalize distributed consensus by combining, at each time step, cooperation among agents (consensus) with assimilation of their observations (innovations). We present three distributed consensus + innovations inference algorithms: LU for linear observation models (as when each sensor makes a noisy reading of the temperature at its location; see Section II-E) and two algorithms, NU and NLU, for nonlinear observation models (as in power grids, where each sensor measures a phase differential through a sinusoidal modulation; see Section IV-D). The paper introduces the conditions on the sensor observation model (separable estimability, which we define) and on the communication network (connectedness on average) for the distributed estimates to converge. Apart from treating generic separably estimable nonlinear observation models, in the linear case our algorithms NU and LU lead to asymptotic normality, in addition to consistency and asymptotic unbiasedness, in random time-varying networks with quantized inter-sensor communication and sensor failures.
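
For the nonlinear case, the sketch below adapts the same recursion to a sinusoidal observation, loosely modeled on the power-grid example: each sensor measures z_i = sin(θ) + noise, with θ restricted to (-π/2, π/2) so that sin is invertible and the scalar model is identifiable. The gradient-type innovation cos(x_i)(z_i - sin(x_i)) is an illustrative stand-in, not the paper's NU or NLU construction, and the parameter value, noise level, and weight sequences are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true = 0.8                               # assumed scalar parameter in (-pi/2, pi/2)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # ring of 4 sensors
x = [0.0] * 4                                  # local estimates

for t in range(20000):
    alpha = 0.5 / (t + 1)                      # innovation weight
    beta = 0.3 / (t + 1) ** 0.6                # consensus weight
    z = [np.sin(theta_true) + 0.3 * rng.standard_normal() for _ in range(4)]
    x_next = []
    for i in range(4):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        # gradient-type innovation (illustrative; not the paper's NU/NLU map)
        innovation = np.cos(x[i]) * (z[i] - np.sin(x[i]))
        x_next.append(x[i] - beta * consensus + alpha * innovation)
    x = x_next

print(np.round(x, 3))                          # estimates cluster near theta_true
```

The restriction to (-π/2, π/2) is what makes this toy model identifiable from the expected observations, which is the intuition behind the separable estimability condition the paper formalizes.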

Notation
DISTRIBUTED LINEAR PARAMETER ESTIMATION
Problem Formulation
Asymptotic Variance
An Example
Some Generalizations
NONLINEAR OBSERVATION MODELS
Nonlinear Observation Models
Algorithm NU and Assumptions
Algorithm NU
Algorithm NLU
Consistency and Asymptotic Unbiasedness of NLU
Application
CONCLUSION