Abstract

In this paper, we study the problem of distributed bias-compensated recursive least-squares (BC-RLS) estimation over multi-agent networks, where the agents collaborate to estimate a common parameter of interest. We consider the situation where both the input and the output of each agent are corrupted by unknown additive noise. Under this condition, the traditional recursive least-squares (RLS) estimator is biased, and the bias is induced by the input noise variance. When the input noise variance is known, the noise-induced bias can be removed, at the expense of an increase in estimation variance. Fortunately, it has been shown that distributed collaboration among agents can effectively reduce this variance and improve the stability of the estimator. We therefore propose a distributed incremental BC-RLS algorithm and a simplified version of it. The proposed algorithms collaboratively estimate the unknown input noise variance and remove the resulting noise-induced bias, so that consistent estimation of the unknown parameter is achieved in an incremental fashion. Simulation results show that the incremental BC-RLS solutions outperform existing solutions.
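To illustrate the bias-compensation idea described in the abstract, the following is a minimal batch (non-recursive, single-agent) sketch, not the paper's algorithm: when the regressors are observed through additive input noise of known variance, the least-squares normal equations can be corrected by subtracting that variance from the sample input correlation matrix. All variable names and the specific noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical errors-in-variables setup (all values illustrative):
M, N = 4, 20000                       # parameter dimension, number of samples
w_true = rng.standard_normal(M)       # unknown parameter to estimate
sigma_in2, sigma_out2 = 0.5, 0.1      # input/output noise variances (input variance assumed known here)

x = rng.standard_normal((N, M))                               # clean regressors
u = x + np.sqrt(sigma_in2) * rng.standard_normal((N, M))      # noisy observed inputs
d = x @ w_true + np.sqrt(sigma_out2) * rng.standard_normal(N) # noisy observed outputs

R = u.T @ u / N   # sample input correlation matrix (inflated by input noise)
r = u.T @ d / N   # sample input-output cross-correlation

# Ordinary least squares: biased, since E[R] = R_x + sigma_in2 * I.
w_ls = np.linalg.solve(R, r)

# Bias-compensated least squares: subtract the known input-noise
# contribution before solving, trading bias for extra variance.
w_bc = np.linalg.solve(R - sigma_in2 * np.eye(M), r)

err_ls = np.linalg.norm(w_ls - w_true)
err_bc = np.linalg.norm(w_bc - w_true)
print(f"LS error: {err_ls:.4f}, BC error: {err_bc:.4f}")
```

In the paper's distributed setting, the compensation additionally requires estimating the unknown input noise variance, and the increased variance of the compensated estimate is what the incremental collaboration among agents is designed to reduce.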
