Abstract

Distributed estimation has attracted increasing attention owing to its scalability, robustness, and low power consumption. Most distributed estimation algorithms assume that the output of the system is noisy while the input data are accurate. In real applications, however, both the input and the output data may be perturbed by noise, so it is unrealistic to assume that all entries of the input data are accurate and only those of the output data are corrupted. When both input and output data are noisy, the total least-squares (TLS) method minimizes the perturbations in both, and thus provides better performance than the least-squares (LS)-based method. Moreover, many natural and man-made systems exhibit a high level of sparsity. In this paper, we consider the case in which both the input and output data are corrupted by noise and the parameter of interest is sparse. We present several sparsity-aware distributed TLS algorithms for the in-network cooperative estimation problem, in which an $l_{1}$- or $l_{0}$-norm penalty term is used to exploit the sparsity of the signal. We then provide theoretical analysis of the mean and mean-square performance of the proposed algorithms. Finally, numerical simulations verify the effectiveness and advantages of the proposed algorithms.
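To illustrate the errors-in-variables setting the abstract describes, the following is a minimal sketch of classical (non-distributed, non-sparse) TLS via the SVD of the augmented data matrix, contrasted with ordinary LS. All data here are synthetic, and the noise levels and dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5  # number of samples, parameter dimension (illustrative)

# Sparse ground-truth parameter: only two nonzero entries.
w_true = np.zeros(p)
w_true[[0, 3]] = [1.0, -2.0]

A_clean = rng.standard_normal((n, p))
b_clean = A_clean @ w_true

# Perturb BOTH the input matrix and the output vector with noise,
# which is the setting TLS is designed for.
A = A_clean + 0.05 * rng.standard_normal((n, p))
b = b_clean + 0.05 * rng.standard_normal(n)

# Ordinary LS: accounts only for noise in the output b.
w_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# TLS: take the right singular vector of [A | b] associated with the
# smallest singular value and normalize its last component.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
w_tls = -v[:p] / v[p]
```

The paper's contribution goes beyond this baseline: the estimation is performed cooperatively over a network, and a sparsity-promoting $l_{1}$- or $l_{0}$-norm penalty is added, neither of which appears in this plain TLS sketch.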

Full Text
Published version (Free)