Abstract

We characterize the performance of sequential information-guided sensing (Info-Greedy Sensing) when the model parameters (means and covariance matrices) are estimated and inaccurate. Our theoretical results focus on Gaussian signals and establish performance bounds for signal estimators obtained by Info-Greedy Sensing, in terms of conditional entropy (related to the estimation error) and additional power required due to inaccurate models. We also show covariance sketching can be used as an efficient initialization for Info-Greedy Sensing. Numerical examples demonstrate the good performance of Info-Greedy Sensing algorithms compared with random measurement schemes in the presence of model mismatch.
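For concreteness, the following minimal sketch (ours, not the authors' code) illustrates Info-Greedy Sensing in the Gaussian case: each measurement is taken along the leading eigenvector of the current posterior covariance, and the posterior is updated by standard Gaussian conditioning. The function name info_greedy_gaussian and the noise parameter are illustrative assumptions; power allocation across measurements and the paper's mismatch analysis are not modeled here.

```python
import numpy as np

def info_greedy_gaussian(x, mu, Sigma, num_measurements, noise_std=0.1, seed=0):
    """Sketch of sequential Info-Greedy Sensing for a Gaussian signal x ~ N(mu, Sigma).

    Each measurement vector is the top eigenvector of the current posterior
    covariance (the mutual-information-maximizing direction for Gaussian
    signals); the posterior mean and covariance are then updated by Gaussian
    conditioning. Returns the final posterior mean as the signal estimate.
    """
    rng = np.random.default_rng(seed)
    mu = mu.astype(float).copy()
    Sigma = Sigma.astype(float).copy()
    for _ in range(num_measurements):
        # Measurement direction: leading eigenvector of the posterior covariance.
        _, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
        a = eigvecs[:, -1]
        # Noisy linear measurement y = a^T x + w, with w ~ N(0, noise_std^2).
        y = a @ x + noise_std * rng.standard_normal()
        # Rank-one posterior update (Kalman-style gain).
        s = a @ Sigma @ a + noise_std**2
        k = Sigma @ a / s
        mu = mu + k * (y - a @ mu)
        Sigma = Sigma - np.outer(k, a @ Sigma)
    return mu, Sigma
```

Running this routine with a sample-estimated or sketched covariance in place of the true one is exactly the mismatch scenario the paper analyzes, quantifying the resulting loss in conditional-entropy reduction and the additional power required.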

Highlights

  • Sequential compressed sensing is a promising new information acquisition and recovery technique for processing the big data that arise in various applications such as compressive imaging [1,2,3], power network monitoring [4], and large-scale sensor networks [5].

  • We focus on analyzing deterministic model mismatch, a reasonable assumption since we aim to provide instance-specific performance guarantees with sample-estimated or sketched initial parameters.

  • We also introduce the case of sensing Gaussian mixture model (GMM) signals.

Introduction

Sequential compressed sensing is a promising new information acquisition and recovery technique for processing the big data that arise in various applications such as compressive imaging [1,2,3], power network monitoring [4], and large-scale sensor networks [5]. To harvest the benefits of adaptivity in sequential compressed sensing, various algorithms have been developed (see [6] for a review). These algorithms may be classified as (1) those agnostic about the signal distribution, which use random measurements [7,8,9,10]; (2) those exploiting additional structure of the signal (such as graphical structure [11], sparsity [12,13,14], low rank [15], and tree-sparse structure [16,17]) to design measurements; and (3) those exploiting the distributional information of the signal in choosing measurements [18], possibly by maximizing mutual information.
