Abstract

This paper considers a sparse signal recovery task in time-varying (time-adaptive) environments. The contribution of the paper to sparsity-aware online learning is threefold. First, a generalized thresholding (GT) operator, which relates to both convex and non-convex penalty functions, is introduced. This operator embodies, in a unified way, the majority of well-known thresholding rules that promote sparsity. Second, a non-convexly constrained, sparsity-promoting, online learning scheme, namely the adaptive projection-based generalized thresholding (APGT), is developed, which incorporates the GT operator and has a computational complexity that scales linearly with the number of unknowns. Third, the novel family of partially quasi-nonexpansive mappings is introduced as a functional analytic tool for treating the GT operator. By building upon the rich fixed point theory, this class of mappings also establishes a link between the GT operator and a union of linear subspaces, a non-convex object which lies at the heart of any sparsity-promoting technique, batch or online. Based on this functional analytic framework, a convergence analysis of the APGT is provided. Extensive experiments suggest that the APGT exhibits competitive performance when compared to computationally more demanding alternatives, such as the sparsity-promoting affine projection algorithm (APA)- and recursive least-squares (RLS)-based techniques.
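
To make the role of thresholding rules concrete, the sketch below shows two standard examples, soft and hard thresholding, that the GT operator is stated to unify; it is an illustrative assumption-based sketch in NumPy, not the paper's exact GT operator or the APGT algorithm, and the function names `soft_threshold` and `hard_threshold` are introduced here for illustration only. Hard thresholding in particular maps an estimate onto the set of K-sparse vectors, i.e., a union of K-dimensional linear subspaces, which is the non-convex object mentioned in the abstract.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft thresholding: shrink every entry toward zero by lam
    (the proximal operator of the l1 penalty)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, K):
    """Hard thresholding: keep the K largest-magnitude entries and zero
    out the rest, i.e., project onto the set of K-sparse vectors
    (a union of K-dimensional linear subspaces)."""
    y = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-K:]
    y[keep] = x[keep]
    return y

# Both rules map a dense estimate onto a sparse vector.
x = np.array([0.9, -0.05, 2.4, 0.3, -1.1])
print(soft_threshold(x, lam=0.5))  # small entries zeroed, large ones shrunk
print(hard_threshold(x, K=2))      # only the two largest-magnitude entries survive
```

Both rules cost O(n) to O(n log n) per application, which is consistent with the abstract's claim that the overall scheme scales (roughly) linearly with the number of unknowns.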
