Abstract

This paper revisits the problem of data-adaptive learning of geometric signal structures based on the Union-of-Subspaces (UoS) model. In contrast to prior work, it motivates and investigates an extension of the classical UoS model, termed the Metric-Constrained Union-of-Subspaces (MC-UoS) model. In this regard, it puts forth two iterative methods for data-adaptive learning of an MC-UoS in the presence of complete and missing data. The proposed methods outperform existing approaches to learning a UoS in numerical experiments involving both synthetic and real data, which demonstrates the effectiveness of both the MC-UoS model and the proposed methods.

… etc., if they could explicitly account for any such relationship between the subspaces describing a UoS. In order to capture this intuition of related subspaces, we put forth a novel extension of the traditional UoS model, termed the metric-constrained union-of-subspaces (MC-UoS) model. Heuristically, the MC-UoS model states that signals not only belong to a union of low-dimensional subspaces, but that the individual subspaces are also close to each other with respect to a metric defined on the Grassmann manifold. The main challenge in this regard is the formulation of methods for data-adaptive learning of the subspaces underlying an MC-UoS. We address this challenge by presenting two iterative algorithms, termed MiCUSaL and rMiCUSaL, for data-adaptive learning of an MC-UoS using complete data and missing data, respectively. In order to demonstrate the effectiveness of both the MC-UoS model and the proposed algorithms, we carry out extensive numerical experiments using synthetic and real data. Results of these experiments demonstrate that both MiCUSaL and rMiCUSaL outperform existing approaches to data-adaptive learning of a UoS in terms of robustness to noise, outliers, and missing data.

Notation: We use lower-case and upper-case letters for vectors and matrices, respectively. In addition, (·)^T and tr(·) denote the transpose and trace operations, respectively, while ‖·‖_F and ‖·‖_p denote the Frobenius norm of a matrix and the ℓ_p norm of a vector, respectively. Given a set Ω, A_Ω (resp., v_Ω) denotes the submatrix of A (resp., subvector of v) corresponding to the rows of A (resp., entries of v) indexed by Ω. Finally, a_{i,j} denotes the (i, j)-th element of A and v(i) denotes the i-th entry of a vector v.
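For intuition about what "close with respect to a metric on the Grassmann manifold" can mean, the sketch below computes a standard principal-angle (chordal) distance between two equi-dimensional subspaces using Python/NumPy. This is only an assumed, illustrative choice of subspace metric for the reader's benefit; it is not necessarily the exact metric or the learning procedure (MiCUSaL/rMiCUSaL) developed in the paper.

    # Illustrative sketch (assumed example): chordal distance between the column
    # spans of two matrices with orthonormal columns, via principal angles.
    import numpy as np

    def subspace_distance(U, V):
        """Chordal distance between span(U) and span(V).

        U, V: (m x d) matrices with orthonormal columns.
        Returns a value in [0, sqrt(d)]; 0 means the subspaces coincide.
        """
        # Singular values of U^T V are the cosines of the principal angles.
        cosines = np.linalg.svd(U.T @ V, compute_uv=False)
        cosines = np.clip(cosines, -1.0, 1.0)
        # sqrt(sum of squared sines of the principal angles).
        return np.sqrt(np.maximum(U.shape[1] - np.sum(cosines**2), 0.0))

    # Example: two random 3-dimensional subspaces of R^10.
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((10, 3)))
    V, _ = np.linalg.qr(rng.standard_normal((10, 3)))
    print(subspace_distance(U, V))

Under such a metric, an MC-UoS constraint would penalize pairs of learned subspaces whose distance is large, in addition to requiring that the data be well approximated by the union of those subspaces.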
