Abstract

Existing Dictionary Learning and Sparse Coding (DLSC) algorithms for Symmetric Positive Definite (SPD) matrices usually adopt a Reproducing Kernel Hilbert Space (RKHS) as the workspace in which to perform the necessary linear operations. However, these methods rely heavily on ideal kernel maps and lack robustness across different SPD data, especially under high-sparsity coding. In contrast to existing methods, we explore a new workspace called the Symmetric Matrices Inner Product Space (SMIPS) for modeling robust DLSC of SPD matrices. SMIPS, which can be regarded as the minimal linear expansion space of the SPD manifold, is a linear space equipped with a pre-defined inner product and therefore supports linear operations. Modeling DLSC in SMIPS is more intuitive, and SPD data preserve their matrix form in SMIPS. We then propose a graph-regularized Riemannian Geometry Preserving (RGP) method that incorporates the Riemannian geometric information of the original SPD data, thereby improving the discriminability of the sparse codes, and we develop an efficient sparse coding algorithm that extends feature-sign search to this matrix space to obtain sparse codes of SPD matrices in SMIPS. For dictionary learning, an overall learning strategy that exploits the closed-form solution of the RGP codes is proposed to accelerate the iterative dictionary learning process. Experiments on several computer vision tasks show that our algorithm remains robust even under high-sparsity coding across different datasets and outperforms competing algorithms in both recognition accuracy and learning efficiency, which also demonstrates the validity of SMIPS as a workspace.
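As a minimal illustration of the workspace idea (not the paper's exact formulation), the sketch below shows why a linear expansion of the SPD manifold is needed: symmetric matrices are closed under linear combinations, while SPD matrices alone are not, and the space of symmetric matrices can carry an inner product. The Frobenius inner product ⟨A, B⟩ = tr(AB) is assumed here as a stand-in for the paper's pre-defined inner product.

```python
import numpy as np

def frobenius_inner(A, B):
    """Assumed inner product on symmetric matrices: <A, B> = tr(A B)."""
    return float(np.trace(A @ B))

def is_spd(M, tol=1e-10):
    """Check symmetry and positive definiteness via eigenvalues."""
    return bool(np.allclose(M, M.T) and np.all(np.linalg.eigvalsh(M) > tol))

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
A = X @ X.T + 3.0 * np.eye(3)   # SPD: Gram matrix plus a positive shift
B = 2.0 * A                     # still SPD
D = A - B                       # a linear combination: symmetric, but NOT SPD

print(is_spd(A), is_spd(D))     # the difference leaves the SPD manifold...
print(np.allclose(D, D.T))      # ...but stays in the symmetric matrix space
print(frobenius_inner(A, A) > 0)  # the inner product is positive definite
```

This is only a toy sketch of the ambient space; the paper's RGP coding and dictionary learning operate on top of such a structure with their own regularization.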
