Abstract

In order to improve the interpretability of principal components, many sparse principal component analysis (PCA) methods have been proposed in the form of a self-contained regression-type. In this paper, we generalize the steps needed to move from PCA-like methods to their self-contained regression-type, and propose a joint sparse pixel-weighted PCA method. More specifically, we generalize a self-contained regression-type framework of graph embedding. Unlike the regression-type of graph embedding, which relies on the regular low-dimensional data, the self-contained regression-type framework does not. The low-dimensional data learned in self-contained regression form theoretically approximates the regular low-dimensional data. Under this self-contained regression-type, a sparse regularization term can be added freely, and hence the learned sparse regression coefficients can interpret the low-dimensional data. By using the joint sparse ℓ2,1-norm regularizer, a sparse self-contained regression-type of pixel-weighted PCA can be produced. Experiments on six data sets demonstrate that the proposed method is both feasible and effective.
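The joint sparsity mentioned above comes from the ℓ2,1-norm, which sums the ℓ2 norms of the rows of a coefficient matrix and thereby drives entire rows to zero at once. A minimal NumPy sketch of this norm and its proximal operator (row-wise soft thresholding, a standard building block for ℓ2,1-regularized regression; the function names are illustrative, not from the paper):

```python
import numpy as np

def l21_norm(W):
    # ||W||_{2,1}: sum of the l2 norms of the rows of W.
    # Penalizing this quantity encourages whole rows of W to be zero,
    # i.e. joint (row) sparsity across all regression targets.
    return float(np.sum(np.linalg.norm(W, axis=1)))

def prox_l21(W, lam):
    # Proximal operator of lam * ||W||_{2,1}: shrink each row toward
    # zero by lam; rows whose l2 norm is below lam vanish entirely.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[3.0, 4.0],   # row norm 5.0 -> kept, shrunk
              [0.1, 0.0]])  # row norm 0.1 -> zeroed for lam >= 0.1
print(l21_norm(W))          # 5.1
print(prox_l21(W, 0.5))     # [[2.7, 3.6], [0.0, 0.0]]
```

Rows that survive the shrinkage correspond to selected variables (here, pixels), which is what makes the learned coefficients interpretable.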
