Self-representation methods, such as low-rank representation (LRR), sparse subspace clustering (SSC), and their variants, may generate negative coding coefficients since they impose no explicit nonnegativity constraint. These negative coefficients lack physical meaning: it is unreasonable for a query sample to be encoded by heterogeneous samples whose contributions cancel each other out through subtraction. In this paper, we propose a novel model named Laplacian regularized nonnegative representation (LapNR). The new model improves physical interpretability by requiring that the query sample be approximated from homogeneous samples while remaining irrelevant to heterogeneous ones. More importantly, it captures the geometric information of the input data by imposing graph Laplacian regularization on the nonnegative representations. As a result, the representation matrix generated by our LapNR model becomes sparse and discriminative. Based on the alternating direction method of multipliers (ADMM), an efficient optimization procedure is developed for LapNR. Extensive experiments on clustering and dimensionality reduction tasks show the effectiveness and efficiency of our LapNR.
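Since the abstract does not state the exact objective, the following is only a minimal sketch of what a Laplacian regularized nonnegative representation might look like, assuming the common formulation min_C ||X - XC||_F^2 + lam * tr(C L C^T) subject to C >= 0, where L is a graph Laplacian built from the data. The function names (`lapnr_admm`, `knn_laplacian`) and parameters (`lam`, `rho`) are illustrative, not from the paper; the ADMM splitting C = Z with Z projected onto the nonnegative orthant is one standard choice, not necessarily the authors' procedure.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - W from a symmetrized k-NN graph.
    Columns of X are samples. (Illustrative graph construction.)"""
    n = X.shape[1]
    d2 = np.sum(X**2, axis=0)
    dist = d2[:, None] + d2[None, :] - 2 * X.T @ X  # squared pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:k + 1]          # nearest neighbors, skip self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                          # symmetrize
    return np.diag(W.sum(axis=1)) - W

def lapnr_admm(X, lam=0.1, rho=1.0, iters=100):
    """ADMM sketch (assumed formulation, not the paper's exact algorithm) for
        min_C ||X - XC||_F^2 + lam * tr(C L C^T)  s.t.  C >= 0,
    using the splitting C = Z with Z kept in the nonnegative orthant."""
    n = X.shape[1]
    L = knn_laplacian(X)
    XtX = X.T @ X
    Z = np.zeros((n, n))
    U = np.zeros((n, n))                 # scaled dual variable
    A = 2 * XtX + rho * np.eye(n)        # left coefficient of C
    B = 2 * lam * L                      # right coefficient of C
    for _ in range(iters):
        # C-step: stationarity gives the Sylvester equation
        #   (2 X^T X + rho I) C + C (2 lam L) = 2 X^T X + rho (Z - U)
        C = solve_sylvester(A, B, 2 * XtX + rho * (Z - U))
        Z = np.maximum(C + U, 0.0)       # Z-step: projection onto C >= 0
        U = U + C - Z                    # dual ascent
    return Z
```

The Sylvester solve is well posed here because A is positive definite while B is positive semidefinite, so their spectra cannot collide; the nonnegativity of the returned representation is exact by construction of the projection step.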