Abstract

Rooted in the basic hypothesis that a data matrix is strictly drawn from a union of independent subspaces, the low-rank representation (LRR) model and its variants have been successfully applied to various image classification tasks. However, this hypothesis is overly strict for the LRR model, as it cannot always be guaranteed in real images. Moreover, the hypothesis prevents the sub-dictionaries of different subspaces from collaboratively representing an image. Fortunately, in supervised image classification, a low-rank signal can be extracted from independent label subspaces (ILS) instead of independent image subspaces (IIS). This paper therefore proposes a projective low-rank representation (PLR) model that directly trains a projective function to approximate the LRR derived from the labels. To the best of our knowledge, PLR is the first attempt to use the ILS hypothesis to relax the rigorous IIS hypothesis in LRR models. We further prove a low-rank effect: the representations learned by PLR have high intraclass similarities and large interclass differences, both of which benefit classification. The effectiveness of the proposed approach is validated by experimental results on three databases.
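To make the core idea concrete, the following is a minimal sketch of one plausible instantiation, not the paper's actual algorithm: a block-diagonal low-rank target representation is built from the class labels (the ILS hypothesis, with rank equal to the number of classes), and a linear projective function is fit to approximate it via ridge regression. The helper names, the specific target construction, and the closed-form objective are all illustrative assumptions; the paper's PLR objective may differ.

```python
import numpy as np

def label_lrr_target(labels):
    """Assumed block-diagonal low-rank target built from labels:
    Z[i, j] = 1/n_c if samples i and j share class c, else 0,
    so rank(Z) equals the number of classes."""
    labels = np.asarray(labels)
    n = labels.shape[0]
    Z = np.zeros((n, n))
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        Z[np.ix_(idx, idx)] = 1.0 / idx.size
    return Z

def fit_projection(X, Z, lam=1e-3):
    """Fit a linear projection P minimizing ||P X - Z||_F^2 + lam ||P||_F^2.
    X is d x n (columns are training samples); Z is the n x n target.
    Closed-form ridge solution: P = Z X^T (X X^T + lam I)^{-1}."""
    d = X.shape[0]
    return Z @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

# Toy usage: two classes of 3 samples each in a 5-D feature space.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 6))
y = np.array([0, 0, 0, 1, 1, 1])
Z = label_lrr_target(y)            # rank-2 target (two classes)
P = fit_projection(X, Z)
reps = P @ X                       # learned representations, one column per sample
print(np.linalg.matrix_rank(Z))    # -> 2
```

Under this assumed construction, columns of `reps` for same-class samples are pulled toward a shared low-rank pattern, which mirrors the stated low-rank effect of high intraclass similarity and large interclass difference.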
