Abstract

In this brief, we propose a new max-margin-based discriminative feature learning method. In particular, we aim to learn a low-dimensional feature representation that maximizes the global margin of the data while keeping samples from the same class as close together as possible. To enhance robustness to noise, we leverage a regularization term that encourages row sparsity in the transformation matrix. In addition, we learn and exploit the correlations among multiple categories to assist in learning discriminative features. The experimental results demonstrate the effectiveness of the proposed method compared with related state-of-the-art methods.
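To make the ingredients of the abstract concrete, the sketch below illustrates the general flavor of such an objective in Python: a linear transformation W projects the data, an intra-class compactness term pulls same-class samples toward their class centers, a hinge-style surrogate penalizes class centers that sit closer than a desired margin, and a row-sparsity (l2,1-norm) penalty on W suppresses noisy input features. This is a minimal illustrative sketch, not the paper's actual formulation; the function names, the specific margin surrogate, and the way class correlations are omitted here are all assumptions.

```python
import numpy as np


def l21_norm(W):
    # Sum of the l2 norms of the rows of W; drives entire rows toward
    # zero, i.e. row sparsity, which acts as noisy-feature suppression.
    return np.sum(np.linalg.norm(W, axis=1))


def objective(W, X, y, lam=0.1, margin=1.0):
    # Illustrative objective (not the paper's exact formulation).
    # X: (n_samples, d) data, y: (n_samples,) integer class labels,
    # W: (d, k) transformation matrix to a low-dimensional space.
    Z = X @ W                                   # projected features, (n_samples, k)
    classes = np.unique(y)
    centers = np.stack([Z[y == c].mean(axis=0) for c in classes])

    # Intra-class compactness: projected samples should stay close to
    # their own class center.
    idx = np.searchsorted(classes, y)
    compact = np.mean(np.sum((Z - centers[idx]) ** 2, axis=1))

    # Global-margin surrogate: hinge penalty whenever two class centers
    # are closer than the desired margin.
    margin_loss = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            gap = np.linalg.norm(centers[i] - centers[j])
            margin_loss += max(0.0, margin - gap)

    # Row-sparsity regularization on the transformation matrix.
    return compact + margin_loss + lam * l21_norm(W)


# Toy usage with random data and a random projection matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = rng.integers(0, 3, size=60)
W = rng.normal(size=(10, 4))
print(objective(W, X, y))
```

In practice, W would be optimized (for example by gradient descent or an alternating scheme) rather than fixed, and the paper additionally incorporates learned inter-category correlations, which this toy objective does not model.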
