Abstract

Since large amounts of labeled high-dimensional data need to be processed, supervised feature learning has become an important and challenging problem in machine learning. Conventional supervised methods often adopt an ℓ2-norm loss function, which is sensitive to outliers. However, real-world data frequently contain many outliers, which prevent traditional supervised methods from achieving optimal performance. In addition, these methods cannot reconstruct the original complex structured data well, since the dimensions of their learned projection matrices are often limited to the number of classes and are therefore sub-optimal. To address these challenges, we propose a novel robust discriminative feature learning (RDFL) method based on calibrated data reconstruction and a sparse low-rank model. Specifically, RDFL preserves discriminant information and simultaneously reconstructs the complex low-rank structure by jointly minimizing the ℓ2,1-norm reconstruction error and the within-class distance. To solve the resulting non-smooth problem, we derive an efficient optimization algorithm that softens the contributions of outliers. We further adopt the general power iteration method (GPIM) to accelerate the algorithm, making it scalable to large-scale problems, and we theoretically analyze its convergence and computational complexity. Extensive experimental results show that RDFL outperforms the compared methods in most cases and significantly improves robustness to noise and outliers.
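The robustness argument above hinges on the ℓ2,1-norm, which is commonly defined as the sum of the ℓ2 norms of a matrix's rows. A minimal sketch, assuming the standard definition (the paper's exact calibrated formulation is not given in the abstract), illustrates why this loss is less sensitive to outlier samples than a squared ℓ2 (Frobenius) loss:

```python
import numpy as np

def l21_norm(E):
    """l2,1-norm: sum of the Euclidean norms of the rows of E.

    An outlier row contributes only linearly in its magnitude,
    whereas the squared Frobenius loss grows quadratically, so
    penalizing the l2,1-norm softens outlier contributions.
    """
    return np.sum(np.linalg.norm(E, axis=1))

# Toy residual matrix: the last row models an outlier sample.
# (Illustrative data only, not from the paper.)
E = np.array([[0.1, 0.2],
              [0.0, 0.1],
              [5.0, 5.0]])

print("l2,1 loss:      ", l21_norm(E))
print("squared l2 loss:", np.linalg.norm(E, "fro") ** 2)
```

Scaling the outlier row by 10 multiplies its contribution to the ℓ2,1 loss by 10 but its contribution to the squared loss by 100, which is the intuition behind the "softened" outlier contributions mentioned in the abstract.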
