Abstract

Recently, many works on discriminant analysis have promoted the robustness of models against outliers by using the L1- or L2,1-norm as the distance metric. However, both their robustness and their discriminant power remain limited. In this article, we present a new robust discriminant subspace (RDS) learning method for feature extraction, with an objective function formulated in a different form. To guarantee that the subspace is both robust and discriminative, we measure the within-class distances with the [Formula: see text]-norm and the between-class distances with the [Formula: see text]-norm. This also endows our method with rotational invariance. Since the proposed model involves both [Formula: see text]-norm maximization and [Formula: see text]-norm minimization, it is very challenging to solve. To address this problem, we present an efficient nongreedy iterative algorithm. In addition, motivated by the trace ratio criterion, we derive a mechanism that automatically balances the contributions of the different terms in the objective. RDS is very flexible, as it can be extended to other existing feature extraction techniques. An in-depth theoretical analysis of the algorithm's convergence is presented in this article. Experiments are conducted on several typical databases for image classification, and the promising results demonstrate the effectiveness of RDS.
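For readers unfamiliar with the L2,1-norm distance metric mentioned above, the following is a minimal illustrative sketch, not the authors' RDS algorithm: the exact norms used by RDS are elided in this abstract, and the function names (l21_norm, within_class_scatter_l21) and the use of NumPy are our own assumptions. It shows how an L2,1-norm within-class scatter can be computed in a projected subspace, and why it is commonly considered more robust to outliers than a squared Frobenius-norm scatter.

    # Hypothetical sketch (not the authors' RDS method) of an l2,1-norm
    # within-class scatter in a projected subspace W.
    import numpy as np

    def l21_norm(M):
        """l2,1-norm of a matrix: sum of the l2 norms of its rows."""
        return np.linalg.norm(M, axis=1).sum()

    def within_class_scatter_l21(X, y, W):
        """Sum, over classes, of l2,1-norm distances between projected
        samples and their projected class mean.
        X: (n, d) samples, y: (n,) labels, W: (d, k) projection.
        Each outlier row contributes linearly (not quadratically) to the
        total, which is the usual robustness argument for the l2,1-norm."""
        total = 0.0
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)          # class mean in the input space
            total += l21_norm((Xc - mc) @ W)
        return total

Because the l2,1-norm depends only on the l2 norms of the rows, right-multiplying the projected data by an orthogonal matrix leaves the value unchanged; this is the sense in which such norms yield the rotational invariance the abstract refers to.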
