Abstract

Linear discriminant analysis (LDA) is one of the most popular methods for extracting discriminative features because it is simple and powerful. However, LDA fails to learn a discriminative subspace in some cases. This study addresses one such shortcoming of LDA, the so-called class separation (CS) problem: classes that lie close to each other in the original input space tend to overlap in the learned subspace. The same problem can also arise in oriented discriminant analysis (ODA), a heteroscedastic extension of LDA. To alleviate the problem, we propose two methods that maximize the generalized mean, instead of the arithmetic mean, in the objective functions. Experimental results show that the proposed methods obtain better discriminative subspaces than LDA, ODA, and other alternatives designed to solve the CS problem.
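As a minimal illustration of the idea behind the proposal, the sketch below computes the generalized (power) mean M_p(x) = (mean of x_i^p)^(1/p). The distance values are hypothetical; the point is that for small (especially negative) p, M_p is dominated by the smallest terms, so an objective built on it places more weight on poorly separated class pairs than the arithmetic mean does.

```python
import math

def generalized_mean(values, p):
    """p-th power mean of positive values; p=1 recovers the arithmetic mean."""
    n = len(values)
    if p == 0:
        # The limit p -> 0 is the geometric mean.
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

# Hypothetical pairwise between-class distances: one pair is badly separated.
distances = [0.1, 1.0, 10.0]

print(generalized_mean(distances, 1))   # arithmetic mean, ~3.7: dominated by the large distance
print(generalized_mean(distances, -1))  # harmonic mean, ~0.27: dominated by the small distance
```

Maximizing the p = 1 objective can ignore the badly separated pair, whereas a negative-p generalized mean forces the optimizer to improve it, which is the intuition behind replacing the arithmetic mean in the objective.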
