Abstract

Linear Discriminant Analysis (LDA) is a standard tool for classification and dimension reduction in many applications. However, the high-dimensional setting remains a great challenge for classical LDA. In this paper we consider supervised pattern classification in the high-dimensional setting, in which the number of features is much larger than the number of observations, and we present a novel approach to the sparse optimal scoring problem based on the zero-norm. The difficulty of treating the zero-norm is overcome by using appropriate continuous approximations, so that the resulting problems can be solved by alternating schemes based on DC (Difference of Convex functions) programming and DCA (DC Algorithms). Experimental results on both simulated and real datasets show the efficiency of the proposed algorithms compared to five state-of-the-art methods.
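For context, a common way to cast the problem described above is to add a zero-norm penalty to the optimal scoring formulation of LDA and then replace the zero-norm with a concave continuous approximation that admits a DC decomposition. The sketch below uses the exponential approximation, one standard choice in the DCA literature; the abstract does not specify which approximation the paper adopts, so the concrete surrogate and its DC split here are illustrative assumptions only.

\[
\min_{\theta,\,\beta}\; \|Y\theta - X\beta\|_2^2 + \lambda \|\beta\|_0
\quad \text{s.t.} \quad \tfrac{1}{n}\,\theta^{\top} Y^{\top} Y \theta = 1,
\]
% One continuous surrogate (assumed here for illustration): the exponential approximation
\[
\|\beta\|_0 \;\approx\; \sum_{i} \bigl(1 - e^{-\alpha |\beta_i|}\bigr)
\;=\; \underbrace{\alpha \|\beta\|_1}_{g(\beta)\ \text{convex}}
\;-\; \underbrace{\sum_{i}\bigl(\alpha |\beta_i| - 1 + e^{-\alpha |\beta_i|}\bigr)}_{h(\beta)\ \text{convex}},
\]
where $X$ is the $n \times p$ data matrix, $Y$ the $n \times K$ class indicator matrix, and $\alpha > 0$ controls the tightness of the approximation. With this split the penalized objective is a DC function, so DCA iteratively linearizes $h$ and solves the resulting convex ($\ell_1$-type) subproblem in $\beta$, while the alternating scheme updates $\theta$ and $\beta$ in turn.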
