Abstract
It is well known that linear discriminant analysis (LDA) performs well and is asymptotically optimal in the classical setting where the dimension p is fixed and the sample size n grows large. However, Bickel and Levina (2004) showed that LDA performs as poorly as random guessing when p > n. This article studies sparse discriminant analysis via Dantzig penalized least squares. Our method avoids estimating the high-dimensional covariance matrix and does not require a sparsity assumption on the inverse of the covariance matrix. We establish that the new discriminant rule is asymptotically optimal. Simulation and real-data studies show that the proposed classifier outperforms existing sparse methods.
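The abstract does not spell out the estimator, but applying the Dantzig selector to a least-squares coding of the two class labels is one plausible reading of "Dantzig penalized least squares". The sketch below is a minimal illustration under that assumption only: the function name `dantzig_lda_direction`, the ±n/n_k label coding, and the reduction to a linear program solved with `scipy.optimize.linprog` are our choices for exposition, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_lda_direction(X, y, lam):
    """Hedged sketch: estimate a sparse discriminant direction beta by a
    Dantzig-selector-type constraint on the least-squares coding of two
    class labels (an assumed formulation, not the paper's exact method).

    Solves  min ||beta||_1  s.t.  ||Xc'(yc - Xc beta)/n||_inf <= lam,
    the standard Dantzig selector constraint of Candes and Tao (2007).
    Larger lam yields sparser directions.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                  # centered predictors
    # +n/n1 for class 1, -n/n0 for class 0; this coding has mean zero.
    yc = np.where(y == 1, n / (y == 1).sum(), -n / (y == 0).sum())
    G = Xc.T @ Xc / n                        # Gram matrix (p x p)
    r = Xc.T @ yc / n                        # correlation vector (p,)
    # Linearize the l1 objective: beta = u - v with u, v >= 0,
    # so ||beta||_1 = sum(u + v) and the problem is a plain LP.
    c = np.ones(2 * p)
    A_ub = np.vstack([np.hstack([G, -G]),    #  G beta - r <= lam
                      np.hstack([-G, G])])   # -G beta + r <= lam
    b_ub = np.concatenate([r + lam, lam - r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    uv = res.x
    return uv[:p] - uv[p:]
```

Under this sketch, a new observation x would be assigned to class 1 when beta' (x - (xbar0 + xbar1)/2) exceeds zero, with a log-prior adjustment added for unequal class sizes. Note that the LP works directly with the Gram matrix of the regression formulation rather than a separately estimated p x p covariance, which is consistent with the abstract's claim of avoiding high-dimensional covariance estimation.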