Abstract

Linear discriminant analysis (LDA) is one of the most popular techniques for feature extraction and classifier design. It maximizes the Fisher ratio of the between-class scatter matrix to the within-class scatter matrix under a linear transformation, and the transformation is composed of the generalized eigenvectors of the two matrices. However, the Fisher criterion itself cannot determine the optimal norm of the transformation vectors for classification. In this paper, we show that the norm of the transformation vectors in fact has a strong influence on classification performance, and we propose a novel method, re-weighting LDA, to estimate the optimal norm of LDA under the ranking loss. Experiments on artificial data and real databases demonstrate that the proposed method effectively improves the performance of LDA classifiers. The algorithm can also be applied to other LDA variants, such as nonparametric discriminant analysis (NDA), to further improve their performance.
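
The standard LDA construction the abstract refers to can be sketched as follows; this is a minimal illustrative implementation of classical LDA (not the paper's re-weighting method), with function and variable names chosen here for exposition. The comment on the eigenvector scale highlights the norm ambiguity the abstract identifies.

```python
import numpy as np

def lda_directions(X, y):
    """Classical LDA sketch: generalized eigenvectors of (Sb, Sw).

    Note: each eigenvector's norm is arbitrary -- rescaling a direction
    leaves the Fisher ratio unchanged, which is exactly the ambiguity
    the paper's re-weighting method is meant to resolve.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb v = lambda Sw v via Sw^{-1} Sb,
    # keeping the (at most C-1) directions with the largest Fisher ratio.
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[: len(classes) - 1]]
```

For example, on two well-separated Gaussian classes the single returned direction aligns with the axis separating the class means, but its length carries no information under the Fisher criterion alone.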
