Abstract

Linear Discriminant Analysis (LDA) is a well-known and important tool in pattern recognition with potential applications in many areas of research. The best-known and most widely used formulation of LDA is that given by the Fisher-Rao criterion, where the problem reduces to the simultaneous diagonalization of two symmetric, positive-definite matrices, A and B; i.e., B^-1 A V = V Λ, where Λ is the diagonal matrix of generalized eigenvalues. Here, A defines the metric to be maximized, while B defines the metric to be minimized. However, when B has near-zero eigenvalues, the Fisher-Rao criterion is dominated by them. This works well when such small variances correspond to directions carrying most of the discriminant information, but the results will be incorrect when these small variances are caused by noise. Knowing which of these near-zero eigenvalues should be kept and which should be eliminated is a challenging yet fundamental task in LDA. This paper presents a criterion for selecting those eigenvectors of B that are best suited for classification. The proposed solution is based on a simple factorization of B^-1 A that permits the eigenvectors of B to be re-ordered without affecting the end result. This allows us to readily eliminate the noisy vectors while keeping the most discriminant ones. A theoretical basis for these results is presented, along with extensive experimental results that validate the claims.
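To make the setting concrete, the following is a minimal Python sketch of Fisher-Rao LDA via simultaneous diagonalization, with a plain eigenvalue threshold (`noise_tol`, a hypothetical parameter) standing in for the paper's actual selection criterion, whose exact form is not given in this abstract:

```python
import numpy as np

def lda_directions(A, B, noise_tol=1e-10):
    """Sketch of Fisher-Rao LDA via simultaneous diagonalization.

    A: scatter matrix defining the metric to maximize (symmetric PSD).
    B: scatter matrix defining the metric to minimize (symmetric PD).
    noise_tol: hypothetical threshold below which eigenvalues of B are
               treated as noise and their directions discarded; the
               paper proposes a principled criterion instead.
    """
    # Eigendecompose B to inspect its spectrum directly: B = U diag(d) U^T.
    d, U = np.linalg.eigh(B)

    # Discard directions of B with near-zero variance attributed to noise;
    # left in, they dominate B^{-1} and corrupt the solution.
    keep = d > noise_tol
    U_k, d_k = U[:, keep], d[keep]

    # Whiten with respect to the retained part of B, then diagonalize the
    # projected A. This solves B^{-1} A V = V Λ on the retained subspace.
    W = U_k / np.sqrt(d_k)          # whitening transform: W^T B W = I
    lam, E = np.linalg.eigh(W.T @ A @ W)

    # Sort directions by descending discriminability (entries of Λ).
    order = np.argsort(lam)[::-1]
    return W @ E[:, order], lam[order]
```

In this sketch, dropping a direction or keeping it depends only on the magnitude of its eigenvalue of B; the contribution of the paper is precisely a better-informed rule for deciding which near-zero directions carry discriminant information and which are noise.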
