Abstract

Linear discriminant analysis (LDA) seeks a linear transformation that projects a data set into a lower-dimensional feature space while retaining geometrical class separability. However, LDA cannot always guarantee better classification accuracy. One possible reason is that its formulation is not directly tied to the classification error rate, so it is not necessarily suited to the allocation rule of a given classifier, such as that employed in automatic speech recognition (ASR). In this paper, we extend classical LDA by exploiting the relationship between the empirical classification error rate and the Mahalanobis distance of each class pair: we replace the squared Euclidean distance measure in the original between-class scatter with the pairwise empirical classification accuracy, while preserving LDA's lightweight solvability and, like LDA, making no distributional assumption. Experimental results show that our approach yields moderate improvements over LDA on a large vocabulary continuous speech recognition (LVCSR) task.
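The abstract does not give the exact weighting formula, so the sketch below is only a minimal Python illustration of the general idea: a between-class scatter in which each class pair is weighted by a decreasing function of its Mahalanobis distance, in the spirit of approximate pairwise accuracy criteria. The function `weighted_lda`, the erf-based weight, and all variable names are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal sketch of the idea, not the paper's exact formulation: a
# pairwise-weighted LDA in which each class pair contributes to the
# between-class scatter in proportion to how confusable it is. The
# erf-based weight below (cf. approximate pairwise accuracy criteria)
# is an assumption made for this illustration.
import numpy as np
from scipy.linalg import eigh
from scipy.special import erf

def weighted_lda(X, y, n_components):
    """Project X to n_components dimensions with a pairwise-weighted LDA."""
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / len(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])

    # Pooled within-class scatter (shared covariance estimate).
    Sw = sum((X[y == c] - means[i]).T @ (X[y == c] - means[i])
             for i, c in enumerate(classes)) / len(y)
    Sw_inv = np.linalg.inv(Sw)

    # Weighted between-class scatter: the weight decreases with the
    # Mahalanobis distance, so easily confused (close) class pairs
    # dominate the criterion instead of already well-separated ones.
    dim = X.shape[1]
    Sb = np.zeros((dim, dim))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            diff = means[i] - means[j]
            maha = np.sqrt(diff @ Sw_inv @ diff)              # pairwise Mahalanobis distance
            w = erf(maha / (2 * np.sqrt(2))) / (2 * maha**2)  # assumed accuracy-based weight
            Sb += priors[i] * priors[j] * w * np.outer(diff, diff)

    # Same lightweight solution as classical LDA: a generalized
    # eigenproblem Sb v = lambda * Sw v, keeping the top eigenvectors.
    eigvals, eigvecs = eigh(Sb, Sw)
    order = np.argsort(eigvals)[::-1][:n_components]
    return X @ eigvecs[:, order]
```

Note that setting the weight to a constant recovers the pairwise decomposition of the classical between-class scatter, so the modification can be compared directly against the LDA baseline.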
