Abstract
Problem statement: To accept spoken word utterances from various speakers as input, recognize the corresponding spoken words, and initiate the action pertaining to each word. Approach: A novel Linear-Polynomial (LP) kernel function was used to construct support vector machines (SVMs) to classify the spoken word utterances. SVMs were constructed using various kernel functions, and the well-known one-versus-one approach was used together with a voting algorithm. Results: The empirical results were compared by implementing various kernel functions, such as the linear kernel, the polynomial kernel and the LP kernel, to construct different SVMs. Conclusion: The generalization performance of the one-versus-one approach for speech recognition was compared across kernel functions. The SVM using the LP kernel function classified the spoken utterances more efficiently than the SVMs using the other kernel functions, and its performance was outstanding in comparison.
Highlights
Over the last several years, speech recognition research has played a leading role in a growing number of applications
Many new techniques have emerged from decade to decade to increase the performance of speech recognition systems, including Modified Fuzzy-Hypersphere Neural Networks (MFHNN), Neural Networks (Doye et al., 2002; Solaimani, 2009), Hidden Markov Models (Ping et al., 2009), Bayesian Networks (Mansouri et al., 2011) and Dynamic Time Warping. The Hidden Markov Model (HMM) (Rabiner and Juang, 1993; Doye et al., 2002) remains among the most successful state-of-the-art tools in wide use, but speech recognition systems are still far from achieving high performance and accuracy
In linear methods, inner products (dot products) are used to generate the optimal separating hyperplane that classifies the two classes, whereas in the nonlinear approach the dot products are replaced by kernel functions (Burges, 1998; Scholkopf and Smola, 2002) to construct the optimal separating hyperplane
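The kernel substitution described in this highlight can be sketched in Python. The linear and polynomial kernels below are the standard definitions; the `lp_kernel` is an assumption, sketched here as the sum of a linear and a polynomial kernel (a sum of Mercer kernels is itself a valid kernel), since this excerpt does not give the paper's exact LP definition.

```python
import numpy as np

def linear_kernel(X, Y):
    """Standard linear kernel: the Gram matrix of pairwise dot products."""
    return X @ Y.T

def polynomial_kernel(X, Y, degree=2, coef0=1.0):
    """Standard polynomial kernel: (x . y + c)^d, elementwise over the Gram matrix."""
    return (X @ Y.T + coef0) ** degree

def lp_kernel(X, Y, degree=2, coef0=1.0):
    """Hypothetical Linear-Polynomial (LP) kernel, sketched as the sum of a
    linear and a polynomial kernel. The paper's actual combination rule may
    differ; a sum is used here only because it is guaranteed to remain a
    valid (positive semidefinite) kernel."""
    return linear_kernel(X, Y) + polynomial_kernel(X, Y, degree, coef0)
```

Because all three functions return full Gram matrices, any of them can be dropped into a kernel-based classifier in place of a plain dot product.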
Summary
Speech recognition research plays a leading role in a growing number of applications. HMMs require discriminative approaches to discriminate between speech samples. The Support Vector Machine (SVM) (Clarkson and Moreno, 1999; Scholkopf and Smola, 2002) has emerged as a machine learning technique for pattern classification. SVMs are based on a discriminative approach that discriminates between patterns by finding the global minimum. Both linear and nonlinear approaches are used to construct SVMs: in linear methods, inner products (dot products) are used to generate the optimal separating hyperplane that classifies the two classes, whereas in the nonlinear approach the dot products are replaced by kernel functions (Burges, 1998; Scholkopf and Smola, 2002) to construct the optimal separating hyperplane
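The one-versus-one SVM classification described in the abstract can be illustrated with scikit-learn, whose `SVC` trains a pairwise (one-versus-one) classifier for every pair of classes and combines them by voting. The custom kernel and the synthetic feature vectors below are assumptions standing in for the paper's LP kernel and spoken-word features (e.g. MFCC vectors); this is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def lp_kernel(X, Y, degree=2, coef0=1.0):
    # Hypothetical LP kernel: sum of a linear and a polynomial kernel
    # (the paper's exact definition is not given in this excerpt).
    gram = X @ Y.T
    return gram + (gram + coef0) ** degree

# Toy 13-dimensional feature vectors standing in for word utterances,
# one Gaussian cluster per "word" class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 13)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 20)

# SVC accepts a callable kernel returning the Gram matrix; with more than
# two classes it trains one SVM per class pair and predicts by voting.
clf = SVC(kernel=lp_kernel)
clf.fit(X, y)
```

Swapping `lp_kernel` for `"linear"` or `"poly"` reproduces the kind of kernel-by-kernel comparison the abstract reports.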