Abstract

The Support Vector Machine (SVM) is one of the state-of-the-art tools for linear and nonlinear pattern classification. One design issue in an SVM classifier is reducing the number of support vectors without compromising classification accuracy. In this paper, a novel technique known as Diminishing Learning (DL) is proposed for an SVM-based multi-class pattern recognition system. In this technique, a sequential classifier is proposed in which the classes that require stringent boundaries are tested one by one; once the tests for these classes fail, the stringency of the classifier is progressively relaxed. The effect of the sequence in which the classes are trained and tested on the recognition accuracy is also studied. The proposed technique is applied to an SVM-based isolated-digit recognition system and is evaluated on the speaker-dependent TI46 database of isolated digits. Two feature extraction techniques, one using LPC and the other using MFCC, are applied to the speech from this database, and the features are mapped using a SOFM. The mapped features are then used by the SVM classifier to evaluate the recognition accuracy with and without the DL technique. Based on this study, it is found that diminishing learning reduces the number of support vectors by 41.54% and 44.70% for the SVM classifier with LPC and MFCC feature inputs, respectively. Recognition accuracies of 97% and 98% are achieved for the SVM classifier with and without the DL technique, respectively, for LPC feature inputs. A recognition accuracy of 100% is achieved for the SVM with and without the DL technique for MFCC feature inputs. The study confirms the effect of the order in which the classes are trained and tested on the recognition accuracy; for the TI46 database, an increase of about 2% to 4% in recognition accuracy is obtained by choosing the optimum order.
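The sequential test described above can be sketched as follows. This is a minimal, hypothetical illustration only: the `dl_classify` function, the toy distance-based scorers, and the per-stage thresholds are all assumptions for illustration, not the paper's actual SVM decision functions. The key idea it shows is that classes needing stringent boundaries are tested first with a high acceptance threshold, and each later stage applies a more relaxed (diminished) threshold.

```python
# Hypothetical sketch of a Diminishing Learning (DL) style sequential test.
# Classes requiring stringent boundaries are tested first; each subsequent
# stage uses a lower acceptance threshold (diminishing stringency).

def dl_classify(x, scorers, thresholds, fallback):
    """scorers: ordered list of (label, score_fn); thresholds: decreasing list.

    Returns the first class whose confidence score passes its stage
    threshold; if every staged test fails, returns the fallback class.
    """
    for (label, score), thr in zip(scorers, thresholds):
        if score(x) >= thr:      # stringent test passes -> accept early
            return label
    return fallback              # all staged tests failed -> default class

# Toy one-dimensional "confidence scores" standing in for SVM outputs.
scorers = [
    ("zero", lambda x: 1.0 - abs(x - 0.0)),
    ("one",  lambda x: 1.0 - abs(x - 1.0)),
    ("two",  lambda x: 1.0 - abs(x - 2.0)),
]
thresholds = [0.9, 0.7, 0.5]     # stringency relaxes stage by stage

print(dl_classify(0.05, scorers, thresholds, "two"))  # -> zero
print(dl_classify(1.2,  scorers, thresholds, "two"))  # -> one
```

In a real system, each `score_fn` would be an SVM decision function for that class; because later stages are tested less stringently, their SVMs can be trained with looser boundaries, which is consistent with the reported reduction in support vectors.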
