Abstract

The support vector machine (SVM) is a supervised learning algorithm widely used for classification and nonlinear function approximation. For large data sets, training an SVM requires a considerable amount of memory and is time consuming, mainly because the algorithm must solve a constrained quadratic programming problem. In this paper, we propose three approaches that identify non-critical points in the training set and remove them from it in order to speed up SVM training. For this purpose, we use measurements based on principal component analysis, the Mahalanobis distance, and the Euclidean distance to eliminate non-critical training instances. We compare the proposed methods with each other and with the conventional SVM in terms of computational time and classification accuracy. Our experimental results show that the methods based on principal component analysis and the Mahalanobis distance reduce computational time without degrading classification accuracy, outperforming both the Euclidean distance based method and the conventional SVM.
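The abstract does not give the exact selection rule, but the Mahalanobis distance based idea can be sketched as follows: for each class, keep only the instances farthest (in Mahalanobis distance) from their class centroid, on the assumption that boundary-region points are the likely support vectors. The function name `mahalanobis_filter` and the percentile-style `keep_fraction` parameter are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def mahalanobis_filter(X, y, keep_fraction=0.5):
    """Per class, keep the keep_fraction of points farthest from the
    class centroid in Mahalanobis distance -- a heuristic proxy for
    critical (boundary-region) training instances.

    This is a hypothetical sketch of the idea described in the
    abstract, not the authors' exact algorithm.
    """
    keep_idx = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Xc = X[idx]
        mu = Xc.mean(axis=0)
        # Pseudo-inverse guards against a singular covariance matrix.
        cov = np.atleast_2d(np.cov(Xc, rowvar=False))
        inv_cov = np.linalg.pinv(cov)
        diff = Xc - mu
        # Squared Mahalanobis distance of each row to the centroid.
        d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        n_keep = max(1, int(np.ceil(keep_fraction * len(idx))))
        keep_idx.extend(idx[np.argsort(d2)[-n_keep:]].tolist())
    keep_idx = np.sort(np.array(keep_idx))
    return X[keep_idx], y[keep_idx]
```

The reduced set returned here would then be passed to an ordinary SVM trainer; because quadratic-programming cost grows rapidly with the number of instances, even a moderate reduction can shorten training noticeably.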
