Abstract

Support Vector Machines (SVMs) have become one of the most widely used machine learning tools for classification, pattern recognition, and object detection. The growing demand for fast SVM processing has driven hardware implementations of the SVM algorithm. Typically, the training phase is performed in software, and the SVM is then implemented in hardware using the parameters generated during training. Hence, training time and hardware overhead are two significant metrics to consider when improving SVM. In this paper, we propose an innovative SVM model called Highly Parallel SVM (HPSVM) for binary classification. HPSVM reduces both training time and hardware overhead while maintaining good classification accuracy. The key idea of HPSVM is the newly proposed Concurrent Gaussian Selection, which picks significant training data to learn an ensemble of linear classifiers that approximates the complicated classifier. By doing so, training time and hardware cost can be substantially reduced. The experimental results show that, compared to a previously proposed parallel SVM, the Ensemble of Exemplar-SVMs, HPSVM achieves a 3x reduction in training time and about a 6x reduction in hardware cost, while slightly improving classification accuracy.
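The core idea described above, approximating a complex decision boundary with a voted ensemble of linear classifiers, each trained on a small selected subset of the data, can be sketched as follows. This is a minimal illustration only: the perceptron update rule and the stratified random subset selection are stand-ins, not the paper's actual SVM solver or Concurrent Gaussian Selection procedure.

```python
import random

def train_linear(points, labels, epochs=100, lr=0.1):
    """Train one linear classifier with the perceptron update rule
    (a stand-in for a proper linear-SVM solver)."""
    w = [0.0] * len(points[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:          # misclassified: move the hyperplane
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def train_ensemble(points, labels, n_members=5, per_class=2, seed=0):
    """Learn an ensemble of linear classifiers, each on a small stratified
    random subset (a stand-in for Concurrent Gaussian Selection)."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y > 0]
    neg = [i for i, y in enumerate(labels) if y < 0]
    members = []
    for _ in range(n_members):
        idx = rng.sample(pos, per_class) + rng.sample(neg, per_class)
        members.append(train_linear([points[i] for i in idx],
                                    [labels[i] for i in idx]))
    return members

def predict(members, x):
    """Majority vote over the linear members; returns +1 or -1."""
    votes = sum(1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
                for w, b in members)
    return 1 if votes > 0 else -1

# Toy, well-separated binary data: +1 near (2, 2), -1 near the origin.
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.0), (2.0, 2.0), (1.9, 2.1), (2.2, 1.8)]
y = [-1, -1, -1, 1, 1, 1]
ensemble = train_ensemble(X, y)
```

Because each member sees only a few training points, the per-member training cost stays small and the members can be trained in parallel, which is the source of the training-time and hardware savings the abstract claims.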

