Abstract

The support vector machine (SVM) has become a popular classification tool, but one of its disadvantages is its large memory requirement and long computation time when dealing with large datasets. Parallel methods have been proposed to speed up SVM training. This paper proposes an improved cascade SVM training algorithm in which multiple SVM classifiers are applied. The support vectors are fed back in a crossed, alternating way to avoid the problem that the learning results depend on how the data samples are distributed across the different subsets. Experimental results on a UCI dataset show that this parallel SVM training algorithm is efficient and achieves higher classification accuracy than the standard cascade SVM algorithm.
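To make the cascade idea concrete, below is a minimal sketch of a two-level cascade SVM using scikit-learn's SVC: the training data are partitioned into subsets, one SVM is trained per subset, and the merged support vectors are used to train a final SVM. This illustrates only the standard cascade scheme; the paper's crossed, alternating feedback of support vectors is its own contribution and is not reproduced here, and the synthetic dataset merely stands in for a UCI benchmark.

```python
# Minimal sketch of a two-level cascade SVM (standard scheme, not the
# paper's crossed-feedback variant), assuming scikit-learn's SVC.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Hypothetical dataset standing in for a UCI benchmark.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def support_vector_indices(X_sub, y_sub):
    """Train an SVM on one subset and return the indices of its support vectors."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_sub, y_sub)
    return clf.support_  # local indices within X_sub

# Level 1: partition the training data and train one SVM per subset.
n_subsets = 4
perm = np.random.RandomState(0).permutation(len(X_train))
subset_indices = np.array_split(perm, n_subsets)

sv_global = []
for idx in subset_indices:
    local_sv = support_vector_indices(X_train[idx], y_train[idx])
    sv_global.append(idx[local_sv])  # map back to global indices

# Level 2: merge the support vectors from all subsets and retrain a final SVM.
merged = np.concatenate(sv_global)
final_clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train[merged], y_train[merged])

print("test accuracy:", final_clf.score(X_test, y_test))
```

Because non-support-vector points are discarded after the first level, the second-level problem is much smaller than the original one, which is the source of the cascade's speedup; the paper's crossed feedback additionally reduces the sensitivity of this step to how samples happen to be split across subsets.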
