Abstract

The One-class Support Vector Machine (OC-SVM) has recently been introduced to detect novel data and outliers. The key difficulty in training an OC-SVM is solving the associated constrained quadratic programming problem, whose optimization is both memory- and time-consuming. We present a new method to train the OC-SVM efficiently. Based on the random sampling lemma, the training dataset is first decomposed into subsets, and an OC-SVM is trained on each subset by Sequential Minimal Optimization (SMO). Lemmas for combining the support vectors and outliers of the sub-models are derived, and a new decision boundary is obtained by this decomposing-and-combining (DC) procedure. Experimental results demonstrate that the proposed method not only handles larger-scale datasets than standard SMO but also outperforms SMO in training time.
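As a rough illustration of the decompose-and-combine scheme described above, the sketch below partitions the data, trains an OC-SVM on each subset (scikit-learn's `OneClassSVM`, which uses libsvm's SMO-style solver), pools each sub-model's support vectors and detected outliers, and retrains a final OC-SVM on the pooled points. The function name `dc_ocsvm` and the parameter values are illustrative assumptions; the paper's combining lemmas are not reproduced here.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def dc_ocsvm(X, n_subsets=4, nu=0.1, gamma=0.5, seed=0):
    """Sketch of decompose-and-combine (DC) training for OC-SVM.

    NOTE: illustrative only -- follows the general DC idea from the
    abstract (decompose, train sub-models with SMO, merge support
    vectors and outliers), not the paper's exact lemmas.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    keep = []
    for part in np.array_split(idx, n_subsets):
        sub = X[part]
        m = OneClassSVM(nu=nu, gamma=gamma).fit(sub)
        sv = part[m.support_]             # support vectors of the sub-model
        out = part[m.predict(sub) == -1]  # points the sub-model flags as outliers
        keep.append(np.union1d(sv, out))
    # Pool the informative points and train the final model on them only.
    pooled = np.unique(np.concatenate(keep))
    final = OneClassSVM(nu=nu, gamma=gamma).fit(X[pooled])
    return final, pooled
```

Because each sub-model discards its interior (non-support-vector) points, the final quadratic program is solved on a much smaller pooled set, which is the source of the claimed memory and time savings.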
