Abstract

The one-class support vector machine (OCSVM) has demonstrated superior performance on one-class classification problems. However, its training is impractical for large-scale datasets owing to its high computational complexity with respect to the number of training instances. In this study, we propose an approximate training method based on the concept of the expected margin, which obtains a model identical to that produced by full training at a reduced computational burden. The proposed method selects prospective support vectors using multiple OCSVM models trained on small bootstrap samples of the original dataset; the final OCSVM model is then trained using only the selected instances. The proposed method is not only simple and straightforward but also considerably effective in improving the training efficiency of OCSVM. Preliminary experiments are conducted on large-scale benchmark datasets to examine the effectiveness of the proposed method in terms of approximation performance and computational efficiency.
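
The abstract outlines the workflow but not the exact expected-margin criterion. The sketch below illustrates that workflow in Python with scikit-learn under a simplified selection rule: each bootstrap model's support vectors are pooled as the prospective support vectors, and the final OCSVM is fit on the pooled set only. The function name approximate_ocsvm and all parameter defaults are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def approximate_ocsvm(X, n_models=10, sample_size=1000,
                      nu=0.1, gamma="scale", seed=0):
    """Bootstrap-based preselection of prospective support vectors
    for OCSVM (simplified sketch; the paper's expected-margin
    criterion is replaced by pooling bootstrap support vectors)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selected = set()
    for _ in range(n_models):
        # Small bootstrap sample of the original dataset
        idx = rng.choice(n, size=min(sample_size, n), replace=True)
        model = OneClassSVM(nu=nu, gamma=gamma).fit(X[idx])
        # Map this model's support vectors back to indices in X
        selected.update(idx[model.support_])
    sel = np.fromiter(selected, dtype=int)
    # Final model trained using only the selected instances
    final_model = OneClassSVM(nu=nu, gamma=gamma).fit(X[sel])
    return final_model, sel
```

Since only the pooled candidates, typically far fewer than the n original instances, enter the final fit, the kernel solver's superlinear training cost applies to a much smaller set, which is the intended source of the efficiency gain described above.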
