Abstract
Support vector data description (SVDD) is a well-known kernel-based one-class classification method with intrinsic regularization ability and robustness when only a small number of high-dimensional samples is available. However, the efficiency of SVDD is limited by its cubic training time complexity. To address this problem, this paper first investigates the effect of selecting a reduced subset as the training set of SVDD while preserving classification quality. To this end, a new heuristic sample condensation rule, termed HSC, is proposed to accurately identify the potential support vectors that characterize the classification boundary. HSC considers both the spatial distribution and the local density of the training samples, and focuses on selecting samples very close to the decision boundary. For the local density computation, we adopt the idea of K nearest neighbors (KNN) to estimate the density of samples in the neighborhood of the object under consideration. Finally, the condensed but informative subset obtained by HSC is used to train SVDD efficiently. Experimental results show that HSC-based SVDD substantially reduces the training-set size of conventional SVDD while achieving comparable classification quality, and it is competitive with other improved SVDD classifiers in terms of training and testing time.
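The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's HSC rule: it uses only the KNN-based local density cue (the mean distance to the k nearest neighbors as an inverse-density proxy for boundary proximity), omits the spatial-distribution component, and substitutes scikit-learn's `OneClassSVM` with an RBF kernel for SVDD; all function names and parameter values here are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

def knn_density_condense(X, k=5, keep_ratio=0.3):
    """Keep the fraction of samples with the lowest KNN-based local
    density (largest mean distance to their k nearest neighbors),
    a rough proxy for samples lying near the class boundary.
    Hypothetical helper; not the HSC rule from the paper."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    # Column 0 is each point's zero distance to itself; exclude it.
    mean_knn_dist = dists[:, 1:].mean(axis=1)
    n_keep = max(1, int(keep_ratio * len(X)))
    # Largest mean KNN distance = lowest local density = likely boundary.
    idx = np.argsort(mean_knn_dist)[-n_keep:]
    return X[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                 # toy one-class training data
X_sub = knn_density_condense(X, k=5, keep_ratio=0.3)

# For stationary kernels such as RBF, one-class SVM is equivalent to
# SVDD, so it serves here as a stand-in trained on the condensed subset.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_sub)
print(X_sub.shape)                             # condensed training set
```

The point of the sketch is the complexity argument: SVDD training scales cubically in the number of samples, so training on the condensed subset (here 30% of the data) is the source of the speed-up, provided the retained samples still cover the decision boundary.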