Abstract

In one-class classification, the training set contains many target samples and few or no outliers, so sample reduction methods designed for two-class or multi-class classification cannot be applied to this problem. This paper presents a novel method to reduce the size of the training set for one-class classification. The method preserves only the samples lying near the boundary of the data distribution, which are the ones that may become support vectors. For each sample, it sums, over the sample's k nearest neighbors, the cosine of the angle between the vector from each neighbor to the sample and the vector from the mean of the neighbors to the sample. The cosine sum is close to k (where k is the number of neighbors) when the sample lies near the boundary, and close to 0 when the sample lies in the interior of the data distribution. Experimental results demonstrate that the proposed method reduces the size of the training set, yielding faster training and fewer support vectors in one-class SVM models, without degrading performance.
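As a minimal sketch of the selection rule described above, the snippet below computes the per-sample cosine sum over the k nearest neighbors and trains a one-class SVM on the retained near-boundary samples. The helper name cosine_sums and the retention threshold (half of k) are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM


def cosine_sums(X, k=10):
    """For each sample x, sum cos(angle) between (x - n_i) and
    (x - mean(neighbors)) over its k nearest neighbors n_1..n_k.
    Sums near k indicate boundary samples; sums near 0 indicate
    interior samples."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the sample itself
    sums = np.empty(len(X))
    for i, x in enumerate(X):
        neigh = X[idx[i, 1:]]            # the k neighbors, excluding x
        v_mean = x - neigh.mean(axis=0)  # vector from neighbor mean to x
        v = x - neigh                    # vectors from each neighbor to x
        cos = (v @ v_mean) / (
            np.linalg.norm(v, axis=1) * np.linalg.norm(v_mean) + 1e-12
        )
        sums[i] = cos.sum()
    return sums


# Usage: keep samples whose cosine sum is large (near-boundary candidates),
# then fit the one-class SVM on the reduced training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))

k = 10
sums = cosine_sums(X, k=k)
boundary = X[sums > 0.5 * k]             # assumed threshold: half of k
model = OneClassSVM(nu=0.1, gamma="scale").fit(boundary)
print(f"kept {len(boundary)} of {len(X)} samples")
```

For a boundary point the neighbors lie mostly on one side, so each cosine is near 1 and the sum approaches k; for an interior point the neighbors surround the sample, the cosines largely cancel, and the sum stays near 0.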
