Abstract

Out-class sampling combined with in-class sampling is a popular strategy for training a Support Vector Machine (SVM) classifier on imbalanced data sets. However, it can introduce an inconsistency, because the sampling strategy and the SVM operate in different spaces. This paper presents a kernel-based over-sampling approach that overcomes this drawback. The method first applies both in-class and out-class sampling to generate minority-class instances in the feature space; the pre-images of these synthetic samples are then found using a distance relation between the input space and the feature space. Finally, the pre-images are appended to the original minority-class data set to train an SVM. Experiments on real data sets indicate that, compared with existing over-sampling techniques, the samples generated by the proposed strategy are of higher quality. As a result, the effectiveness of SVM classification on imbalanced data sets is improved.
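The core step the abstract describes, synthesizing a minority sample in the kernel feature space and then recovering its input-space pre-image, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes a Gaussian (RBF) kernel, forms a synthetic point as a convex combination of two minority images phi(x_i) and phi(x_j), and approximates the pre-image with the standard fixed-point iteration used for Gaussian kernels; all function names and parameters here are illustrative.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_smote_preimage(xi, xj, lam=0.5, gamma=1.0, n_iter=100):
    """Approximate the pre-image of the synthetic feature-space point
    z = (1 - lam) * phi(xi) + lam * phi(xj) for a Gaussian kernel.

    Uses a fixed-point iteration: the pre-image estimate is a kernel-weighted
    average of the expansion points, re-weighted until convergence.
    """
    pts = np.stack([xi, xj])
    coef = np.array([1.0 - lam, lam])       # expansion coefficients of z
    x = (1.0 - lam) * xi + lam * xj         # initialize with input-space interpolation
    for _ in range(n_iter):
        w = coef * np.array([rbf(x, p, gamma) for p in pts])
        s = w.sum()
        if s < 1e-12:                        # degenerate weights: stop iterating
            break
        x = (w[:, None] * pts).sum(axis=0) / s
    return x

# Example: interpolate halfway between two minority samples.
xi = np.array([0.0, 0.0])
xj = np.array([1.0, 0.0])
pre_image = kernel_smote_preimage(xi, xj, lam=0.5)
```

In a full pipeline, such pre-images would be appended to the minority class before fitting the SVM on the augmented training set, so that the classifier and the sampling step are consistent with the same kernel-induced geometry.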
