Abstract

The support vector machine (SVM) is known for its good generalization performance and wide applicability across fields. Despite this success, its learning efficiency degrades sharply as the number of training samples grows: the traditional SVM with standard optimization methods suffers from excessive memory requirements and slow training speed, especially on large-scale training sets. To address this issue, this paper draws inspiration from the fuzzy support vector machine (FSVM). Observing that samples contribute unequally to the decision plane, we propose an effective SVM sample reduction method based on fuzzy membership functions (FMFs). The method uses an FMF to compute the fuzzy membership of each training sample and then deletes training samples with low fuzzy membership. Specifically, we propose SVM sample reduction algorithms based on class center distance, kernel target alignment, centered kernel alignment, slack factor, entropy, and bilateral weighted FMFs, respectively. Comprehensive experiments on UCI and KEEL datasets demonstrate that the proposed algorithms outperform comparative methods in terms of accuracy, F-measure, and hinge loss.
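To make the idea concrete, below is a minimal sketch of the sample reduction scheme using the class-center-distance membership, one of the FMFs the abstract names. The membership formula s_i = 1 - d_i / (r + delta), with d_i the distance of a sample to its class center and r the class radius, follows the classic FSVM formulation; the `keep_ratio` threshold and function names are hypothetical illustrations, not taken from the paper.

```python
# Sketch: FMF-based sample reduction before SVM training, assuming a
# class-center-distance membership (Lin & Wang-style FSVM weighting).
import numpy as np
from sklearn.svm import SVC

def class_center_memberships(X, y, delta=1e-6):
    """Fuzzy membership per sample: closer to its class center -> higher."""
    s = np.empty(len(y))
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        # s_i = 1 - d_i / (r + delta), where r is the class radius
        s[idx] = 1.0 - d / (d.max() + delta)
    return s

def reduce_and_train(X, y, keep_ratio=0.8):
    """Drop the lowest-membership fraction of samples, then train a
    standard SVM on the reduced training set."""
    s = class_center_memberships(X, y)
    threshold = np.quantile(s, 1.0 - keep_ratio)
    mask = s >= threshold
    clf = SVC(kernel="rbf").fit(X[mask], y[mask])
    return clf, mask

# Example usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)
clf, kept = reduce_and_train(X, y, keep_ratio=0.8)
print(f"trained on {kept.sum()} of {len(y)} samples")
```

The other proposed variants (kernel target alignment, centered kernel alignment, slack factor, entropy, bilateral weighting) would slot in as alternative implementations of the membership function while the thresholding and training steps stay the same.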
