Abstract

The learning ability and generalization performance of the support vector machine (SVM) depend mainly on a reasonable choice of hyperparameters. When the training sample set is large and the parameter space is huge, the existing popular hyperparameter selection methods are impractical due to their high computational complexity. In this paper, a novel hyperparameter selection method for the SVM with a Gaussian kernel is proposed, which proceeds in two stages. The first stage chooses the kernel parameter so that a sufficiently large number of potential support vectors are retained in the training sample set. The second stage screens outliers out of the training sample set by assigning a special value to the penalty factor, and then trains the optimal penalty factor on the remaining, outlier-free training sample set. The whole hyperparameter selection process needs only two train-validate cycles, so the computational complexity of our method is low. Comparative experiments on 8 benchmark datasets show that our method achieves high classification accuracy with desirably short training time.
