Abstract

Twin-KSVC (Xu et al. in Cognit Comput 5(4):580–588, 2013) is a multi-class classifier that extends K-SVCR (Angulo et al. in Neurocomputing 55(12):57–77, 2003). Compared with K-SVCR, Twin-KSVC has a higher training speed. However, the classical Twin-KSVC has several drawbacks. (a) Each pair of sub-classifiers implements only empirical risk minimization, which reduces generalization performance. (b) Each pair of sub-classifiers must compute large-scale matrix inverses, which is intractable or even impossible in practical applications. (c) For large-scale datasets, the classical Twin-KSVC does not offer a suitable training algorithm. (d) In the nonlinear case, the classical Twin-KSVC has to construct additional primal problems based on an approximate kernel-generated surface. To address these drawbacks, we propose an improved version, called ITKSVC. First, we introduce regularization terms into each pair of sub-classifiers in Twin-KSVC, so that each pair implements structural risk minimization. Further, we derive the dual problems of each pair of sub-classifiers, which allows ITKSVC to avoid computing large-scale matrix inverses. In addition, to improve the training speed of each pair of sub-classifiers in ITKSVC on large-scale datasets, the successive overrelaxation (SOR) method is applied. Finally, the dual problems of each pair of sub-classifiers in ITKSVC can directly apply the kernel trick in the nonlinear case. Experimental results on several benchmark datasets indicate that, compared with Twin-KSVC, the proposed ITKSVC achieves better classification performance on large-scale datasets.
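The successive overrelaxation step mentioned above can be illustrated on a plain linear system. This is only a minimal sketch of the SOR iteration itself (the paper applies SOR to the box-constrained dual quadratic programs of ITKSVC, not to an unconstrained system); the function name `sor_solve` and its parameters are illustrative, not from the paper.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Solve A x = b by successive overrelaxation (SOR).

    A is assumed symmetric positive definite, for which SOR
    converges whenever the relaxation factor satisfies 0 < omega < 2.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep: use already-updated entries x[:i]
            # and previous-iterate entries x_old[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            # Relaxed update: blend the old value with the Gauss-Seidel value.
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small symmetric positive definite example.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor_solve(A, b)
```

In the SVM setting, the same componentwise sweep is combined with a projection onto the box constraints of the dual problem, which is what makes SOR attractive for large-scale training: each update touches a single dual variable and needs no matrix inversion.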
