Abstract

The generalization error bound of the support vector machine (SVM) depends on the ratio between the radius of the minimum enclosing ball of the training data and the margin. However, the conventional SVM only maximizes the margin and ignores the minimization of the radius, which restricts its performance when used for joint learning of a feature transformation and the SVM classifier. Although several approaches have been proposed to integrate radius and margin information, most of them either restrict the transformation matrix to be diagonal or are nonconvex and computationally expensive. In this paper, we first introduce a novel approximation of the radius of the minimum enclosing ball in feature space, and then propose a convex radius-margin-based SVM model for joint learning of the feature transformation and the SVM classifier, termed F-SVM. A generalized block coordinate descent method is adopted to solve the F-SVM model, where the feature transformation is updated via gradient descent and the classifier is updated using an existing SVM solver. By incorporating kernel principal component analysis, F-SVM is further extended to joint learning of a nonlinear transformation and the classifier. F-SVM can also be combined with deep convolutional networks to improve image classification performance. Experiments on the UCI, LFW, MNIST, CIFAR-10, CIFAR-100, and Caltech101 data sets demonstrate the effectiveness of F-SVM.
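Since the abstract only summarizes the alternating optimization at a high level, the following minimal sketch may help illustrate the block coordinate descent scheme it describes: alternate between refitting the classifier with an off-the-shelf SVM solver and taking a gradient step on the transformation matrix. The radius surrogate used here (the trace of the transformed data's scatter matrix) and the names `fit_fsvm`, `lam`, and `lr` are our own illustrative choices; the paper's exact convex approximation may differ.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_fsvm(X, y, lam=0.1, lr=1e-3, n_iters=20):
    """Sketch of radius-margin joint learning via block coordinate descent.

    Alternates between (1) solving a standard linear SVM in the transformed
    space and (2) a gradient step on the transformation matrix L.  The
    radius term is approximated by trace(L S L^T), with S the scatter
    matrix of the raw features -- an assumed surrogate, not necessarily
    the paper's.  y is expected in {-1, +1}.
    """
    n, d = X.shape
    L = np.eye(d)                      # start from the identity transform
    S = np.cov(X, rowvar=False)        # scatter matrix of the raw features
    svm = LinearSVC(C=1.0)
    for _ in range(n_iters):
        Z = X @ L.T                    # current transformed features
        svm.fit(Z, y)                  # classifier update: existing SVM solver
        w = svm.coef_.ravel()
        b = svm.intercept_[0]
        # Subgradient of the hinge loss w.r.t. L over margin-violating samples
        margins = y * (Z @ w + b)
        active = margins < 1
        grad_hinge = -np.outer(w, (y[active][:, None] * X[active]).sum(axis=0))
        # Gradient of the radius surrogate trace(L S L^T) w.r.t. L
        grad_radius = 2.0 * lam * L @ S
        L -= lr * (grad_hinge / n + grad_radius)
    return L, svm
```

In this scheme, shrinking trace(L S L^T) plays the role of the radius minimization the abstract refers to, while the hinge-loss term preserves the usual margin maximization; the paper's convex formulation and its exact update rules should be consulted for the definitive algorithm.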
