Abstract

One-class support vector machine (OCSVM) is an important tool in machine learning and has been widely used for one-class classification problems. The traditional OCSVM solves the primal problem through its dual, which is a quadratic programming problem. However, solving this quadratic program has cubic time complexity and quadratic storage complexity in the problem scale, so it is inefficient for training on large-scale problems. In this paper, we propose to train OCSVM directly in the primal space. Unfortunately, owing to the non-differentiability of the hinge loss used in OCSVM, the primal problem cannot be solved by fast first-order gradient-based optimization methods. Moreover, the hinge loss is unbounded, which makes OCSVM less robust to outliers: outliers can make the decision boundary deviate severely from the optimal hyperplane. To overcome these drawbacks, we propose a huberized truncated loss function, a nonconvex differentiable function, to improve the robustness of OCSVM. This loss is insensitive to outliers and serves as a substitute for the hinge loss in the traditional OCSVM. In contrast to the traditional OCSVM, the primal objective function of the robust OCSVM is differentiable. Considering the non-convexity of the optimization problem, we employ an accelerated proximal gradient algorithm to solve the robust OCSVM in the primal space. Numerical experiments on benchmark datasets and handwritten digit datasets show that the proposed method not only improves the robustness of OCSVM but also reduces the computational cost.
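The abstract does not give the exact form of the huberized truncated loss. As a minimal sketch, assuming the common construction that has all of the stated properties (differentiable, nonconvex, bounded), one can take the difference of two Huber-smoothed hinge functions; here $t = \rho - f(x)$ denotes the margin violation of a sample, $\delta > 0$ is a smoothing parameter, and $s > \delta$ is a truncation level (this notation is assumed for illustration, not taken from the paper):

\[
h_\delta(t) = \begin{cases} 0, & t \le 0, \\ \dfrac{t^2}{2\delta}, & 0 < t \le \delta, \\ t - \dfrac{\delta}{2}, & t > \delta, \end{cases}
\qquad
\ell_{s,\delta}(t) = h_\delta(t) - h_\delta(t - s).
\]

Because $h_\delta$ is convex and continuously differentiable, $\ell_{s,\delta}$ is differentiable and nonconvex, and $\ell_{s,\delta}(t) = s$ for every $t \ge s + \delta$, so a single outlier contributes at most $s$ to the objective, in contrast to the unbounded hinge loss $\max(0, t)$.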
