Abstract

One effective technique that has recently been considered for solving classification problems is parametric $\nu$-support vector regression. This method provides a concurrent learning framework for both margin determination and function approximation, and it leads to a convex quadratic programming problem. In this paper, we introduce a new idea that converts this problem into an unconstrained convex problem. Moreover, we propose an extension of Newton's method for solving the unconstrained convex problem. We compare the accuracy and efficiency of our method with support vector machines and parametric $\nu$-support vector regression methods. Experimental results on several UCI benchmark data sets indicate the high efficiency and accuracy of this method.
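The abstract does not spell out the reformulation, so the following is only a minimal sketch of the general idea it describes: replace a constrained convex QP with an unconstrained, piecewise-quadratic convex surrogate and minimize it with a generalized Newton iteration. The squared-hinge objective, the function name `newton_squared_hinge`, and the toy data below are illustrative assumptions, not the parametric $\nu$-support vector regression formulation from the paper.

```python
import numpy as np

def newton_squared_hinge(X, y, C=1.0, tol=1e-6, max_iter=50):
    """Generalized Newton iteration for an unconstrained convex SVM surrogate:
        f(w) = 0.5 * ||w||^2 + (C/2) * sum_i max(0, 1 - y_i * x_i^T w)^2.
    Illustrative sketch only; not the objective used in the paper."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iter):
        margin = 1.0 - y * (X @ w)              # residuals r_i = 1 - y_i x_i^T w
        grad = w - C * X.T @ (y * np.maximum(margin, 0.0))
        if np.linalg.norm(grad) < tol:
            break
        Xa = X[margin > 0]                      # margin-violating points only
        H = np.eye(d) + C * Xa.T @ Xa           # generalized Hessian of f
        w = w - np.linalg.solve(H, grad)        # full Newton step
    return w

# toy usage on two Gaussian blobs (hypothetical data, bias absorbed as a constant feature)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.concatenate([-np.ones(50), np.ones(50)])
w = newton_squared_hinge(X, y, C=10.0)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Because the surrogate is piecewise quadratic and strongly convex, each Newton step solves a small d-by-d linear system and the iteration typically terminates in a handful of steps, which is the practical appeal of this family of unconstrained reformulations.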
