Abstract

This work describes a framework for solving the support vector machine with kernel (SVMK) problem. It has recently been proved that using a non-smooth loss function for supervised learning problems gives more efficient results [1], which motivates solving the SVMK problem with the hinge loss function. However, the hinge loss is non-differentiable, so standard optimization methods cannot be applied directly to minimize the empirical risk. To overcome this difficulty, a special smoothing technique for the hinge loss is proposed. The resulting smooth problem, combined with Tikhonov regularization, is then solved using a stochastic gradient descent method. Finally, numerical experiments on academic and real-life datasets are presented to demonstrate the efficiency of the proposed approach.
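The abstract does not specify the paper's exact smoothing technique, so the sketch below is only illustrative. It assumes a standard quadratically smoothed (Huberized) hinge loss with smoothing parameter mu, an RBF kernel, and plain SGD on the Tikhonov-regularized objective (1/n) * sum_i L_mu(y_i f(x_i)) + lam * ||f||^2, with f parameterized in kernel form as f(x) = sum_j alpha_j k(x_j, x). All names and parameter values (mu, lam, gamma, lr) are assumptions for illustration, not the authors' method.

```python
import numpy as np

def smoothed_hinge(z, mu=0.5):
    """Quadratically smoothed hinge loss of the margin z = y * f(x).
    One standard smoothing; the paper's own technique may differ."""
    return np.where(z >= 1.0, 0.0,
           np.where(z <= 1.0 - mu, 1.0 - z - mu / 2.0,
                    (1.0 - z) ** 2 / (2.0 * mu)))

def smoothed_hinge_grad(z, mu=0.5):
    """Derivative of the smoothed hinge with respect to z."""
    return np.where(z >= 1.0, 0.0,
           np.where(z <= 1.0 - mu, -1.0, (z - 1.0) / mu))

def rbf_kernel(X1, X2, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sgd_svmk(X, y, lam=1e-2, mu=0.5, gamma=1.0, epochs=20, lr=0.1, seed=0):
    """SGD on the Tikhonov-regularized, smoothed-hinge SVMK objective,
    with f(x) = sum_j alpha_j k(x_j, x) and ||f||^2 = alpha^T K alpha."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)          # precomputed Gram matrix
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(n):
            f_i = K[i] @ alpha           # prediction f(x_i)
            z = y[i] * f_i               # margin
            g = smoothed_hinge_grad(z, mu) * y[i]   # d loss / d f(x_i)
            # loss gradient in alpha plus Tikhonov term 2*lam*K@alpha
            alpha -= lr * (g * K[i] + 2.0 * lam * (K @ alpha))
    return alpha
```

Because the smoothed loss is differentiable everywhere, each SGD step uses an ordinary gradient rather than a subgradient, which is the point of the smoothing step described in the abstract. For a quick check one can train on labels y in {-1, +1} and predict with sign(rbf_kernel(X_test, X_train, gamma) @ alpha).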
