Abstract
Support vector machines (SVMs) are a family of data analysis algorithms based on convex quadratic programming. We focus on their use for classification: in that setting, SVM algorithms work by maximizing the margin of a classifying hyperplane in a feature space. The feature space is handled by means of kernels when the problem is formulated in dual form. We study random sampling techniques that have been applied successfully to similar problems. The main contribution is a randomized algorithm for training SVMs for which we formally prove an upper bound on the expected running time that is quasilinear in the number of data points. To our knowledge, this is the first algorithm for training SVMs in dual formulation and with kernels for which such a quasilinear time bound has been formally proved.
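For reference, the dual formulation alluded to above is, in its standard soft-margin textbook form (this is the usual convex quadratic program, not a statement of the paper's randomized algorithm), the problem

\[
\max_{\alpha \in \mathbb{R}^n} \; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{subject to} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \;\; 0 \le \alpha_i \le C,
\]

where the \(y_i \in \{-1, +1\}\) are class labels, \(K\) is the kernel function, and \(C\) is the regularization parameter (standard notation, assumed here rather than taken from the paper). Because the data enter only through the kernel values \(K(x_i, x_j)\), the feature space never needs to be represented explicitly.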