Abstract

Support Vector Machines (SVMs) are a family of data-analysis algorithms based on convex quadratic programming. We derive randomized algorithms for training SVMs, based on a variation of Random Sampling Techniques, which have been successfully applied to similar problems. We formally prove an upper bound on the expected running time that is quasilinear in the number of data points and polynomial in the other parameters, i.e., the number of attributes and the inverse of a chosen soft margin parameter. [This is the combined journal version of the conference papers: Balcazar, J.L. et al., in Proceedings of the 12th International Conference on Algorithmic Learning Theory (ALT'01), pp. 119–134, 2001; Balcazar, J.L. et al., in Proceedings of the First IEEE International Conference on Data Mining (ICDM'01), pp. 43–50, 2001; and Balcazar, J.L. et al., in Proceedings of the SIAM Workshop on Discrete Mathematics and Data Mining, pp. 19–29, 2002.]
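To illustrate the flavor of Random Sampling Techniques, the sketch below shows a generic Clarkson-style sample-and-reweight loop on a toy linearly separable dataset. This is not the authors' algorithm: the sub-solver here is a plain perceptron rather than a QP solver, and all data, parameters, and function names (`make_data`, `sample_and_reweight`, the sample size `r`, the round limit) are illustrative assumptions. The key idea it demonstrates is shared with the random-sampling framework: solve the problem on a small weighted sample, find the constraints (points) the solution violates, double their weights so they are more likely to be sampled next round, and repeat until no violators remain.

```python
import random

def make_data(n, seed=0):
    # Hypothetical toy data: labels given by sign(x0 + x1 - 1),
    # with a margin band of width 0.2 removed to keep it separable.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = (rng.uniform(-2, 2), rng.uniform(-2, 2))
        s = x[0] + x[1] - 1.0
        if abs(s) < 0.2:
            continue
        data.append((x, 1 if s > 0 else -1))
    return data

def perceptron(sample, epochs=200):
    # Sub-solver on the small sample (stand-in for the QP solver):
    # perceptron with bias; converges on separable data.
    w = [0.0, 0.0, 0.0]  # w0, w1, bias
    for _ in range(epochs):
        updated = False
        for x, y in sample:
            if y * (w[0] * x[0] + w[1] * x[1] + w[2]) <= 0:
                w[0] += y * x[0]
                w[1] += y * x[1]
                w[2] += y
                updated = True
        if not updated:
            break
    return w

def violators(data, w):
    # Points misclassified (or on the boundary) by hyperplane w.
    return [p for p in data
            if p[1] * (w[0] * p[0][0] + w[1] * p[0][1] + w[2]) <= 0]

def sample_and_reweight(data, r=40, max_rounds=60, seed=1):
    # Clarkson-style loop: sample r points proportionally to weight,
    # solve on the sample, then double the weight of every violator.
    rng = random.Random(seed)
    weight = [1 for _ in data]
    w = [0.0, 0.0, 0.0]
    for _ in range(max_rounds):
        idx = rng.choices(range(len(data)), weights=weight, k=r)
        w = perceptron([data[i] for i in idx])
        bad = violators(data, w)
        if not bad:
            return w  # sample solution is globally feasible
        for p in bad:
            weight[data.index(p)] *= 2
    return w
```

The doubling step is what makes the expected number of rounds small: "hard" points quickly dominate the sampling distribution, so the sub-solver sees them and the sample solution becomes globally feasible, which is the mechanism behind quasilinear bounds in this style of analysis.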

