Abstract

The well-known MinOver algorithm is a slight modification of the perceptron algorithm and provides the maximum-margin classifier, without a bias, for linearly separable two-class classification problems. DoubleMinOver, an extension of MinOver that includes a bias, is introduced. An O(1/t) convergence is shown, where t is the number of learning steps. The computational effort per step increases only linearly with the number of patterns. In its kernel formulation, selected training patterns have to be stored. A drawback of MinOver and DoubleMinOver is that this set of patterns does not consist of support vectors only. DoubleMaxMinOver, an extension of DoubleMinOver, overcomes this drawback by selectively forgetting all non-support vectors after a finite number of training steps. It is shown how this iterative procedure, which remains very similar to the perceptron algorithm, can be extended to classification with soft margins and used for training least squares support vector machines (SVMs). On benchmarks, the SoftDoubleMaxMinOver algorithm achieves the same performance as standard SVM software.
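To make the perceptron connection concrete, below is a minimal sketch of the basic MinOver iteration (without a bias, as in the original algorithm): at every step the training pattern with the smallest margin is located and its perceptron-style update is applied. The function name and parameters are illustrative, not from the paper, and details such as stopping criteria and normalization are omitted.

```python
import numpy as np

def minover(X, y, n_steps=1000):
    """Sketch of the MinOver iteration for linearly separable data.

    X : (n_patterns, n_features) array of training patterns
    y : (n_patterns,) array of labels in {-1, +1}
    At each step the pattern with minimal margin y_i * (w . x_i)
    is reinforced; the direction of w approaches the maximum-margin
    solution (no bias term here).
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        margins = y * (X @ w)
        i = np.argmin(margins)   # pattern with minimal overlap ("MinOver")
        w += y[i] * X[i]         # perceptron-style update on that pattern
    return w
```

The cost of each step is dominated by computing the margins, so it grows only linearly with the number of patterns, matching the complexity stated in the abstract.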
