Abstract

We describe and analyze a simple and effective two-step online boosting algorithm that lets us use highly effective gradient descent-based methods developed for online SVM training without the need to fine-tune kernel parameters, and we demonstrate its efficiency in several experiments. Our method is similar to AdaBoost in that it trains additional classifiers according to the weights provided by previously trained classifiers, but unlike AdaBoost, we use the hinge loss rather than the exponential loss and adapt the algorithm to the online setting, allowing for a varying number of classifiers. We show that our theoretical convergence bounds are similar to those of earlier algorithms, while allowing for greater flexibility. Our approach can also easily incorporate additional nonlinearity in the form of Mercer kernels, although our experiments show that this is not necessary in most situations. Pre-training the additional classifiers in our algorithm allows for greater accuracy while reducing the running times associated with the usual kernel-based approaches. We compare our algorithm to other online training algorithms and show that, in most cases with unknown kernel parameters, it outperforms them in both runtime and convergence speed.
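For reference, the regularized hinge-loss objective that underlies both training phases (the SVM primal referred to in the highlights as (3)) has the standard form; the notation below is ours and is not taken verbatim from the paper:

min_w  (λ/2)‖w‖² + (1/m) Σ_{i=1..m} max(0, 1 − y_i ⟨w, x_i⟩),

whereas AdaBoost minimizes the exponential loss Σ_i exp(−y_i f(x_i)). Replacing the latter with the former is what allows Pegasos-style sub-gradient steps, sketched further below, to be applied directly in both phases.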

Highlights

  • We exploit the similarity between the mathematical descriptions of boosting and support vector machines to create a two-stage online boosting algorithm with a variable number of classifiers that can exhibit greater flexibility than a kernel-based SVM while having smaller computational costs

  • Each phase of our training algorithm solves the primal formulation of the SVM (3) using the Pegasos algorithm, which has a convergence rate of O(R²/(λε)), where R is a bound on the norm of the input vectors, λ is the regularization parameter, and ε is the desired accuracy; a minimal sketch of the Pegasos update appears after this list

  • It is easy to see that, for the second phase, R = T, with T being the number of added classifiers and each classifier producing an output h_i ∈ {−1, 1}, so the convergence rate decreases slowly as additional classifiers are added

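The following is a minimal Python sketch of the Pegasos update referenced in the highlights; the function name and structure are our own illustration under the assumption of plain NumPy vectors, not the authors' implementation.

```python
import numpy as np

def pegasos_step(w, x, y, lam, t):
    """One Pegasos sub-gradient step on the hinge-loss SVM primal.

    w   : current weight vector
    x   : feature vector of the incoming example
    y   : label in {-1, +1}
    lam : regularization parameter lambda
    t   : 1-based iteration counter
    """
    eta = 1.0 / (lam * t)            # standard Pegasos step size
    margin = y * np.dot(w, x)
    w = (1.0 - eta * lam) * w        # shrinkage from the L2 regularizer
    if margin < 1.0:                 # hinge loss is active for this example
        w = w + eta * y * x
    # optional projection onto the ball of radius 1/sqrt(lambda)
    norm = np.linalg.norm(w)
    radius = 1.0 / np.sqrt(lam)
    if norm > radius:
        w *= radius / norm
    return w
```

Running this step once per incoming example is what yields the O(R²/(λε)) convergence bound quoted above, with R bounding the norm of the vectors passed in as x.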

Summary

Introduction

There has been an increasing amount of interest in the area of online learning methods. Such methods are useful for settings where a limited number of samples is fed sequentially into a training system, and for systems where the amount of training data is too large to fit into memory. Several methods for online boosting and online support vector machine (SVM) training have been proposed. Online boosting algorithms, such as [1] or [2], are usually limited to a fixed number of classifiers, while online SVM training methods employing Mercer kernels [3, 4] may not converge well when an inappropriate kernel is chosen. We exploit the similarity between the mathematical descriptions of boosting and support vector machines to create a two-stage online boosting algorithm with a variable number of classifiers that can exhibit greater flexibility than a kernel-based SVM while having smaller computational costs, as sketched below.
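The sketch below illustrates how the two training stages could fit together in code. The class name OnlineBooster, the interface, and all implementation details are our own assumptions made for illustration; the paper's actual algorithm is only summarized here, not reproduced.

```python
import numpy as np

class OnlineBooster:
    """Illustrative two-stage online booster (names and details are assumptions).

    Stage 1: each weak classifier is a linear SVM updated online with a
             Pegasos sub-gradient step on the hinge-loss primal.
    Stage 2: a second Pegasos-trained weight vector combines the weak
             classifiers' {-1, +1} outputs, mirroring the same SVM primal.
    """

    def __init__(self, dim, lam=0.1):
        self.dim = dim
        self.lam = lam
        self.weak = []               # weight vectors of the weak classifiers
        self.alpha = np.zeros(0)     # combining weights (second phase)
        self.t = 0                   # shared iteration counter

    def _pegasos_step(self, w, x, y):
        # one sub-gradient step on the hinge-loss SVM primal (Pegasos)
        eta = 1.0 / (self.lam * self.t)
        w = (1.0 - eta * self.lam) * w
        if y * np.dot(w, x) < 1.0:
            w = w + eta * y * x
        return w

    def add_classifier(self):
        # grow the ensemble: a fresh weak learner and a zero combining weight
        self.weak.append(np.zeros(self.dim))
        self.alpha = np.append(self.alpha, 0.0)

    def _outputs(self, x):
        # h_i(x) in {-1, +1} for every weak classifier
        return np.array([1.0 if np.dot(w, x) >= 0 else -1.0 for w in self.weak])

    def update(self, x, y):
        self.t += 1
        # phase 1: one Pegasos step on every weak classifier
        self.weak = [self._pegasos_step(w, x, y) for w in self.weak]
        # phase 2: one Pegasos step on the combining weights, using the
        # weak classifiers' outputs as the input vector (so R grows with T)
        h = self._outputs(x)
        self.alpha = self._pegasos_step(self.alpha, h, y)

    def predict(self, x):
        return np.sign(np.dot(self.alpha, self._outputs(x)))
```

Because the ensemble is stored as a plain list, classifiers can be added at any point in the stream (via add_classifier), which is the "variable number of classifiers" flexibility contrasted above with fixed-size online boosting.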
