Abstract

Helmbold and Schapire gave an on-line prediction algorithm that, when given an unpruned decision tree, produces predictions not much worse than the predictions made by the best pruning of the given decision tree. In this paper, we give two new on-line algorithms. The first algorithm is based on the observation that finding the best pruning can be solved efficiently by dynamic programming in the "batch" setting, where all the data to be predicted are given in advance. This algorithm works well for a wide class of loss functions, whereas the one given by Helmbold and Schapire is only described for the absolute loss function. Moreover, the algorithm given in this paper is so simple and general that it could be applied to many other on-line optimization problems solved by dynamic programming. We also give a second algorithm that is competitive not only with the best pruning but also with the best prediction values associated with the nodes of the decision tree. In this setting, a greatly simplified algorithm is given for the absolute loss function. It can easily be generalized to the case where, instead of using decision trees, data are classified in some arbitrarily fixed manner.
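The batch observation underlying the first algorithm is the standard recurrence for a minimum-loss pruning: at each node, compare the loss of predicting there as a leaf against the summed costs of the optimal prunings of its children. The sketch below illustrates that batch computation only, not the paper's on-line algorithm; the tree representation (`Node`, `predict`, `split`) and the pluggable loss function are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the batch dynamic program for the best pruning of a
# decision tree, assuming all labeled examples are available in advance.
# The Node layout and the loss signature are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple


@dataclass
class Node:
    predict: float                      # prediction value attached to this node
    split: Optional[Callable] = None    # maps an example x to a child index
    children: List["Node"] = field(default_factory=list)


def best_pruning(node: Node,
                 data: List[Tuple[object, float]],
                 loss: Callable[[float, float], float]) -> Tuple[float, Node]:
    """Return (minimal total loss, pruned subtree) on the examples reaching `node`."""
    # Cost of pruning at this node: treat it as a leaf and predict node.predict.
    leaf_cost = sum(loss(node.predict, y) for _, y in data)
    if not node.children:
        return leaf_cost, Node(node.predict)

    # Otherwise route each example to its child and recurse.
    routed: List[List[Tuple[object, float]]] = [[] for _ in node.children]
    for x, y in data:
        routed[node.split(x)].append((x, y))
    results = [best_pruning(c, d, loss) for c, d in zip(node.children, routed)]
    subtree_cost = sum(cost for cost, _ in results)

    # Keep whichever choice (prune here vs. keep the subtree) has smaller loss.
    if leaf_cost <= subtree_cost:
        return leaf_cost, Node(node.predict)
    return subtree_cost, Node(node.predict, node.split, [t for _, t in results])


# For the absolute loss, the case treated by Helmbold and Schapire:
# absolute_loss = lambda p, y: abs(p - y)
```

Because each example is routed to exactly one child at every level, the recurrence visits each (node, example) pair at most once, which is the efficiency the batch setting exploits.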


