Abstract

Boosting is a learning scheme that combines weak learners to produce a strong composite learner, with the underlying intuition that one can obtain an accurate learner by combining "rough" ones. This paper develops a new boosting strategy, called rescaled boosting (RBoosting), to accelerate the numerical convergence rate and, consequently, improve the learning performance of the original boosting. Our studies show that RBoosting possesses an almost optimal numerical convergence rate in the sense that, up to a logarithmic factor, it reaches the minimax nonlinear approximation rate. We then apply RBoosting to classification problems and derive the corresponding statistical consistency and tight generalization error estimates. A series of theoretical and experimental results shows that RBoosting outperforms the original boosting in terms of generalization.
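
To make the rescaling idea concrete, the sketch below implements a rescaled variant of L2-boosting with regression stumps: at each round the current ensemble is shrunk by a factor 1 - alpha_k before the new weak learner is fitted to the residuals and added with a line-searched weight. This is an illustrative reconstruction under stated assumptions, not the paper's exact algorithm; the shrinkage schedule alpha_k = 2/(k+2), the choice of stumps as weak learners, and the helper name rescaled_l2_boost are assumptions made for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def rescaled_l2_boost(X, y, n_rounds=100):
    """Sketch of rescaled L2-boosting (hypothetical schedule alpha_k = 2/(k+2))."""
    F = np.zeros(len(y))          # current composite predictor on the training set
    learners, weights = [], []
    for k in range(n_rounds):
        alpha = 2.0 / (k + 2.0)   # rescaling factor (assumed relaxed-greedy schedule)
        residual = y - (1.0 - alpha) * F          # residual of the rescaled ensemble
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        h = stump.predict(X)
        # Line search for the weight of the new weak learner.
        beta = float(h @ residual) / max(float(h @ h), 1e-12)
        # Rescale the existing ensemble, then add the new weighted learner.
        F = (1.0 - alpha) * F + beta * h
        weights = [(1.0 - alpha) * w for w in weights] + [beta]
        learners.append(stump)
    return learners, weights

def predict(learners, weights, X):
    """Evaluate the rescaled ensemble on new data."""
    return sum(w * l.predict(X) for l, w in zip(learners, weights))
```

Unlike plain L2-boosting, which only appends new terms, the multiplicative factor 1 - alpha_k shrinks all previously accumulated weights every round; this is the mechanism the abstract credits with accelerating numerical convergence toward the minimax nonlinear approximation rate.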
