Abstract

AdaBoost is a widely used machine learning algorithm that has been applied successfully in many fields. However, its performance is sensitive to the number of weak learners in the ensemble: too few weak learners underfit the training data-set, while too many overfit it, and both cases result in poor generalisation of the classifier on test data. The standard way to determine the number of weak learners that is optimal for a particular data-set is cross-validation, but this is computationally expensive. In this paper, we propose an efficient method that determines the number of weak learners for AdaBoost without requiring cross-validation or a separate validation set. We evaluate our method on eight publicly available data-sets to demonstrate its efficacy.
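For context, the sketch below illustrates the cross-validation baseline the abstract describes: refitting AdaBoost for each candidate ensemble size and scoring every fit with k-fold cross-validation. It is not the paper's proposed method; the dataset, candidate sizes, and fold count are illustrative assumptions, using the standard scikit-learn API.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Illustrative data-set; the paper evaluates on eight public data-sets.
X, y = load_breast_cancer(return_X_y=True)

# Candidate numbers of weak learners to compare (assumed grid).
candidate_sizes = [10, 25, 50, 100, 200, 400]
cv_scores = {}
for n in candidate_sizes:
    clf = AdaBoostClassifier(n_estimators=n, random_state=0)
    # Each call refits the full ensemble once per fold -- this repeated
    # training is the cost the paper's method aims to avoid.
    cv_scores[n] = cross_val_score(clf, X, y, cv=5).mean()

best_n = max(cv_scores, key=cv_scores.get)
print(f"Best number of weak learners: {best_n} "
      f"(mean CV accuracy {cv_scores[best_n]:.3f})")
```

With 5 folds and six candidate sizes, this baseline trains thirty ensembles to select one, which is why a method that picks the ensemble size without cross-validation or a held-out validation set is attractive.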
