Abstract
Ensemble learners and deep neural networks are state-of-the-art approaches for classification tasks. However, deep networks have complex structures, need large amounts of training data, and require considerable time to converge. In contrast, ensemble learners (especially AdaBoost) are fast to train, work with both small and large datasets, and rest on a strong mathematical foundation. In this paper, we develop a new orthogonal version of AdaBoost, termed ORBoost, which desensitizes performance to noisy samples and requires only a small number of weak learners. In ORBoost, after the sample distribution for each learner is reweighted, the Gram-Schmidt rule updates those weights so that the new sample distribution is orthogonal to all former distributions. In standard AdaBoost, by contrast, there is no orthogonality constraint even between two successive weak learners, so the sample distributions of different learners remain similar. To assess the performance of ORBoost, 16 UCI-Repository datasets along with six big datasets are deployed. ORBoost is compared with standard AdaBoost, LogitBoost, and AveBoost-II on the selected datasets. The results show that ORBoost significantly outperforms these counterparts in terms of accuracy, robustness, number of weak learners required, and generalization on most of the datasets.
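As a rough illustration of the orthogonalization idea described above (the exact ORBoost update is defined in the paper, not here), the Python sketch below projects a candidate sample-weight vector against the previously used distributions with classic Gram-Schmidt and then renormalizes it into a valid distribution. The function name, the clipping step, and the renormalization are illustrative assumptions, not the authors' algorithm.

import numpy as np

def orthogonalize_weights(w_new, prev_dists, eps=1e-12):
    """Illustrative sketch (not the published ORBoost update):
    make w_new orthogonal to every previously used sample distribution
    via Gram-Schmidt, then clip and renormalize it to a distribution."""
    w = np.asarray(w_new, dtype=float).copy()
    for d in prev_dists:
        d = np.asarray(d, dtype=float)
        denom = d @ d
        if denom > eps:
            w -= (w @ d) / denom * d   # Gram-Schmidt projection step
    w = np.clip(w, 0.0, None)          # a distribution cannot carry negative mass (assumption)
    total = w.sum()
    return w / total if total > eps else np.full_like(w, 1.0 / w.size)

# Hypothetical usage inside a boosting loop: after AdaBoost's reweighting
# produces w_t, replace it with
#   w_t = orthogonalize_weights(w_t, previous_distributions)
# before training the t-th weak learner.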