Abstract

This paper presents an in-depth investigation of the AdaBoost algorithm, which boosts the performance of any given learning algorithm. Within AdaBoost, weak learners are the essential building blocks, and because AdaBoost requires them to be trained on weighted examples, two types of weak learner are designed: an Artificial Neural Network weak learner and a naive Bayes weak learner. The results show that AdaBoost with naive Bayes weak learners is superior to AdaBoost with Artificial Neural Network weak learners, and that it matches the generalisation ability of a Support Vector Machine.
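
To make the weighted-training requirement concrete, the sketch below shows how a discrete AdaBoost loop drives a naive Bayes weak learner through example weights, assuming binary labels in {-1, +1}. This is a minimal illustration, not the paper's implementation: the function name, round count, and use of scikit-learn's GaussianNB (whose fit() accepts a sample_weight argument) are all assumptions for demonstration.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def adaboost_naive_bayes(X, y, n_rounds=50):
        """Illustrative AdaBoost with naive Bayes weak learners; y in {-1, +1}."""
        n = len(y)
        w = np.full(n, 1.0 / n)            # start with uniform example weights
        learners, alphas = [], []
        for _ in range(n_rounds):
            # The weak learner must train on weighted examples; GaussianNB
            # supports this via the sample_weight argument to fit().
            h = GaussianNB().fit(X, y, sample_weight=w)
            pred = h.predict(X)
            err = np.sum(w * (pred != y))  # weighted training error
            if err >= 0.5:                 # no better than chance: stop boosting
                break
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
            # Up-weight misclassified examples, then renormalise.
            w *= np.exp(-alpha * y * pred)
            w /= w.sum()
            learners.append(h)
            alphas.append(alpha)
        def predict(X_new):
            votes = sum(a * h.predict(X_new) for a, h in zip(alphas, learners))
            return np.sign(votes)
        return predict

Any learner that accepts per-example weights (such as the paper's Artificial Neural Network weak learner, which would instead weight its loss function) can be substituted for GaussianNB in the same loop.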
