Abstract
Despite considerable success in knowledge discovery, conventional machine learning algorithms may fail to achieve satisfactory performance when dealing with imbalanced, complex, noisy, and high-dimensional data. In this context, it is essential to consider how to efficiently build an adequate knowledge-discovery and mining model. Ensemble learning aims to consolidate classical machine learning (ML) algorithms, data modeling, and data mining into a unified framework. Text categorization is a critical application that uses such a unified ensemble learning framework to detect a new article's class. This paper develops a two-layer stacking ensemble model that combines different ML algorithms. Since a stacking model consists of stacked layers, each built with multiple ML algorithms, we constructed the first layer of our stacking model with three classifiers: Multinomial Naïve Bayes (MNB), logistic regression (LR), and k-Nearest Neighbors (k-NN); the second layer applies a random forest classification algorithm. The proposed stacking ensemble model is compared with the classical ML algorithms (MNB, LR, and k-NN) in terms of accuracy and error. The results show that the stacking model outperforms the MNB and k-NN algorithms, whose accuracies reach 89.72% and 89.75%, respectively, while LR achieves an accuracy of 91.5%, which is close to that of the proposed model at 91.66%.
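The two-layer architecture described above can be sketched as follows. This is a minimal illustration, assuming a scikit-learn implementation (the abstract does not name a library) and a tiny in-memory toy corpus in place of the paper's dataset; layer one holds the MNB, LR, and k-NN base classifiers, and layer two uses a random forest as the meta-learner trained on their cross-validated predictions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier

# Toy corpus for illustration only; the paper's actual dataset and
# preprocessing are not specified in the abstract.
texts = [
    "the team won the football match",
    "a thrilling basketball game last night",
    "the striker scored two goals",
    "fans cheered at the stadium",
    "new laptop features a faster processor",
    "the software update fixes security bugs",
    "cloud servers handle the workload",
    "the app syncs data across devices",
]
labels = ["sport", "sport", "sport", "sport", "tech", "tech", "tech", "tech"]

# Layer 1: the three base classifiers named in the abstract.
base_learners = [
    ("mnb", MultinomialNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier(n_neighbors=3)),
]

# Layer 2: a random forest meta-learner stacked on top of the
# base classifiers' out-of-fold predictions.
stacking_model = StackingClassifier(
    estimators=base_learners,
    final_estimator=RandomForestClassifier(n_estimators=100, random_state=0),
    cv=2,  # small CV split only because the toy corpus is tiny
)

# TF-IDF vectorization is a common choice for text categorization;
# it is an assumption here, not confirmed by the abstract.
model = make_pipeline(TfidfVectorizer(), stacking_model)
model.fit(texts, labels)
predictions = model.predict(texts)
```

After fitting, the pipeline can classify a new article with `model.predict(["the goalkeeper made a great save"])`, returning one of the trained class labels.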