Abstract

Incremental Learning (IL) is a paradigm that deals with classification problems on streaming or sequential data. IL aims to achieve the same prediction accuracy on streaming data as a batch learning model that can see the entire data set at once. Traditional algorithms that learn from streaming data perform nowhere near as well as batch learning algorithms, and reducing the regret of IL is a challenging task in real-world applications. Hence, an innovative algorithm that improves IL's performance is a necessity. In this paper, we propose a multi-tier stacked ensemble (MTSE) algorithm that uses incremental learners as the base classifiers. In this novel algorithm, the incremental learners produce predictions that are combined by combination schemes in the next tier, and a meta-learner in the following tier generalizes the outputs of the combination schemes to give the final prediction. We tested MTSE on three data sets from the UCI Machine Learning Repository. The results reveal that MTSE is superior in performance to stacked ensemble (SE) learning.

Highlights

  • A streaming data emanates continuously from one or more data sources at high speed [1], [2]

  • The analysis reveals that the percentage of training data has no major impact on the performance of the multi-tier stacked ensemble (MTSE)

  • MTSE ensures that the accuracy keeps increasing in every tier


Summary

INTRODUCTION

Streaming data emanates continuously from one or more data sources at high speed [1], [2]. The primary challenge in reducing regret is that the endpoint of incremental learning keeps moving with every arrival of a new set of data [21]. To reduce the regret of incremental learning, the algorithm needs to increase the accuracy of its models and adapt to concept drift [27]. This study goes one step further by using multiple ensemble algorithms and generalizing their predictions with another classifier: it uses an MTSE algorithm for incremental learning, which satisfies the usual requirements of an incremental learner:

  • Each block of data is processed only once
  • A fixed amount of memory is used irrespective of the size of the data set
  • The algorithm can be stopped at any time to obtain the best prediction so far

In this study, MTSE simulates the steady flow of streaming data from a large static dataset.
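The requirements above can be illustrated with a minimal sketch (not code from the paper; the function name and block size are illustrative) of how a large static dataset can be replayed as a steady stream of fixed-size blocks, so that each record is processed only once and memory use stays bounded:

```python
from typing import Iterator, List, Sequence

def stream_blocks(dataset: Sequence, block_size: int) -> Iterator[List]:
    """Replay a static dataset as a stream of fixed-size blocks.

    Each record is yielded exactly once, and at most `block_size`
    records are buffered at a time, regardless of the dataset's size.
    """
    block: List = []
    for record in dataset:
        block.append(record)
        if len(block) == block_size:
            yield block
            block = []
    if block:  # final partial block, if the dataset size is not a multiple
        yield block
```

An incremental learner would consume these blocks one at a time, updating its model after each block and discarding the block afterwards.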

RELATED WORK
MATHEMATICAL MODEL
(Algorithm listing only partially extracted: its input is the labels predicted by the base classifiers, which are combined by weighted majority voting, after which the accuracy of the ensemble classifier is calculated.)
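As a minimal sketch of the weighted-majority-voting combination scheme named above (the function names and weights are illustrative, not taken from the paper), each base classifier's predicted label is counted with that classifier's weight, and the label with the largest total weight wins:

```python
from collections import Counter

def weighted_majority_vote(labels, weights):
    """Combine base-classifier labels for one instance: each classifier's
    vote counts with its weight; the label with the highest total wins."""
    totals = Counter()
    for label, weight in zip(labels, weights):
        totals[label] += weight
    return totals.most_common(1)[0][0]

def ensemble_accuracy(y_true, y_pred):
    """Fraction of instances where the ensemble prediction is correct."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, with labels `['a', 'b', 'a', 'b']` and weights `[0.1, 0.4, 0.2, 0.4]`, label `'b'` wins (total weight 0.8 versus 0.3), even though the unweighted vote is tied.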
Findings
CONCLUSION