Abstract

Developing robust and sustainable systems is an important problem when deep learning models are used in real-world applications. Ensemble methods combine diverse models to improve performance and achieve robustness. The analysis of time series data requires dealing with continuously incoming instances; however, most ensemble models suffer when adapting to a change in data distribution. In this study, we therefore propose an on-line ensemble deep learning algorithm that aggregates deep learning models and adjusts the ensemble weights based on their loss values. We theoretically demonstrate that the ensemble weights converge to a limiting distribution and thus minimize the average total loss under a new regret measure based on an adversarial assumption. We also present an overall framework that can be applied to analyze time series. In the experiments, we focused on the on-line phase, in which the ensemble models predict binary classes for simulated data and for financial and non-financial real data. The proposed method outperformed other ensemble approaches. Moreover, our method was not only robust to intentional attacks but also sustainable under changes in data distribution. In future work, our algorithm can be extended to regression and multiclass classification problems.
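The loss-based weight adjustment can be pictured with a short sketch. The code below is a minimal illustration of one common way to implement such an update, an exponential (Hedge-style) re-weighting of ensemble members followed by weighted aggregation of their binary predictions; the update rule, the learning rate `eta`, the squared loss, and the `predict_proba`/`update` model interface are assumptions for illustration, not the exact algorithm or interfaces proposed in the paper.

```python
import numpy as np

def online_ensemble(models, stream, eta=0.1):
    """Yield a binary prediction for each (x, y) pair in `stream`,
    re-weighting the ensemble members according to their losses."""
    k = len(models)
    weights = np.full(k, 1.0 / k)  # start from a uniform weight distribution
    for x, y in stream:
        probs = np.array([m.predict_proba(x) for m in models])  # each model's P(y = 1 | x)
        yield int(weights @ probs >= 0.5)   # weighted aggregation, then threshold
        losses = (probs - y) ** 2           # per-model loss on the revealed label
        weights *= np.exp(-eta * losses)    # down-weight models with large losses
        weights /= weights.sum()            # renormalize to a probability distribution
        for m in models:
            m.update(x, y)                  # each member may also keep learning on-line
```

Because high-loss members are down-weighted multiplicatively at every step, the ensemble can shift its weight distribution toward whichever models currently fit the incoming data, which is the behavior the abstract describes for handling changes in data distribution.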

Highlights

  • Ensemble methods have been developed to achieve robust and high performance in various tasks, such as image classification, on-line learning, financial data prediction, and clustering [1,2,3,4,5,6,7]

  • In this paper, we propose an on-line ensemble learning algorithm that changes the weight distribution of the ensemble model on the basis of the loss values, so that it can adapt to changes in the properties of incoming instances

  • Our algorithm can be used as a general framework for aggregating deep learning models to analyze time series data because it is applicable to any model that is learned by minimizing a loss function (see the sketch after this list)
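To make the "general framework" point in the last bullet concrete, the sketch below shows a hypothetical ensemble member that fits the `predict_proba`/`update` interface used in the earlier sketch; the `LogisticMember` class and its per-instance SGD step are illustrative assumptions, not part of the paper, which aggregates deep learning models in this role.

```python
import numpy as np

class LogisticMember:
    """Hypothetical ensemble member: a logistic model trained by one SGD
    step per instance on the logistic loss. Any model that is learned by
    minimizing a loss function could expose the same small interface."""

    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict_proba(self, x):
        # P(y = 1 | x) for a single instance x (a NumPy vector of length dim)
        return 1.0 / (1.0 + np.exp(-self.w @ x))

    def update(self, x, y):
        # One stochastic-gradient step on the logistic loss
        self.w -= self.lr * (self.predict_proba(x) - y) * x
```

In the same way, a deep network that takes a gradient step on its own loss for each incoming instance could sit behind `predict_proba` and `update` and be re-weighted by the ensemble as the data stream evolves.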


Introduction

Ensemble methods have been developed to achieve robust and high performance in various tasks, such as image classification, on-line learning, financial data prediction, and clustering [1,2,3,4,5,6,7]. Such methods aim to construct a group of models and aggregate their results, with high diversity among the models being preferred. These two components of ensemble learning are well suited to on-line learning scenarios because they help the overall model adapt to changing input data.
