Abstract

Real-life time series exhibit complicated patterns, so it can be challenging to achieve high prediction accuracy with machine learning or conventional statistical methods used as single learners. This research outlines and investigates the Stacking Multi-Learning Ensemble (SMLE) model for time series prediction over various horizons, with a focus on forecast accuracy, directional hit rate, and the average growth rate of total oil demand. The investigation presents a flexible ensemble framework that blends heterogeneous models for modeling and forecasting nonlinear time series. The proposed SMLE model combines support vector regression (SVR), backpropagation neural network (BPNN), and linear regression (LR) learners; the ensemble architecture consists of four phases: generation, pruning, integration, and ensemble prediction. We conducted an empirical study to evaluate and compare the performance of SMLE on Global Oil Consumption (GOC) data. The proposed model was assessed at single- and multi-step prediction horizons against established benchmark techniques. The final results reveal that the proposed SMLE model outperforms all the other benchmark methods listed in this study in error rate, similarity, and directional accuracy, with values of 0.74%, 0.020%, and 91.24%, respectively. This study therefore demonstrates that the ensemble model is a highly promising methodology for complex time series forecasting.

Highlights

  • In Machine Learning (ML), ensemble methods combine various learners to calculate prediction based on constituent learning algorithms [1]

  • The output of 10-fold cross-validation tests run on the initial training set was used to determine whether each model was adequate for the Oil Consumption (OC) data, making the forecasting results more stable

  • It is evident that the results obtained with the support vector regression (SVR) method for the 52 known years (1965–2016) were close to the actual values and comparable to those produced by the backpropagation neural network (BPNN) and linear regression (LR) models
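The model-screening step described in the highlights can be sketched as follows. This is a minimal illustration, assuming scikit-learn-style estimators; the data here are synthetic stand-ins for the oil-consumption series, and all hyperparameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic regression data standing in for the real OC series (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(120, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(0, 0.1, 120)

# Candidate base learners, mirroring the paper's SVR / BPNN / LR trio.
candidates = {
    "SVR": SVR(kernel="rbf", C=10.0),
    "BPNN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "LR": LinearRegression(),
}

# Mean R^2 over 10 folds on the training data; a candidate is kept for
# the ensemble only if it generalizes well enough.
scores = {
    name: cross_val_score(model, X, y, cv=10, scoring="r2").mean()
    for name, model in candidates.items()
}

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean 10-fold R2 = {s:.3f}")
```

A screening threshold on the mean score (not shown) would then decide which learners enter the ensemble's pruning phase.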



Introduction

In Machine Learning (ML), ensemble methods combine various learners to compute predictions based on constituent learning algorithms [1]. The boosting method, which builds an ensemble by adding new instances to emphasize misclassified cases, yields competitive performance for time series forecasting [4]. As the most widely used implementation of boosting, AdaBoost [5] has been compared with other ML algorithms such as support vector machines (SVM) [6] and combined with them to enhance forecasting performance [7]. Stacking [8] is an ensemble learning (EL) method that employs multiple algorithms: it combines the outputs produced by various base learners at the first level. By utilizing a meta-learner, it then merges the outcomes from these base learners in an optimal way to augment generalization ability [9].
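The two-level stacking idea above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: it assumes scikit-learn's `StackingRegressor`, uses synthetic nonlinear data, and picks hyperparameters arbitrarily. The base level mirrors the SVR / BPNN / LR trio, and a linear-regression meta-learner combines their out-of-fold predictions.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Synthetic nonlinear target standing in for a real series (assumption).
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# First-level (base) learners.
base_learners = [
    ("svr", SVR(C=10.0)),
    ("bpnn", MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=1)),
    ("lr", LinearRegression()),
]

# The meta-learner is trained on the base learners' cross-validated
# predictions, which is the standard stacking scheme described above.
stack = StackingRegressor(
    estimators=base_learners, final_estimator=LinearRegression(), cv=5
)
stack.fit(X_tr, y_tr)
r2 = stack.score(X_te, y_te)
print(f"stacked ensemble test R2 = {r2:.3f}")
```

Because the meta-learner sees only out-of-fold base predictions, the second level learns how to weight each base model without overfitting to their training-set errors.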

