Abstract

Coronavirus disease 2019 (Covid-19) is a contagious pandemic illness characterized by severe acute respiratory syndrome. The daily rise in Covid-19 cases and fatalities has resulted in worldwide lockdowns, quarantines, and social distancing. Researchers have been working intensively to develop precisely targeted strategies to combat the Covid-19 pandemic. This study aims to develop a cyclical learning rate optimized stacked generalization computational model (CLR-SGCM) for predicting Covid-19 pandemic outbreaks. The stacked generalization framework performs hierarchical two-phase prediction. In the first phase, deep learning models, namely Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), and the statistical model Auto Regressive Integrated Moving Average (ARIMA) are used as sub-models to create a pooled dataset (PDS). A cyclical learning rate (CLR) optimizer is used to tune the learning rate of the ensemble deep learning models LSTM and GRU. In the second phase, a meta-learner is trained on the pooled dataset (PDS) using four regression algorithms, namely linear regression, polynomial regression, lasso regression, and ridge regression, to perform the final predictions. Time series data from India, Brazil, and the United States were utilized to forecast the Covid-19 pandemic outbreak. Experimental findings show that the presented stacking ensemble model outperforms the individual learners in terms of accuracy and error rate.
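A minimal sketch of the two-phase pipeline described above is given below, assuming daily case counts as a univariate series. The window length, network sizes, CLR bounds, ARIMA order, and regularization strengths are illustrative placeholders, not the settings used in the study.

```python
import numpy as np
import tensorflow as tf
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

WINDOW = 7  # days of history per sample (assumed, not from the paper)

def make_windows(series, window=WINDOW):
    """Turn a 1-D series of daily counts into supervised (X, y) pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def triangular_clr(base_lr=1e-4, max_lr=1e-2, step_size=20):
    """Triangular cyclical learning rate schedule, applied per epoch."""
    def schedule(epoch, lr):
        cycle = np.floor(1 + epoch / (2 * step_size))
        x = abs(epoch / step_size - 2 * cycle + 1)
        return float(base_lr + (max_lr - base_lr) * max(0.0, 1 - x))
    return tf.keras.callbacks.LearningRateScheduler(schedule)

def build_rnn(cell):
    """Small single-layer LSTM/GRU regressor; layer width is an assumption."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        cell(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def phase_one(series):
    """Phase 1: train LSTM, GRU (with CLR) and ARIMA, pool their predictions."""
    X, y = make_windows(series)
    X_rnn = X[..., np.newaxis]
    base_preds = []
    for cell in (tf.keras.layers.LSTM, tf.keras.layers.GRU):
        model = build_rnn(cell)
        model.fit(X_rnn, y, epochs=100, verbose=0, callbacks=[triangular_clr()])
        base_preds.append(model.predict(X_rnn, verbose=0).ravel())
    # ARIMA order (2, 1, 2) is a placeholder; the abstract does not specify it.
    arima = ARIMA(series, order=(2, 1, 2)).fit()
    base_preds.append(np.asarray(arima.predict(start=WINDOW, end=len(series) - 1)))
    pds = np.column_stack(base_preds)  # pooled dataset (PDS) of base-model outputs
    return pds, y

def phase_two(pds, y):
    """Phase 2: fit the four meta-learners on the pooled dataset."""
    metas = {
        "linear": LinearRegression(),
        "polynomial": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
        "lasso": Lasso(alpha=0.1),
        "ridge": Ridge(alpha=1.0),
    }
    return {name: m.fit(pds, y) for name, m in metas.items()}

if __name__ == "__main__":
    # Synthetic stand-in for a country's cumulative daily case series.
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.poisson(100, size=200)).astype(float)
    pds, y = phase_one(series)
    meta_models = phase_two(pds, y)
    print({name: m.predict(pds[-1:]).item() for name, m in meta_models.items()})
```

The abstract does not state how the pooled dataset is formed, so this sketch simply reuses in-sample base-model predictions; in practice, stacked generalization typically generates the meta-learner's training features out-of-fold to avoid leakage.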
