Neural Network Approaches to Temporal Pattern Recognition: Applications in Demand Forecasting and Predictive Analytics

Abstract

Temporal pattern recognition has become increasingly critical for predictive analytics in various domains, particularly in demand forecasting where accurate predictions directly impact business operations and profitability. Neural network (NN) architectures have demonstrated remarkable capabilities in capturing complex temporal dependencies within sequential data, outperforming traditional statistical methods in numerous applications. This review examines the evolution and application of neural network approaches specifically designed for temporal pattern recognition, with emphasis on their utilization in demand forecasting and predictive analytics. The paper provides a comprehensive analysis of recurrent neural networks (RNNs), long short-term memory (LSTM) networks, gated recurrent units (GRUs), convolutional neural networks (CNNs), and transformer-based architectures in the context of time series forecasting. Furthermore, this review explores the integration of attention mechanisms, the emergence of spatiotemporal graph neural networks (STGNNs), and hybrid model architectures that combine multiple approaches to enhance forecasting accuracy. The evaluation metrics commonly employed to assess model performance, including mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE), are discussed alongside benchmark datasets utilized in the field. Through systematic examination of recent literature spanning from 2019 to 2025, this review identifies key architectural innovations, practical applications in retail and supply chain management, and emerging trends that define the current state of temporal pattern recognition. The findings reveal that while transformer-based models have gained significant attention for long-sequence forecasting, simpler linear architectures and hybrid approaches often demonstrate competitive or superior performance depending on dataset characteristics and application requirements. 
This comprehensive review serves as a foundation for researchers and practitioners seeking to understand the landscape of neural network methodologies for temporal pattern recognition and their practical deployment in demand forecasting systems.
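The error metrics named throughout this review can be stated in a few lines each. The sketch below is purely illustrative (the `actual`/`predicted` values are made up), showing why MAPE is scale-independent while RMSE penalizes large errors more heavily than MAE:

```python
import math

def mae(actual, predicted):
    # Mean absolute error: average magnitude of the errors
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error: squaring weights large errors more heavily
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    # Mean absolute percentage error: scale-independent, undefined when an actual is zero
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 200.0, 300.0]
predicted = [110.0, 190.0, 330.0]
print(round(mae(actual, predicted), 2))   # 16.67
print(round(rmse(actual, predicted), 2))  # 19.15
print(round(mape(actual, predicted), 2))  # 8.33
```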

Similar Papers
  • Research Article
  • Cited by 54
  • 10.3934/mbe.2021022
Artificial Intelligence based accurately load forecasting system to forecast short and medium-term load demands.
  • Dec 14, 2020
  • Mathematical Biosciences and Engineering
  • Faisal Mehmood Butt + 3 more

Efficient management and better scheduling by power companies depend on accurate electrical load forecasting. Load time series contain a high level of uncertainty, which makes accurate short-term load forecasting (STLF), medium-term load forecasting (MTLF), and long-term load forecasting (LTLF) challenging. To extract local trends and capture recurring patterns in short- and medium-term forecasting time series, we propose long short-term memory (LSTM), multilayer perceptron (MLP), and convolutional neural network (CNN) models to learn the relationships in the time series and improve forecasting accuracy. The models were tested on a real-world case through detailed experiments that validate their stability and practicality. Performance was measured in terms of R-squared (R2), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Error (MAE). For 24-hour-ahead load forecasting, the lowest prediction errors were obtained with LSTM for R2 (0.5160) and MLP for MAPE (4.97), MAE (104.33), and RMSE (133.92). For 72-hour-ahead forecasting, the lowest errors were obtained with LSTM for R2 (0.7153) and MLP for MAPE (7.04), MAE (125.92), and RMSE (188.33). Likewise, for one-week-ahead forecasting, the lowest errors were obtained with CNN for R2 (0.7616) and MLP for MAPE (6.162), MAE (103.156), and RMSE (150.81). For one-month-ahead forecasting, the lowest errors were obtained with CNN for R2 (0.820), MLP for MAPE (5.18), and LSTM for MAE (75.12) and RMSE (109.197). The results reveal that the proposed methods achieve better and more stable performance for short- and medium-term load forecasting. 
The STLF findings indicate that the proposed model is well suited to local system planning and dispatch, while for MTLF it is more efficient for scheduling and maintenance operations.

  • Preprint Article
  • Cited by 2
  • 10.5194/egusphere-egu21-9572
Application of deep learning methods for urban water demand forecast modelling
  • Mar 4, 2021
  • Anjana G Rajakumar + 2 more

Water demand prediction forms an integral part of sustainable management practices for water supply systems. Demand prediction models aid in water system maintenance, expansion, daily operational planning, and the development of efficient decision support systems based on predictive analytics. In recent years, such models have also found wide application in real-time control and operation of water systems. However, short-term water demand forecasting is a challenging problem owing to the frequent variations in urban water demand patterns. Numerous methods in the literature deal with water demand forecasting; they can be roughly classified into statistical and machine learning methods. The application of deep learning methods for forecasting water demand is an emerging research area that has gained immense traction due to its ability to provide accurate and scalable models, but only a few works compare and review these methods on a water demand dataset. Hence, the main objective of this work is the application of different commonly used deep learning methods to develop a short-term water demand forecast model for a real-world dataset. The algorithms studied are (i) Multi-Layer Perceptron (MLP), (ii) Gated Recurrent Unit (GRU), (iii) Long Short-Term Memory (LSTM), (iv) Convolutional Neural Networks (CNN), and (v) the hybrid CNN-LSTM algorithm. The optimal supervised learning framework required for forecasting the one-day-ahead water demand for the study area is also identified. The dataset used is from Hillsborough County, Florida, US; the water demand data cover a duration of 10 months at roughly hourly frequency. The algorithms were evaluated based on (i) Mean Absolute Percentage Error (MAPE) and (ii) Root Mean Squared Error (RMSE) values. 
Visual comparison of the predicted and true demand plots was also employed to check prediction accuracy. The RMSE and MAPE values were minimal for the supervised learning framework that used the previous 24 hours of data as input. With respect to forecast accuracy, CNN-LSTM performed better than the other methods, followed by MLP; MAPE values for the developed deep learning models ranged from 5% to 25%. The quantity, frequency, and quality of the data were also found to have a substantial impact on the accuracy of the forecast models. In the CNN-LSTM forecast model, the CNN component effectively extracted inherent characteristics of historical water consumption data, such as trend and seasonality, while the LSTM component captured the long-term historical process and future trend. Its water demand prediction accuracy was therefore improved compared to GRU, MLP, CNN, and LSTM.
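The supervised learning framework described above, which uses the previous 24 hours of data as input, amounts to a sliding-window transformation of the series into input/target pairs. A minimal sketch (the `make_supervised` helper and the stand-in series are illustrative, not from the paper):

```python
def make_supervised(series, window):
    """Slice a univariate series into (input window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])  # previous `window` observations
        y.append(series[i + window])    # one-step-ahead target
    return X, y

demand = list(range(30))            # stand-in for hourly demand readings
X, y = make_supervised(demand, window=24)
print(len(X), len(X[0]), y[0])      # 6 24 24
```

Any of the models compared in the paper (MLP, GRU, LSTM, CNN, CNN-LSTM) can then be trained on these `(X, y)` pairs.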

  • Research Article
  • Cited by 16
  • 10.1016/j.suscom.2023.100869
A CNN encoder decoder LSTM model for sustainable wind power predictive analytics
  • Apr 1, 2023
  • Sustainable Computing: Informatics and Systems
  • Sherry Garg + 1 more


  • Research Article
  • Cited by 5
  • 10.24084/repqj21.226
Dynamic energy prices for residential users based on Deep Learning prediction models of consumption and renewable generation
  • Dec 28, 2023
  • RE&PQJ
  • J Cano-Martínez + 3 more


  • Research Article
  • Cited by 1
  • 10.15282/daam.v4i2.10195
Predicting Bitcoin and Ethereum prices using long short-term memory and gated recurrent unit
  • Sep 30, 2023
  • Data Analytics and Applied Mathematics (DAAM)
  • Muhammad Haziq Abdul Hadi + 2 more

Predicting future prices of cryptocurrencies, including Bitcoin and Ethereum, presents a formidable challenge owing to their inherent volatility. This study applies Long Short-Term Memory (LSTM), a well-established recurrent neural network for time series forecasting, to predict Bitcoin and Ethereum values. Historical price data for both cryptocurrencies, sourced from Yahoo Finance, serves as the basis for analysis. The dataset undergoes an 80% training and 20% testing partition. Subsequently, LSTM models are developed and trained on both datasets. In parallel, the gated recurrent unit (GRU), recognized as an advanced variant of the LSTM model, is explored for comparative purposes. Performance evaluation utilizes fundamental metrics, including root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results reveal an intriguing trend: both models exhibit superior performance when applied to the Ethereum dataset compared to the Bitcoin dataset. This observation suggests the potential presence of Ethereum-specific features or patterns that align more effectively with deep learning model architectures. Notably, the GRU model consistently outperforms the LSTM model across RMSE, MAE, and MAPE. These outcomes underscore the GRU model’s capacity as a robust tool for cryptocurrency value prediction. In summary, this study tackles the challenge of cryptocurrency price prediction while emphasizing the promising role of advanced neural network architectures, such as GRU, in enhancing prediction accuracy, thus offering valuable insights into financial forecasting.
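The 80%/20% partition described above must be chronological for time series: the test set has to lie strictly after the training set in time, rather than being a random shuffle. A minimal sketch (the `chrono_split` helper and the stand-in price list are illustrative):

```python
def chrono_split(series, train_frac=0.8):
    # Split by position, never by shuffling, so the test period
    # comes strictly after the training period in time.
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

prices = [float(i) for i in range(100)]  # stand-in for daily closing prices
train, test = chrono_split(prices)
print(len(train), len(test))             # 80 20
```

The LSTM and GRU models in the study are then fitted on `train` and evaluated on `test` with RMSE, MAE, and MAPE.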

  • Research Article
  • 10.7717/peerj-cs.2680
Deep learning-based novel ensemble method with best score transferred-adaptive neuro fuzzy inference system for energy consumption prediction.
  • Feb 21, 2025
  • PeerJ. Computer science
  • Birce Dağkurs + 1 more

Energy consumption predictions for smart homes and cities benefit many stakeholders, from homeowners to energy suppliers, allowing homeowners to understand and manage their future energy consumption, improve energy efficiency, and reduce energy costs. Predictions can also help energy suppliers distribute energy effectively on demand. Accordingly, numerous methods using collected data, employing both statistical and artificial intelligence (AI)-based approaches, have been developed to achieve successful energy consumption predictions. This study proposes a deep learning-based novel ensemble (DLBNE) method with a best score transferred-adaptive neuro fuzzy inference system (BST-ANFIS) as a high-performance and robust approach for energy consumption prediction. The proposed method uses deep learning (DL)-based algorithms, including convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM), bidirectional long short-term memory (BI-LSTM), and gated recurrent units (GRUs), as base predictors. The BST-ANFIS architecture combines the individual outcomes of these predictors. To build a robust and dynamic prediction model, the interaction between the base predictors and the ANFIS architecture is achieved using a best-score-transfer approach. The performance of the proposed method in energy consumption prediction was verified against five DL methods, five machine learning (ML) methods, and a DL-based weighted average (DLBWA) ensemble method. In the experimental studies, results were obtained from three-stage analyses: fold, average, and periodic performance analyses. 
In fold analyses, the proposed method, in terms of the root mean square error (RMSE) metric, demonstrated better performance in four folds on the Internet of Things (IoT)-based smart home (IBSH) dataset, two in the homestead city electricity consumption (HCEC) dataset, and two in the individual household power consumption (IHPC) dataset compared to the other methods. In the average performance analyses, it showed significantly higher performance than the other methods in all metrics for the IBSH and IHPC datasets, and in metrics except the mean absolute error (MAE) metric for the HCEC dataset. The performance results in terms of RMSE, MAE, mean square error (MSE), and mean absolute percentage error (MAPE) metrics from these analyses were obtained as 0.001531, 0.001010, 0.0000031, and 0.001573 for the IBSH dataset; 0.025208, 0.005889, 0.001884, and 0.000137 for the HCEC dataset; and 0.013640, 0.006572, 0.000356, and 0.000943 for the IHPC dataset, respectively. The results of the 120-h periodic analyses also showed that the proposed method yielded a better prediction result than the other methods. Furthermore, a comparison of the proposed method with similar studies in the literature revealed that it demonstrated competitive performance in relation to the methods employed in those studies.

  • Research Article
  • 10.37391/ijeer.130208
Application of LSTM and GRU Neural Networks in Forecasting the Power Output of Wind Power Plant
  • May 30, 2025
  • International Journal of Electrical and Electronics Research
  • Dan Bui Thi Tuyet + 4 more

This paper proposes the application of artificial intelligence to forecast the generation capacity of wind power plants, processing the data through noise reduction and filtering. It then employs Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural networks for training, testing, and evaluation. Preprocessing the initial data helps minimize noise and reduce the data space. The study focuses on preprocessing methods and on selecting the more appropriate neural network between LSTM and GRU. The initial data processing assesses similarity through the Spearman rank correlation coefficient. The data used in the paper are taken from local wind turbines. The processed data are input into the neural networks for evaluation based on Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Percent Error (PE), and training time. The entire network simulation and evaluation process is performed in MATLAB. The simulation results show the feasibility and suitability of the GRU network model combined with noise filtering, which achieves higher accuracy and shorter training time than the LSTM network. Specifically, the GRU network with the optimized dataset outperforms the LSTM with the unprocessed dataset by a difference of 15.592 in RMSE and 611.047% in MAPE. Moreover, with the same dataset, the GRU network trains 6 to 28 seconds faster than the LSTM network.
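The Spearman rank correlation used in the paper's preprocessing is simply the Pearson correlation computed on the rank vectors of the two series. A minimal pure-Python sketch (the toy wind-speed/power values are made up for illustration; the paper itself works in MATLAB):

```python
def ranks(values):
    # 1-based rank positions; tied values receive their average rank
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

wind_speed = [3.1, 4.0, 5.2, 6.8, 7.5]   # illustrative readings
power_out  = [0.2, 0.5, 1.1, 2.0, 2.6]
print(round(spearman(wind_speed, power_out), 3))  # 1.0 (perfectly monotone)
```

Because Spearman compares ranks rather than raw values, it is robust to the nonlinear but monotone relationship between wind speed and power output.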

  • Research Article
  • Cited by 26
  • 10.1007/s10489-022-03251-7
A hybrid extreme learning machine model with harris hawks optimisation algorithm: an optimised model for product demand forecasting applications
  • Jan 26, 2022
  • Applied Intelligence
  • Koushiki Dasgupta Chaudhuri + 1 more

Accurate and real-time product demand forecasting is the need of the hour in the world of supply chain management. Predicting future product demand from historical sales data is a highly non-linear problem, subject to various external and environmental factors. In this work, we propose an optimised forecasting model - an extreme learning machine (ELM) model coupled with the Harris Hawks optimisation (HHO) algorithm to forecast product demand in an e-commerce company. ELM is preferred over traditional neural networks mainly due to its fast computational speed, which allows efficient demand forecasting in real-time. Our ELM-HHO model performed significantly better than ARIMA models that are commonly used in industries to forecast product demand. The performance of the proposed ELM-HHO model was also compared with traditional ELM, ELM auto-tuned using Bayesian Optimisation (ELM-BO), Gated Recurrent Unit (GRU) based recurrent neural network and Long Short Term Memory (LSTM) recurrent neural network models. Different performance metrics, i.e., Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE) and Mean Percentage Error (MPE) were used for the comparison of the selected models. Horizon forecasting at 3 days and 7 days ahead was also performed using the proposed approach. The results revealed that the proposed approach is superior to traditional product demand forecasting models in terms of prediction accuracy and it can be applied in real-time to predict future product demand based on the previous week’s sales data. In particular, considering RMSE of forecasting, the proposed ELM-HHO model performed 62.73% better than the statistical ARIMA(7,1,0) model, 40.73% better than the neural network based GRU model, 34.05% better than the neural network based LSTM model, 27.16% better than the traditional non-optimised ELM model with 100 hidden nodes and 11.63% better than the ELM-BO model in forecasting product demand for future 3 months. 
The novelty of the proposed approach lies in the way the fast computational speed of ELMs has been combined with the accuracy gained by tuning hyperparameters using HHO. An increased number of hyperparameters has been optimised in our methodology compared to available models. The majority of approaches to improve the accuracy of ELM so far have only focused on tuning the weights and the biases of the hidden layer. In our hybrid model, we tune the number of hidden nodes, the number of input time lags and even the type of activation function used in the hidden layer in addition to tuning the weights and the biases. This has resulted in a significant increase in accuracy over previous methods. Our work presents an original way of performing product demand forecasting in real-time in industry with highly accurate results which are much better than pre-existing demand forecasting models.
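The fast computational speed the authors exploit comes from the structure of the ELM itself: a random, untrained hidden layer whose output weights are solved in closed form by least squares. The sketch below is a generic ELM illustration on a toy sine series, not the paper's ELM-HHO model (no Harris Hawks tuning of nodes, lags, or activations here):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50):
    # Random input weights and biases are drawn once and never trained;
    # only the output weights are solved, in closed form, by least squares.
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy series: predict each value from the previous 3 observations
series = np.sin(np.arange(120) * 0.3)
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
print(float(np.sqrt(np.mean((pred - y) ** 2))))  # in-sample RMSE (small)
```

Because training reduces to one linear solve, refitting on each new week of sales data is cheap, which is what makes the real-time deployment described above plausible.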

  • Conference Article
  • 10.20906/cba2024/4724
Forecasting demand in data-limited environments: a benchmarking study
  • Oct 18, 2024
  • Walquiria N Silva + 6 more

Analysis of electricity consumption profiles is essential for effective load management and the promotion of energy efficiency, thus contributing to sustainable development. However, the limited availability of data, whether due to scarcity, lack of adequate monitoring infrastructure, or privacy issues, poses significant challenges. Despite these limitations, it is essential to investigate the feasibility of using available data for accurate forecasting. This study proposes to forecast electricity demand in university buildings, considering scenarios with limited data. The methodology involves comparative benchmarking of machine learning models, such as recurrent neural networks (RNN), long short-term memory (LSTM) networks, gated recurrent units (GRU), and convolutional neural networks (CNN), with the autoregressive integrated moving average (ARIMA) model. The performance of these models is evaluated using metrics such as mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), as well as generalizing the methodology to different buildings. The results show that the deep learning models outperform ARIMA in several scenarios, with the GRU model standing out for its accuracy, which is 2.38% higher than ARIMA. The evaluation of the proposed forecasting models provides an objective reference for identifying best practices in electricity demand forecasting, validating their usefulness in practical scenarios, and guiding future investments in research aimed at developing energy management systems and energy efficiency strategies.

  • Research Article
  • 10.3389/fdata.2025.1666962
Application and comparison of ARIMA, LSTM, and ARIMA-LSTM models for predicting foodborne diseases in Liaoning Province.
  • Nov 12, 2025
  • Frontiers in big data
  • Xiaoxiao Du + 6 more

To compare the application of the ARIMA model, the Long Short-Term Memory (LSTM) model, and the ARIMA-LSTM model in forecasting foodborne disease incidence. Monthly case data of foodborne diseases in Liaoning Province from January 2015 to December 2023 were used to construct ARIMA, LSTM, and ARIMA-LSTM models. These three models were then applied to forecast the monthly incidence of foodborne diseases in 2024, and their predictions were compared with those of a baseline model. Model performance was evaluated by comparing the predicted and observed values using root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), allowing identification of the optimal model. The best-performing model was subsequently employed to predict the monthly incidence for 2025. The ARIMA-LSTM model was identified as the optimal model. Specifically, the ARIMA(2,0,0)(0,1,1)₁₂ model produced RMSE = 300.03, MAE = 187.11, and MAPE = 16.38%, while the LSTM model yielded RMSE = 408.71, MAE = 226.03, and MAPE = 17.21%. In contrast, the ARIMA-LSTM model achieved RMSE = 0.44, MAE = 0.44, and MAPE = 0.08%, representing a dramatic improvement over the baseline model (RMSE = 204.17, MAE = 146.75, MAPE = 15.62%), with reductions of 99.5%, 99.7%, and 99.4% in RMSE, MAE, and MAPE, respectively. Based on the ARIMA-LSTM model, the predicted monthly cases of foodborne diseases for 2025 are: 214.62 (Jan), 260.84 (Feb), 462.92 (Mar), 590.92 (Apr), 800.88 (May), 965.11 (Jun), 2410.36 (Jul), 2651.36 (Aug), 1711.15 (Sep), 941.22 (Oct), 628.21 (Nov), and 465.05 (Dec). The ARIMA-LSTM model is considered the optimal model for predicting foodborne disease incidence in Liaoning Province in 2025.
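The hybrid ARIMA-LSTM pattern has a simple shape: a linear model captures the linear structure, a second model fits its residuals, and the final forecast is the sum of both parts. The sketch below illustrates only that decomposition; an AR(1) least-squares fit stands in for ARIMA and naive persistence of the last residual stands in for the LSTM, and the toy case series is made up:

```python
def fit_ar1(series):
    # Least-squares fit of x_t ≈ a + b * x_{t-1} (stand-in for ARIMA)
    x_prev, x_next = series[:-1], series[1:]
    n = len(x_prev)
    mx, my = sum(x_prev) / n, sum(x_next) / n
    b = sum((p - mx) * (q - my) for p, q in zip(x_prev, x_next)) / \
        sum((p - mx) ** 2 for p in x_prev)
    return my - b * mx, b                  # intercept a, slope b

cases = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0]  # toy monthly counts
a, b = fit_ar1(cases)
linear_pred = [a + b * x for x in cases[:-1]]
residuals = [actual - pred for actual, pred in zip(cases[1:], linear_pred)]
# Stand-in for the LSTM residual model: persist the most recent residual
next_residual = residuals[-1]
hybrid_forecast = (a + b * cases[-1]) + next_residual
print(round(hybrid_forecast, 2))
```

In the real model, the LSTM learns the nonlinear patterns that remain after ARIMA has removed the linear and seasonal structure, which is why the hybrid can outperform either component alone.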

  • Research Article
  • Cited by 5
  • 10.1007/s11356-023-31148-6
A novel hybrid model based on two-stage data processing and machine learning for forecasting chlorophyll-a concentration in reservoirs.
  • Nov 28, 2023
  • Environmental science and pollution research international
  • Wenqing Yu + 4 more

The accurate and efficient prediction of chlorophyll-a (Chl-a) concentration is crucial for the early detection of algal blooms in reservoirs. Nevertheless, predicting Chl-a concentration in multivariate time series poses a significant challenge due to the complex interrelationships within the aquatic environment and the discrete and non-stationary nature of online monitoring of water quality data. To address the aforementioned issue, this paper proposes a novel prediction model named SGMD-KPCA-BiLSTM (SKB) for predicting Chl-a concentration. The model combines two-stage data processing and machine learning (ML). To capture nonlinear relationships in multivariate time series data, the optimal data subset is determined by combining symplectic geometry mode decomposition (SGMD) and kernel principal component analysis (KPCA). This subset is then input into a bidirectional long short-term memory (BiLSTM) model, and the model's hyperparameters are optimized using the sparrow search algorithm (SSA) to improve the accuracy of predictions. The performance of the model was evaluated at Qiaodian Reservoir in Shandong, China. To assess its superiority, the evaluation criteria included the root mean square error (RMSE), mean absolute percentage error (MAPE), mean absolute error (MAE), coefficient of determination (R2), frequency histograms of the prediction error, and the Taylor diagram. The prediction performance of five single models, namely the back-propagation (BP) neural network, support vector regression (SVR), long short-term memory (LSTM), convolutional neural network with long short-term memory (CNN-LSTM), and BiLSTM, as well as three hybrid models, namely SGMD-LSTM, SGMD-KPCA-LSTM, and SGMD-BiLSTM, were compared against the SKB model. The results demonstrated that the SKB model performs best in predicting Chl-a concentration (R2 = 96.19%, RMSE = 1.05, MAE = 0.65, MAPE = 0.08). It significantly reduced the prediction error compared to other models for comparison. 
Furthermore, the multi-step predictive capabilities of the SKB model are also discussed. The analysis shows a decline in predictive performance with larger prediction time steps, and the SKB model exhibits slightly superior performance compared to the other model at corresponding prediction intervals. The model has significant advantages in terms of its ability to accurately predict the non-smooth and nonlinear Chl-a sequences observed by the online monitoring system. This study presents a potential solution for controlling and preventing reservoir eutrophication, as well as an innovative approach for predicting water quality.

  • Research Article
  • Cited by 23
  • 10.3390/ijerph192416798
Forecasting the Status of Municipal Waste in Smart Bins Using Deep Learning
  • Dec 14, 2022
  • International Journal of Environmental Research and Public Health
  • Sabbir Ahmed + 3 more

The immense growth of the population generates a polluted environment that must be managed to ensure environmental sustainability, versatility, and efficiency in our everyday lives. In particular, municipalities are unable to cope with the increase in garbage, and many urban areas are becoming increasingly difficult to manage. Advances in technology allow researchers to transmit data from municipal bins using smart IoT (Internet of Things) devices. These bin data can support a compelling analysis of waste management instead of depending on historical datasets alone. Thus, this study proposes forecasting models comprising 1D CNN (Convolutional Neural Network), long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional long short-term memory (Bi-LSTM) networks for time series prediction of public bins. The models are evaluated using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), coefficient of determination (R²), and Root Mean Squared Error (RMSE). Across different numbers of epochs, hidden layers, dense layers, and units per hidden layer, the RMSE values measured for the 1D CNN, LSTM, GRU, and Bi-LSTM models are 1.12, 1.57, 1.69, and 1.54, respectively. The best MAPE value, 1.855, is obtained with the LSTM model. Therefore, our findings indicate that LSTM can be used for bin emptiness or fullness prediction for improved planning and management due to its proven resilience and forecast accuracy.

  • Research Article
  • Cited by 3
  • 10.31015/jaefs.2024.2.9
Innovation in the dairy industry: forecasting cow cheese production with machine learning and deep learning models
  • Jun 27, 2024
  • International Journal of Agriculture Environment and Food Sciences
  • Yunus Emre Gür

This study focuses on the use of deep learning and machine learning models to forecast cow cheese production in Turkey. In particular, our research utilizes the LSTM (long short-term memory) model to forecast cow cheese production for the next 12 months, extensively applying deep learning and machine learning techniques that have not previously been used in this field. In addition to LSTM, models such as GRU (Gated Recurrent Unit), MLP (Multi-Layer Perceptron), SVR (Support Vector Regression), and KNN (K-Nearest Neighbors) were also tested, and their performances were compared using RMSE (Root Mean Square Error), MSE (Mean Squared Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), and R² (coefficient of determination) metrics. The findings revealed that the LSTM model performed significantly better than the other models in terms of RMSE, MSE, MAE, and MAPE values. This result indicates that the LSTM model provides high accuracy and reliability in forecasting cow cheese production. This achievement offers important applications in areas such as supply chain management, inventory optimization, and demand forecasting in the dairy industry.

  • Research Article
  • 10.3233/idt-200167
On the domain aided performance boosting technique for deep predictive networks: A COVID-19 scenario
  • Apr 18, 2022
  • Intelligent Decision Technologies
  • Soumya Jyoti Raychaudhuri + 1 more

Deep learning models are one of the widely used techniques for forecasting time series data in various applications. It has already been established that the Recurrent Neural Networks (RNN) such as the Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), etc., perform well in analyzing sequence data for accurate time-series predictions. But, these specialized recurrent architectures suffer from certain drawbacks due to their computational complexity and also their dependency on short term historical data. Hence, there is a scope for further improvement. This paper analyzes the effects of various optimizers and hyper-parameter tuning, on the precision and time efficiency of different deep neural architectures. The analysis has been conducted on COVID-19 pandemic data. Since Convolutional Neural Networks (CNN) are known for their super-human ability in identifying patterns from images, the time-series data has been transformed into a slope-information domain for analyzing the slope patterns over time. The domain patterns have been projected on a 2D plane for further analysis using a restricted recursive CNN (RRCNN) algorithm. The experimental results reveal that the proposed methodology reduces the error over benchmarked sequence models by almost 20% and further reduces the training time by nearly 50%. The prediction models considered in this study have been evaluated using Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE%).

  • Research Article
  • Cited by 22
  • 10.1016/j.asoc.2024.111557
Hidden Markov guided Deep Learning models for forecasting highly volatile agricultural commodity prices
  • Apr 1, 2024
  • Applied Soft Computing
  • G Avinash + 9 more

