MULTIPLICATIVE NEURON MODEL BASED ON SINE COSINE ALGORITHM FOR TIME SERIES PREDICTION

Abstract

Time series prediction is a method to predict the future behavior of a system from currently available data. The Neural Network (NN) approach is a well-known technique for time series prediction, and many NN models, such as the Multilayer Perceptron (MLP), the Pi-Sigma NN (PSNN), and the Recurrent NN, have been proposed in the literature to solve it. In this paper, we use the Multiplicative Neuron Model (MNM) to predict time series. To train this model, we propose using a newly developed evolutionary optimization algorithm called the Sine Cosine Algorithm (SCA); to the best of our knowledge, this algorithm has not previously been used to train the MNM. The proposed SCA-MNM model is applied to well-known time series problems: its use for time series prediction is illustrated on two widely used datasets, the Mackey-Glass time series dataset and the Box-Jenkins gas furnace dataset. To assess the effectiveness of the proposed SCA-MNM model, comparisons were made with results reported in the literature.
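
The two ingredients of the abstract can be sketched roughly as follows. This is a minimal illustration under assumed choices (a logistic activation for the neuron, a linearly decreasing SCA control parameter `a`, and a one-row-per-candidate population layout), not the authors' implementation: the single multiplicative neuron multiplies its weighted, biased inputs instead of summing them, while the SCA moves each candidate weight vector toward the best solution found so far along a sine or cosine path.

```python
import numpy as np

def mnm_output(x, w, b):
    """Multiplicative neuron model: the single neuron multiplies the
    weighted-plus-biased inputs instead of summing them."""
    net = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-net))   # logistic activation (assumed)

def sca_step(positions, best, t, t_max, a=2.0, rng=np.random):
    """One Sine Cosine Algorithm update of a population of candidate
    weight/bias vectors (one row each) toward the best solution so far."""
    r1 = a - t * (a / t_max)            # shrinks linearly over iterations
    new = np.empty_like(positions)
    for i, x in enumerate(positions):
        r2 = rng.uniform(0.0, 2.0 * np.pi, size=x.shape)
        r3 = rng.uniform(0.0, 2.0, size=x.shape)
        r4 = rng.uniform(size=x.shape)
        # switch between the sine and cosine branch per component
        step = np.where(r4 < 0.5,
                        r1 * np.sin(r2) * np.abs(r3 * best - x),
                        r1 * np.cos(r2) * np.abs(r3 * best - x))
        new[i] = x + step
    return new
```

In a full SCA-MNM training loop, the fitness of each candidate would be the forecast error of `mnm_output` over the training series, and `best` would be updated after every `sca_step`.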

Similar Papers
  • Dissertation
  • Citations: 9
  • 10.24377/ljmu.t.00005879
Higher order neural networks for financial time series prediction
  • Jan 1, 2007
  • Rozaida Ghazali

Neural networks have been shown to be a promising tool for forecasting financial time series. Numerous studies and applications of neural networks in business have proven their advantage over classical methods that do not include artificial intelligence. What makes this use of neural networks so attractive to financial analysts and traders is that governments and companies rely on it to make investment and trading decisions. However, when the number of inputs to the model and the number of training examples become extremely large, the training procedure for ordinary neural network architectures becomes tremendously slow and unduly tedious. To overcome such time-consuming operations, this research work focuses on various Higher Order Neural Networks (HONNs), which have a single layer of learnable weights and therefore reduce the network's complexity. To predict the upcoming trends of univariate financial time series signals, three HONN models, the Pi-Sigma Neural Network, the Functional Link Neural Network and the Ridge Polynomial Neural Network, were used, as well as the Multilayer Perceptron. Furthermore, a novel neural network architecture was constructed that adds a feedback connection to the feedforward Ridge Polynomial Neural Network. The proposed network combines the properties of higher order and recurrent neural networks and is called the Dynamic Ridge Polynomial Neural Network (DRPNN). Extensive simulations covering ten financial time series were performed, and the forecasting performance of the feedforward HONN models, the Multilayer Perceptron and the novel DRPNN was compared. Simulation results indicate that the HONNs, particularly the DRPNN, in most cases demonstrated advantages in capturing chaotic movement in the financial signals, with an improvement in profit return over the other network models.
The relative superiority of the DRPNN over the other networks lies not just in its ability to attain a high profit return, but in modeling the training set with fast learning and convergence. The network trains quickly and shows considerable promise as a forecasting tool. It is concluded that the DRPNN is capable of forecasting financial markets and that individual investors could benefit from its use.

  • Single Book
  • Citations: 11
  • 10.1007/3-540-44869-1
Artificial Neural Nets Problem Solving Methods
  • Jan 1, 2003
  • José R Álvarez

  • Research Article
  • Citations: 44
  • 10.1016/j.sbspro.2013.12.593
Recurrent Multiplicative Neuron Model Artificial Neural Network for Non-linear Time Series Forecasting
  • Jan 1, 2014
  • Procedia - Social and Behavioral Sciences
  • Erol Egrioglu + 3 more

  • Research Article
  • Citations: 59
  • 10.1007/s11063-014-9342-0
Recurrent Multiplicative Neuron Model Artificial Neural Network for Non-linear Time Series Forecasting
  • Jan 28, 2014
  • Neural Processing Letters
  • Erol Egrioglu + 3 more

Artificial neural networks (ANN) have been widely used in recent years to model non-linear time series, since the ANN approach is flexible and does not require assumptions such as normality or linearity. An important problem in using ANNs for time series forecasting is determining the number of neurons in the hidden layer, and several approaches in the literature deal with it. A new ANN model, called the multiplicative neuron model (MNM), was suggested in the literature. The MNM has only one neuron in its hidden layer, so the problem of determining the number of hidden neurons is automatically solved when the MNM is employed; the MNM can also produce accurate forecasts for non-linear time series. ANN models for non-linear time series generally have autoregressive structures, since lagged variables of the time series are generally their inputs. On the other hand, it is well known that better forecasts for real-life time series can be obtained from models whose inputs include lagged error variables. In this study, a new recurrent multiplicative neuron neural network model is proposed for the first time. In the proposed method, lagged error variables are included in the model, and the problem of determining the number of hidden neurons is again avoided. The particle swarm optimization algorithm was used to train the proposed neural network model. To evaluate its performance, the proposed model was applied to a real-life time series, and its results were compared to those obtained from other methods. The proposed method was observed to have superior performance to the existing methods.
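
The error-feedback idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's code: the logistic activation, the zero-initialized error buffer, and the lag orders `p` (observations) and `q` (errors) are assumptions.

```python
import numpy as np

def recurrent_mnm_forecast(series, w, b, p=2, q=1):
    """One-step-ahead forecasts from a recurrent multiplicative neuron:
    the input vector holds p lagged observations and q lagged forecast
    errors, so each prediction error is fed back into the model."""
    errors = [0.0] * q                     # errors unknown before the start
    preds = []
    for t in range(p, len(series)):
        x = np.concatenate([series[t - p:t], errors[-q:]])
        net = np.prod(w * x + b)           # single multiplicative neuron
        yhat = 1.0 / (1.0 + np.exp(-net))  # logistic activation (assumed)
        preds.append(yhat)
        errors.append(series[t] - yhat)    # feed the new error back
    return np.array(preds)
```

With the series scaled into (0, 1), the weight vector `w` and bias vector `b` (each of length `p + q`) would be the quantities tuned by particle swarm optimization.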

  • Research Article
  • Citations: 9
  • 10.30630/joiv.3.3.281
Neural Network Techniques for Time Series Prediction: A Review
  • Aug 11, 2019
  • JOIV : International Journal on Informatics Visualization
  • Muhammad Faheem Mushtaq + 4 more

Time series prediction is important because many prediction problems, such as health prediction, climate change prediction and weather prediction, include a time component. To solve the time series prediction problem, various techniques have been developed over many years to enhance forecasting accuracy. This paper presents a review of the prediction of physical time series applications using neural network models. Neural Networks (NN) have emerged as an effective tool for forecasting time series. Moreover, to resolve problems related to time series data, there is a need for a network with a single layer of trainable weights, the Higher Order Neural Network (HONN), which can perform nonlinear input-output mapping. Developers are therefore focusing on the HONN, which has recently been considered as a way to broaden input representation spaces. The HONN model's functional mapping ability has been demonstrated on several time series problems, and it shows more benefits than conventional Artificial Neural Networks (ANN). The goal of this research is to make the reader aware of the HONN for physical time series prediction and to highlight some benefits of and challenges in using the HONN.

  • Research Article
  • Citations: 53
  • 10.1016/j.asoc.2016.08.029
Evaluation of co-evolutionary neural network architectures for time series prediction with mobile application in finance
  • Aug 31, 2016
  • Applied Soft Computing
  • Rohitash Chandra + 1 more

  • Research Article
  • Citations: 1
  • 10.14569/ijacsa.2020.0110337
A Modified Weight Optimization for Artificial Higher Order Neural Networks in Physical Time Series
  • Jan 1, 2020
  • International Journal of Advanced Computer Science and Applications
  • Noor Aida Husaini + 4 more

Many methods and approaches have been proposed for analyzing and forecasting time series data, and there are different Neural Network (NN) variations for specific tasks (e.g., Deep Learning, Recurrent Neural Networks, etc.). Time series forecasting is a crucial component of many important applications, from stock markets to energy load forecasts. Recently, Swarm Intelligence (SI) techniques, including Cuckoo Search (CS), have been established as among the most practical approaches for optimizing the parameters of time series forecasting models. Several modifications to the CS have been made, including the Modified Cuckoo Search (MCS), which adjusts the parameters of the current CS to improve algorithmic convergence rates. Motivated by the advantages of these MCSs, we use an enhanced MCS known as the Modified Cuckoo Search-Markov Chain Monte Carlo (MCS-MCMC) learning algorithm for weight optimization in Higher Order Neural Network (HONN) models. The Lévy flight function in the MCS is replaced with Markov Chain Monte Carlo (MCMC), since it reduces the complexity of generating the objective function. To prove that the MCS-MCMC is suitable for forecasting, its performance was compared with the standard Multilayer Perceptron (MLP), the standard Pi-Sigma Neural Network (PSNN), the Pi-Sigma Neural Network-Modified Cuckoo Search (PSNN-MCS), the Pi-Sigma Neural Network-Markov Chain Monte Carlo (PSNN-MCMC), the standard Functional Link Neural Network (FLNN), the Functional Link Neural Network-Modified Cuckoo Search (FLNN-MCS) and the Functional Link Neural Network-Markov Chain Monte Carlo (FLNN-MCMC) on various physical time series and benchmark datasets in terms of accuracy. The simulation results show that the HONN-based model combined with the MCS-MCMC learning algorithm improves accuracy by 0.007% to 0.079% on three physical time series datasets.
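
Replacing the Lévy flight with an MCMC move, as the abstract describes, might look roughly like the Metropolis-style random walk below. This is a generic sketch under assumed choices (Gaussian proposals and an exponential acceptance rule on the loss), not the MCS-MCMC algorithm itself.

```python
import numpy as np

def mcmc_walk_step(weights, loss, scale=0.1, rng=np.random.default_rng()):
    """Metropolis-style random-walk move for a weight vector: propose a
    Gaussian perturbation, always accept an improvement, and accept a
    worse move with probability exp(-(increase in loss))."""
    proposal = weights + rng.normal(0.0, scale, size=weights.shape)
    cur, new = loss(weights), loss(proposal)
    if new <= cur or rng.random() < np.exp(-(new - cur)):
        return proposal
    return weights
```

Iterating this step replaces the heavy-tailed Lévy jumps of standard Cuckoo Search with local random-walk exploration; in the paper's setting, `loss` would be the network's forecast error.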

  • Research Article
  • Citations: 66
  • 10.1016/j.eswa.2010.09.037
Dynamic Ridge Polynomial Neural Network: Forecasting the univariate non-stationary and stationary trading signals
  • Sep 19, 2010
  • Expert Systems with Applications
  • Rozaida Ghazali + 2 more

  • Conference Article
  • Citations: 6
  • 10.1109/itca52113.2020.00022
Research on Stock Price Prediction Method Based on Deep Learning
  • Dec 1, 2020
  • Dong Liu + 2 more

The future trend prediction of time series, represented by stock prices, has always been a key research topic in the field of data science, and the rapid development of deep learning has brought the analysis and prediction of time series into a new stage. Deep learning algorithms, represented by deep neural networks, can effectively overcome the shortcomings of traditional time series analysis methods. This paper first introduces the principle and structure of the recurrent neural network (RNN) model. To address the problems that the RNN's gradient vanishes easily and that it cannot effectively analyze long sequences, this paper introduces a gating structure into the RNN's hidden layer, constructing the long short-term memory (LSTM) neural network model. The LSTM model is applied to stock price prediction, and its results are compared with those of the RNN model. The experimental results show that the error of the LSTM model is smaller than that of the RNN model and that it has a better prediction effect; the LSTM neural network model is therefore more suitable for stock price prediction.
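
The gating structure the abstract refers to is the standard LSTM cell. A minimal NumPy forward pass is sketched below; the (f, i, o, g) stacking of the gate parameters is an assumed convention, and real models would use a trained deep-learning framework rather than this hand-rolled step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, U, bias):
    """One LSTM step: the gates control what the cell state forgets,
    admits, and exposes, which is what counters the vanishing gradient
    of a plain RNN.  W, U, bias each stack the 4 gates (f, i, o, g)."""
    z = W @ x + U @ h_prev + bias
    n = h_prev.size
    f = sigmoid(z[0 * n:1 * n])      # forget gate
    i = sigmoid(z[1 * n:2 * n])      # input gate
    o = sigmoid(z[2 * n:3 * n])      # output gate
    g = np.tanh(z[3 * n:4 * n])      # candidate cell update
    c = f * c_prev + i * g           # additive cell-state path
    h = o * np.tanh(c)               # exposed hidden state
    return h, c
```

The additive update of `c` is the key difference from a plain RNN: gradients can flow through the cell state without being repeatedly squashed.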

  • Research Article
  • Citations: 79
  • 10.1080/01431161.2021.1947540
Prediction of InSAR deformation time-series using a long short-term memory neural network
  • Jul 7, 2021
  • International Journal of Remote Sensing
  • Yi Chen + 6 more

The prediction of land subsidence is a crucial step for early warning of urban infrastructure damage and timely remediation. However, the performance of most mathematical and empirical prediction models is often compromised by their large number of parameters, complex operational processes and sparsely measured values. Traditional neural network models are currently popular and effective, but they cannot accurately discover the characteristic changes of time series data. In this paper, a long short-term memory (LSTM) neural network is proposed to predict land subsidence from time series Interferometric Synthetic Aperture Radar (InSAR). First, the Persistent Scatterer InSAR (PS-InSAR) technique was used to monitor time series land subsidence at Beijing Capital International Airport (BCIA) from 2005 to 2010, based on descending-orbit ENVISAT ASAR images. The results were compared with existing results to verify their reliability and then used to analyse the temporal and spatial characteristics of the BCIA's time series land subsidence. Based on the time series InSAR deformation data, the LSTM neural network was used to establish a prediction model, and its results were compared with those of the Multi-Layer Perceptron (MLP) and Recurrent Neural Network (RNN). The comparison showed that the LSTM network was more accurate than the MLP and RNN at the point scale (root mean square error 4.60 mm, mean absolute error 3.18 mm), and the correlation coefficients between the LSTM predictions and the real InSAR measurements in 2007 and 2008 were 0.93 and 0.96, respectively, indicating better prediction performance.
Eventually, based on the time series InSAR land subsidence data from 2006 to 2010, the LSTM neural network was applied to predict the BCIA's time series land subsidence in 2011. The results predicted that cumulative subsidence in September 2011 would reach a maximum of 350 mm. The LSTM neural network is therefore a potentially effective prediction method that can replace numerical or empirical models in the absence of detailed hydrogeological data, and its predictions can assist decision-making, early warning and hazard relief.

  • Book Chapter
  • Citations: 3
  • 10.1016/b978-012443880-4/50080-6
36 - Time-Series Prediction
  • Jan 1, 2002
  • Expert Systems, Six-Volume Set
  • Hisashi Shimodaira

  • Conference Article
  • Citations: 2
  • 10.1109/ccns50731.2020.00047
Robust Time Series Prediction with Missing Data Based on Deep Convolutional Neural Networks
  • Aug 1, 2020
  • Guancheng Zhou

Recurrent Neural Networks (RNN) are a class of neural networks for processing sequential data, so when predicting long-term sequence information, such as flight information within one year, the RNN is the usual choice. However, a general RNN cannot deal with temporal data involving a mixture of long-term and short-term patterns. We therefore adopt a newer deep learning framework, the long- and short-term time series network (LSTNet), which is composed of CNN, RNN, skip-RNN and autoregressive (AR) components. However, the existing LSTNet algorithm uses only the observed data to predict the time series, without considering the robustness of the overall structure, and the prediction error increases greatly when some of the data are missing. In this paper, we propose a novel deep learning framework based on the LSTNet network, the Missing-data LSTNet network (M-LSTNet), to solve the problem of time series prediction in the presence of missing data. Compared with the original framework, we add two new algorithms, M-Impute and M-ARIMA. M-Impute judges whether data are missing and compensates a discontinuous time series containing missing data into a continuous one; M-ARIMA then replaces the continuous series obtained by M-Impute with one predicted by ARIMA, improving the original LSTNet framework and mitigating the negative impact of missing data. Using the M-LSTNet framework, we evaluate predictions on the original time series, the time series containing missing data and the time series improved by our algorithms. The results show that our compensation algorithms obtain a better prediction effect and improve the stability of the whole deep learning network.

  • Research Article
  • Citations: 37
  • 10.1016/j.asoc.2022.108836
A Lyapunov-stability-based context-layered recurrent pi-sigma neural network for the identification of nonlinear systems
  • Apr 18, 2022
  • Applied Soft Computing
  • Rajesh Kumar

  • Research Article
  • 10.37591/rrjost.v7i3.1688
The Comparison in Time Series Forecasting of Air Traffic Data by Autoregressive Integrated Moving Average Model, Radial Basis Function and Elman Recurrent Neural Networks
  • Feb 13, 2019
  • R S Ramakrishna + 2 more

Nowadays, nonlinear time series and artificial neural network (ANN) models are used for forecasting in fields such as business and agriculture. Recent studies have shown that ANNs have been used successfully to forecast financial and agricultural data series. Classical methods for time series prediction, such as Box-Jenkins or ARIMA, assume a linear relationship between inputs and outputs. ANNs have the advantage of being able to approximate both linear and nonlinear structures in time series, although they may not handle both equally well. The autoregressive integrated moving average (ARIMA) model and two ANN models, radial basis function neural networks (RBFNN) and Elman recurrent neural networks (ERNN), were applied to Hyderabad airport traffic data. The data cover 15 years, from 2002–2003 to 2016–2017, of domestic and international passengers at the International Airport of Hyderabad, India. The performances of ARIMA, RBFNN and ERNN were compared on three measures: mean absolute error (MAE), mean absolute percentage error (MAPE) and root mean square error (RMSE). The results showed that RBFNN obtained the smallest MAE, MAPE and RMSE in both the modeling and forecasting processes; ranked in ascending order of performance, the three models were ARIMA, ERNN and RBFNN. Keywords: time series, forecasting, artificial neural networks, ARIMA models, radial basis function neural networks, Elman recurrent neural networks. Cite this article: R. Ramakrishna, Berhe Aregay, Tewodros Gebregergs. The Comparison in Time Series Forecasting of Air Traffic Data by Autoregressive Integrated Moving Average Model, Radial Basis Function and Elman Recurrent Neural Networks. Research & Reviews: Journal of Statistics. 2018; 7(3): 75–90.
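
The three error measures used in the comparison are standard; a minimal implementation might look like this (MAPE is expressed as a percentage, which is the usual convention and assumed here):

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def mape(y, yhat):
    """Mean absolute percentage error, in percent (y must be nonzero)."""
    return np.mean(np.abs((y - yhat) / y)) * 100.0

def rmse(y, yhat):
    """Root mean square error."""
    return np.sqrt(np.mean((y - yhat) ** 2))
```

For example, with actuals `[100, 200]` and forecasts `[110, 190]`, MAE and RMSE are both 10.0 and MAPE is 7.5%.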

  • Conference Article
  • Citations: 6
  • 10.1109/isda.2010.5687293
Proximity fuzzy clustering and its application to time series clustering and prediction
  • Nov 1, 2010
  • Daniel Graves + 1 more

A new time series prediction architecture is introduced using a fuzzy inference system (FIS) and a new framework for fuzzy relational clustering of time series. The FIS is used to predict future samples in a time series where recurrent neural networks comprise the consequents of the rules. The antecedents come in the form of fuzzy relations; however, previous approaches such as FCM build these antecedents in a Euclidean feature space which is very limiting and not well suited to the problem of clustering time series. Our approach to learning the antecedents of the rules involves clustering time series using proximity values, indicative of closeness. A variant of the classical correlation is used to measure proximity. Our objective is to investigate and evaluate the application of proximity fuzzy clustering in the domain of time series prediction by comparing its performance against several commonly used time series prediction models.
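
A correlation-based proximity of the kind the abstract describes (a "variant of the classical correlation" indicating closeness) could be sketched as below; the linear rescaling of the Pearson correlation to [0, 1] is an assumption for illustration, not necessarily the paper's exact measure.

```python
import numpy as np

def proximity(a, b):
    """Correlation-based proximity between two time series:
    1 means the series move together, 0 means perfect anti-correlation."""
    r = np.corrcoef(a, b)[0, 1]
    return (r + 1.0) / 2.0          # map correlation [-1, 1] -> [0, 1]
```

In the clustering step, a proximity matrix over all pairs of series would replace the Euclidean distances that FCM-style clustering normally uses.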
