Abstract

This paper develops an incremental randomized learning method for an extended Echo State Network ($\varphi$-ESN), whose reservoir is augmented with a random static projection, to better cope with nonlinear time series modelling problems. Although the typical $\varphi$-ESN can effectively improve the prediction performance of the network by adding a random static nonlinear hidden layer, the input weights and biases of the hidden neurons in this extended layer are randomly assigned, so some neurons contribute little to reducing the model error, resulting in high model complexity, poor generalization, and large performance fluctuations. A constructive incremental randomized learning method, termed OLS-$\varphi$-ESN, is proposed for generating the nodes of the extended static nonlinear hidden layer. A two-step training paradigm is adopted: the input weights and biases of the hidden neurons in the extended static layer are randomly assigned under a supervisory mechanism, and the output weights are then solved by the least squares algorithm. The supervisory mechanism is designed on the basis of an Orthogonal Least Squares (OLS) search algorithm, and an adaptive threshold is set to better control the compactness of the generated learner model. Simulation results on both nonlinear time series prediction and system identification tasks indicate the advantages of the proposed OLS-$\varphi$-ESN in terms of a more compact model and sound generalization.
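The two-step paradigm described above (random assignment of a static nonlinear hidden layer, followed by a least-squares solve for the output weights) can be sketched in a few lines. This is an illustrative toy example only, not the paper's OLS-$\varphi$-ESN: the supervisory mechanism and adaptive threshold are omitted, and the layer size, weight ranges, and toy target function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): approximate y = sin(3x).
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * x).ravel()

# Step 1: randomly assign input weights and biases of a static nonlinear layer.
n_hidden = 50
W_in = rng.uniform(-3.0, 3.0, size=(1, n_hidden))
b = rng.uniform(-3.0, 3.0, size=n_hidden)
H = np.tanh(x @ W_in + b)  # random static nonlinear projection of the input

# Step 2: solve the output weights by least squares (the only trained part).
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

rmse = np.sqrt(np.mean((H @ W_out - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Because only the linear output weights are learned, training reduces to a single linear least-squares problem, which is what makes randomized learners of this kind so cheap to train.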

Highlights

  • The prediction and modeling of nonlinear time series has always been a very important research topic in the field of machine learning

  • By analyzing the experimental results of two benchmark nonlinear time series modeling tasks, we find that incremental randomized learning algorithms can improve the prediction performance and generalization ability of randomized learner models

  • We find that the incremental randomized learning method based on Orthogonal Least Squares (OLS) achieves better prediction performance than the one based on the Orthogonal Matching Pursuit (OMP) algorithm

Introduction

The prediction and modeling of nonlinear time series has always been a very important research topic in the field of machine learning. The Recurrent Neural Network (RNN) has attracted extensive attention due to its good memory and nonlinear mapping capabilities, making it an ideal tool for dealing with nonlinear time series prediction and modeling problems. However, the traditional RNN has some problems such as a complex training process, slow convergence, local minima, and exploding and vanishing gradients, which limit its application in practical scenarios. Reservoir Computing (RC), of which the Echo State Network (ESN) is a representative model, alleviates these problems. The distinguishing feature of RC is that, as long as certain properties are met, the recurrent part of the network can be randomly generated without training; the training process is limited to the non-recurrent output part of the network, which results in a very simple and efficient RNN design.
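To make the RC idea concrete, the following is a minimal ESN sketch: the recurrent reservoir is generated randomly and never trained, and only the linear readout is fitted by least squares. This is a hedged illustration, not the paper's setup; the reservoir size, weight ranges, spectral-radius scaling, washout length, and toy sine task are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task (assumed): one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(400))

# The recurrent reservoir is generated randomly and left untrained.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius below 1

# Drive the reservoir with the input sequence and collect its states.
x = np.zeros(n_res)
states = []
for ut in u[:-1]:
    x = np.tanh(W_in * ut + W_res @ x)
    states.append(x.copy())
H = np.array(states)

# Only the linear readout is trained, by least squares,
# after discarding an initial transient (washout).
washout = 50
W_out, *_ = np.linalg.lstsq(H[washout:], u[washout + 1:], rcond=None)
rmse = np.sqrt(np.mean((H[washout:] @ W_out - u[washout + 1:]) ** 2))
print(f"one-step prediction RMSE: {rmse:.4f}")
```

Scaling the reservoir's spectral radius below 1 is a common heuristic aimed at the echo state property, i.e. the requirement that the reservoir state asymptotically forgets its initial condition.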
