Abstract

• To reestablish the randomly initialized input weights of an ESN, a single-layer ESN with a global reversible autoencoder (GRAE) algorithm is proposed, ensuring that the outputs of GRAE are strongly correlated with the input data.
• The reservoir layer with a reversible function is calculated by pulling the ESN output back and injecting it into the reservoir layer. Feature learning is thus enriched by additional information, which results in better performance.
• GRAE solves a least-squares problem via ridge regression, which is entirely different from existing back-propagation (BP) based AEs. The training speed of GRAE is therefore many times faster than that of BP-based AEs.
• In existing ESN autoencoder methods, all weights in the reservoir layer are generated randomly. Here, a simple cycle connection is used in the reservoir layer so that the reservoir weights can be generated deterministically.

An echo state network (ESN) can provide an efficient dynamic solution for time series prediction problems. However, in most cases, ESN models are applied to prediction rather than classification, and their application to time series classification (TSC) problems has yet to be fully studied. Moreover, a conventional ESN is unlikely to be optimal, because its input and reservoir weights are generated randomly. Randomly generating all layer weights is inappropriate, since a purely random layer may destroy useful features. To overcome this disadvantage, this study provides a new input weight establishment framework for ESN based on autoencoder (AE) theory for TSC tasks. A global reversible AE (GRAE) algorithm is proposed to reestablish the randomly initialized input weights of the ESN. In existing ESN-AEs, the output weights obtained in the encoding process are directly reused as the initial input weights.
By contrast, in GRAE, the reservoir layer with a reversible activation function is calculated by pulling the decoding-layer output back and injecting it into the reservoir layer. Feature learning is thus enriched by additional information, which results in improved performance. The current weights of the encoding layer are iteratively replaced by those of the decoding layer to ensure that the outputs of the GRAE are strongly correlated with the input data. Visualization analyses of the input weights and experiments on a large collection of UCR time series datasets indicate that the proposed GRAE method considerably improves the original two-layer ESN-based classifiers, and that the proposed GRAE-ESN classifier outperforms traditional state-of-the-art TSC classifiers. Furthermore, the proposed method provides comparable performance with considerably faster training than three deep learning classifiers.
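The abstract rests on two standard ESN ingredients: a randomly initialized reservoir whose states are driven by the input sequence, and readout weights obtained by ridge regression (a closed-form least-squares solve, as opposed to BP training). The following is a minimal sketch of that baseline pipeline, not the authors' GRAE implementation; all dimensions, weight scalings, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the paper)
n_in, n_res, n_out, T = 3, 50, 2, 200
lam = 1e-6  # ridge regularization strength

# Randomly initialized input and reservoir weights -- the baseline
# that GRAE-style input weight reestablishment aims to improve upon
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Rescale so the spectral radius is below 1 (echo state property)
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def run_reservoir(U):
    """Collect reservoir states x(t) = tanh(W_in u(t) + W_res x(t-1))."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x)
    return np.array(states)  # shape (T, n_res)

# Toy input and target sequences
U = rng.standard_normal((T, n_in))
Y = rng.standard_normal((T, n_out))

S = run_reservoir(U)
# Ridge regression in closed form: W_out = (S^T S + lam*I)^{-1} S^T Y,
# solved as a linear system rather than by explicit matrix inversion
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ Y)
Y_hat = S @ W_out
```

Only `W_out` is trained here; `W_in` and `W_res` stay fixed after random initialization, which is precisely the limitation the abstract attributes to conventional ESNs.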
