Abstract

The echo state network (ESN) employs a large reservoir with sparsely and randomly connected internal nodes and trains only the output weights, which avoids the suboptimal solutions, exploding and vanishing gradients, high complexity and other disadvantages faced by traditional recurrent neural network (RNN) training. Owing to its outstanding adaptation to nonlinear dynamical systems, the ESN has been applied to a wide range of applications. However, in the era of Big Data, with an enormous amount of data being generated continuously every day, data in real applications are often stored in a distributed manner, so centralized ESN training is often technologically unsuitable. To meet the requirements of real-world Big Data applications, in this study we propose an algorithm, and its implementation, for distributed ESN training. The algorithm is based on the parallel particle swarm optimization (P-PSO) technique, and the implementation uses Spark, a well-known large-scale data processing framework. Four extremely large-scale datasets, including artificial benchmarks, real-world data and image data, are adopted to verify our framework on a scalable platform. Experimental results indicate that the proposed work performs well in the era of Big Data with respect to speed, accuracy and generalization capability.
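
To make the reservoir-computing idea concrete, the following is a minimal, hypothetical ESN sketch in Python/NumPy: the input and reservoir weights are fixed, sparse and random, and only the readout weights are learned (here via ridge regression for simplicity; the paper instead optimizes the readout with P-PSO). All sizes and hyperparameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal ESN sketch (illustrative assumptions, not the paper's configuration):
# a fixed, sparse, random reservoir; only the readout weights W_out are trained.
rng = np.random.default_rng(0)

n_in, n_res, n_out = 1, 300, 1          # assumed sizes
sparsity, spectral_radius = 0.05, 0.9   # assumed reservoir hyperparameters

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W[rng.random((n_res, n_res)) > sparsity] = 0.0            # sparse connectivity
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state property

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(U, Y, ridge=1e-6):
    """Only the output weights are learned, here via ridge regression."""
    X = run_reservoir(U)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T

# Toy example: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
U, Y = np.sin(t)[:-1, None], np.sin(t)[1:, None]
W_out = train_readout(U, Y)
pred = run_reservoir(U) @ W_out.T
```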

Highlights

  • As a mimic of the human brain, neural-like architectures have been proposed to learn, memorize, steer and game in a complex world [1]; one of the most famous and widely used structures is the feed-forward neural network [2,3,4]

  • The authors of References [22,39] believe the decentralized average consensus (DAC) method is an acceptable choice for the consistency problem in decentralized training, and propose an algorithm based on DAC, an efficient procedure for computing global averages over a network starting from local measurement vectors (a hedged sketch of this idea follows this list)

  • Recent works show that particle swarm optimization (PSO) offers fast computation, good global search capability and adaptive capability; its parallel variant, parallel particle swarm optimization (P-PSO), is suitable for large-scale mathematical optimization problems and is therefore used in this paper

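As a hedged illustration of the DAC idea referenced above (a minimal sketch under assumed settings, not the specific algorithm of References [22,39]): each node starts from its local measurement vector and repeatedly mixes values with its neighbours using a doubly stochastic weight matrix, so every node converges to the global average without any central coordinator.

```python
import numpy as np

# Hypothetical DAC sketch: nodes on a ring repeatedly average with their
# neighbours via doubly stochastic weights; every local vector converges
# to the global mean without a central coordinator.
rng = np.random.default_rng(1)
n_nodes, dim = 8, 4                             # assumed network size
local = rng.normal(size=(n_nodes, dim))         # local measurement vectors
true_mean = local.mean(axis=0)

# Mixing matrix for a ring topology (symmetric, doubly stochastic).
P = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        P[i, j] = 1.0 / 3.0
    P[i, i] = 1.0 - P[i].sum()

x = local.copy()
for _ in range(200):                            # consensus iterations
    x = P @ x                                   # each node mixes neighbour values

print(np.allclose(x, true_mean, atol=1e-6))     # True: every node holds the average
```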

Summary

Introduction

As a mimic of the human brain, neural-like architectures have been proposed to learn, memorize, steer and game in a complex world [1]; one of the most famous and widely used structures is the feed-forward neural network [2,3,4]. Since the related weight-update algorithms are mostly based on gradient descent, they unavoidably lead to suboptimal solutions [6]. Beyond this disadvantage, the difficulties of training recurrent neural networks (RNNs) also include slow convergence, high complexity [6,7] and the most well-known problem: exploding and vanishing gradients [8]. We propose a distributed training algorithm and platform for the ESN. Our contributions are: proposing a parallel optimization algorithm based on PSO to handle the distributed training of the ESN; constructing a distributed and scalable platform based on Spark to implement the proposed algorithm; and verifying the proposed platform on extremely large-scale datasets that include artificial benchmarks, real-world data and image data.
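
The following is a hedged sketch of how PSO-based distributed ESN training might be laid out on Spark: the driver keeps the swarm state and performs the standard PSO updates, while Spark evaluates the fitness of all particles in parallel. This structure, the placeholder objective, and all sizes and coefficients are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from pyspark import SparkContext

# Hedged P-PSO-on-Spark sketch (assumed structure, not the paper's exact design):
# driver-side swarm updates, Spark-parallel fitness evaluation per particle.
sc = SparkContext("local[*]", "p-pso-sketch")

DIM, SWARM, ITERS = 20, 64, 50                  # assumed problem/swarm sizes
rng = np.random.default_rng(2)

def fitness(w):
    # Placeholder objective; in the paper this would be the ESN error obtained
    # with candidate output weights w on (a partition of) the training data.
    w = np.asarray(w)
    return float(np.sum((w - 0.5) ** 2))

pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_f = pos.copy(), np.full(SWARM, np.inf)
gbest, gbest_f = None, np.inf

for _ in range(ITERS):
    # Distributed fitness evaluation: one Spark task per particle.
    fs = np.array(sc.parallelize(pos.tolist(), SWARM).map(fitness).collect())

    improved = fs < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], fs[improved]
    if fs.min() < gbest_f:
        gbest_f, gbest = fs.min(), pos[fs.argmin()].copy()

    # Standard PSO velocity/position update on the driver.
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel

print(gbest_f)
sc.stop()
```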

Improvements in ESN
Decentralized Training Algorithm
Distributed Framework
Research on CNN
Basic Algorithms
Distributed Algorithm
Spark-Based Platform
Training
Setup
Datasets
Parameters
Analysis
Training distribution
Testing Error
Number
Number of Iterations
Findings
Discussion and Conclusions