Abstract

Large-scale neural networks have been widely used in data processing applications. As a special type of neural network, the recurrent neural network (RNN) is equipped with additional recurrent connections. This unique architecture enables the RNN to memorize previously processed information and makes it an expressive model for nonlinear sequence processing tasks. However, its high computational complexity makes an RNN difficult to train effectively. In this paper, we develop parallel approaches for a particular type of RNN, known as the echo state network (ESN). We first decompose the training problem of a large-scale ESN into a number of smaller subproblems. Next, we introduce the alternating direction method of multipliers (ADMM) to solve the resulting optimization problem. Then, two parallel algorithms are proposed to train the ESN across the learning agents, with communication restricted to fusing the local solutions into an overall decision. Experimental results on the prediction of the Mackey-Glass chaotic time series demonstrate that the proposed algorithms for a large-scale ESN achieve accuracy comparable to that of a single ESN trained on the same dataset.
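To make the decomposition concrete, the sketch below shows one standard way such a scheme can be set up: the ESN readout is a regularized least-squares problem, the training data are split across N agents, and a consensus-ADMM loop alternates parallelizable local solves with a global averaging (fusion) step. This is a minimal illustration under assumed settings, not the authors' exact algorithm; the reservoir construction, the toy input signal, and all parameter names (lam, rho, N, etc.) are illustrative assumptions.

```python
# Minimal sketch: consensus-ADMM training of an ESN readout across N agents.
# Assumed, illustrative setup -- not the paper's exact method or parameters.
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(u, n_res=100, spec_rad=0.9):
    """Drive a simple tanh reservoir with scalar input sequence u."""
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spec_rad / max(abs(np.linalg.eigvals(W)))  # rescale spectral radius
    x = np.zeros(n_res)
    X = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W_in * ut + W @ x)
        X[t] = x
    return X

# Toy one-step-ahead prediction task (a stand-in for Mackey-Glass data).
u = np.sin(0.2 * np.arange(2001))
H, y = reservoir_states(u[:-1]), u[1:]

# Decompose the training set across N learning agents.
N, lam, rho = 4, 1e-2, 1.0            # agents, ridge weight, ADMM penalty
H_parts = np.array_split(H, N)
y_parts = np.array_split(y, N)

n = H.shape[1]
Z = np.zeros(n)                       # global consensus readout
W = np.zeros((N, n))                  # per-agent local readouts
U = np.zeros((N, n))                  # scaled dual variables

for _ in range(50):                   # ADMM iterations
    # Local solves: each agent fits its own data shard; these run in parallel.
    for i in range(N):
        A = H_parts[i].T @ H_parts[i] + rho * np.eye(n)
        b = H_parts[i].T @ y_parts[i] + rho * (Z - U[i])
        W[i] = np.linalg.solve(A, b)
    # Fusion step: average local solutions into the regularized consensus Z.
    Z = rho * (W + U).sum(axis=0) / (lam + N * rho)
    # Dual update drives the local readouts toward agreement with Z.
    U += W - Z

print("train MSE:", np.mean((H @ Z - y) ** 2))
```

Only W[i] + U[i] needs to be communicated for the fusion step, which is why the per-agent solves dominate the cost and parallelize cleanly across agents.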
