Abstract

Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called “reservoirs.” To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires. In this study, we propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step. To elucidate the mechanism of model-size reduction, the proposed methods are analyzed based on the information processing capacity proposed by Dambre et al. (Sci Rep 2:514, 2012). In addition, we evaluate the effectiveness of the proposed methods on two time-series prediction tasks: the generalized Hénon map and NARMA. On these tasks, the proposed methods reduced the size of the reservoir to as little as one-tenth of the original without a substantial increase in regression error.
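As a rough illustration of the idea of feeding past reservoir states to the output layer, the sketch below augments a small echo state network’s readout with its P previous states, so a reservoir of N_res nodes exposes N_res(P + 1) features to the linear readout. All concrete choices here (the network form, weight scales, the toy delay-2 target, and the ridge parameter) are illustrative assumptions, not the paper’s exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N_res, P, T = 20, 4, 2000   # hypothetical sizes for illustration

# Echo state network with fixed random weights (scales are assumptions).
W_in = rng.uniform(-0.1, 0.1, size=N_res)
W = rng.normal(size=(N_res, N_res))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = rng.uniform(-1, 1, size=T)   # driving input
x = np.zeros((T, N_res))         # reservoir states
for t in range(1, T):
    x[t] = np.tanh(W @ x[t - 1] + W_in * u[t])

# Readout features: the current state concatenated with the P past states,
# so an N_res-node reservoir exposes N_res * (P + 1) features.
rows = np.arange(P, T)
X = np.hstack([x[rows - k] for k in range(P + 1)])

# Train the linear readout by ridge regression on a toy target, u[t - 2]
# (a stand-in for the paper's Hénon/NARMA tasks).
y = u[rows - 2]
beta = 1e-4
W_out = np.linalg.solve(X.T @ X + beta * np.eye(X.shape[1]), X.T @ y)
nmse = np.mean((y - X @ W_out) ** 2) / np.var(y)
```

Only the readout `W_out` is trained; the reservoir weights stay fixed, which is what keeps RC training inexpensive compared with backpropagation through time.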

Highlights

  • Reducing the computational resources required by reservoir computing (RC), in which high-dimensional “reservoirs” such as random networks of neurons learn complex time series rapidly, is essential for implementing RC in edge computing

  • We proposed three methods to reduce the size of an RC reservoir without impairing performance

  • We found that the value of the total information processing capacity (IPC) almost reaches Nres(P + 1) using the proposed methods, whereas the importance of their components changes drastically

Introduction

Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called “reservoirs.” The standard learning algorithms for recurrent neural networks, including backpropagation through time [2] and its variants [3], require large computational resources. These computational burdens often hinder real-world applications, especially when computing is performed near end users or data sources instead of in data centers. In RC, by contrast, the recurrent connections are fixed and only the output layer is trained. Owing to this simplicity, numerous types of implementation employing physical systems, such as photonics [32–34], spintronics [35], mechanical oscillators [36], and analog integrated electronic circuits [37,38], have been demonstrated [39]. Although these implementations have exhibited the superiority of RC in computational speed and energy efficiency, the maximum size of the reservoir, and in turn the forecasting accuracy, is limited by the physical size of the hardware.
