Abstract

We describe a new technique that minimizes the number of neurons in the hidden layer of a random recurrent neural network (rRNN) for time series prediction. By merging Takens-based attractor reconstruction methods with machine learning, we identify a mechanism for feature extraction that can be leveraged to reduce network size. We obtain criteria specific to the prediction task at hand and derive the scaling law of the prediction error. We demonstrate the consequences of our theory by designing a Takens-inspired hybrid processor, which extends a rRNN with an a priori designed external delay memory; the hybrid architecture therefore comprises both real and virtual nodes. Via this symbiosis, we demonstrate the performance of the hybrid processor by stabilizing an arrhythmic neural model. Thanks to the design rules obtained, we can reduce the size of the stabilizing neural network by a factor of 15 with respect to a standard system.
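The Takens-based attractor reconstruction mentioned above rests on delay embedding: a scalar time series is lifted into a higher-dimensional space by stacking time-delayed copies of itself, which recovers features of the underlying attractor. As a minimal illustration of the idea (not the authors' specific method; the function name, delays, and test signal are our own choices), a delay embedding can be built as follows:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding of a scalar time series.

    Row t of the result is the delay vector
    [x[t], x[t + tau], ..., x[t + (dim - 1) * tau]].
    """
    n = len(x) - (dim - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

# Illustrative signal: a simple oscillation standing in for a measured series
x = np.sin(0.1 * np.arange(1000))
emb = delay_embed(x, dim=3, tau=7)  # shape (986, 3)
```

Each row of `emb` is one reconstructed state; for a suitable embedding dimension and delay, trajectories in this space are diffeomorphic to the original attractor, which is what makes the embedded coordinates usable as features for prediction.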

Highlights

  • Artificial neural networks (ANNs) are systems prominently used in computational science as well as in investigations of biological neural systems

  • We describe a technique that minimizes the number of neurons in the hidden layer of a random recurrent neural network for time series prediction

  • We introduce a technique that allows us to identify feature representations of the input information in the random recurrent neural network's (rRNN's) high-dimensional state space, which are linked to good prediction performance



Introduction

Artificial neural networks (ANNs) are systems prominently used in computational science as well as in investigations of biological neural systems. Recurrent neural networks (RNNs) have been used to solve highly complex tasks that pose problems for other classical computational approaches [2,3,4,5,6]. Their recurrent architecture allows the generation of internal dynamics, so RNNs can be studied using principles of dynamical systems theory. Optimal computational performance is often achieved when the network is in a stable equilibrium state, yet close to criticality [7,8].
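A common way to realize a random RNN operating near criticality is the echo-state approach: a fixed random recurrent layer is rescaled so its spectral radius sits just below 1, and only a linear readout is trained. The sketch below is our own minimal illustration of that idea (reservoir size, scaling, and the sine test signal are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # reservoir (hidden-layer) size, chosen for illustration

# Fixed random recurrent weights, rescaled to spectral radius 0.95:
# stable dynamics, but close to the edge of instability ("criticality")
W = rng.normal(size=(N, N))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, size=N)  # random input weights

# Drive the reservoir with a scalar input series
u = np.sin(0.2 * np.arange(500))
states = np.zeros((len(u), N))
s = np.zeros(N)
for t, ut in enumerate(u):
    s = np.tanh(W @ s + w_in * ut)  # recurrent state update
    states[t] = s

# Train only a linear readout (ridge regression) for one-step prediction
washout = 100  # discard initial transient states
X, y = states[washout:-1], u[washout + 1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ w_out
```

Because the recurrent weights stay fixed, training reduces to a linear least-squares problem, while the near-critical spectral radius gives the reservoir long-lasting internal dynamics, i.e. memory of past inputs.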

