Abstract

Learning and mimicking how the brain processes information has been a major research challenge for decades. Despite these efforts, little is known about how we encode, maintain, and retrieve information. One hypothesis posits that transient states are generated in our intricate network of neurons when the brain is stimulated by a sensory input. Based on this idea, powerful computational schemes have been developed. These schemes, known as machine-learning techniques, include artificial neural networks, support vector machines, and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a delayed self-feedback loop. This design is not intended to represent an actual brain circuit; rather, it aims at identifying the minimum ingredients needed to build an efficient information processor. Such a simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that the nonlinear transient responses of a simple dynamical system enable the processing of information with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, in doing so, learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of the projection onto the required high-dimensional state space. Beyond its relevance for the understanding of basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches.
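The highlights below refer to Equation (1), which is not reproduced on this page. In the delay-based reservoir computing literature (e.g., Appeltant et al., 2011), the single-node dynamics are commonly written in the form below; take this as a representative sketch of the class of models discussed, not a verbatim copy of the paper's Equation (1):

$$ T\,\dot{x}(t) = -x(t) + f\bigl(x(t-\tau) + \gamma\,J(t)\bigr), $$

where $x(t)$ is the state of the nonlinear node, $T$ its characteristic response time, $\tau$ the delay of the feedback loop, $J(t)$ the time-multiplexed (masked) input, $\gamma$ the input scaling, and $f$ the nonlinear function; in opto-electronic implementations $f$ is often of the form $f(u) = \eta\,\sin^{2}(u + \phi)$.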

Highlights

  • To illustrate the nonlinear transient dynamics in delay-based reservoir computing, we present a simple example in which a sin² function is chosen as the nonlinear function f in Equation (1) (a minimal code sketch follows these highlights)

  • Role of the nonlinearities: reservoir computing based on a single dynamical node with delay has been realized in different experimental implementations, including electronic (Appeltant et al., 2011) and opto-electronic (Larger et al., 2012; Paquot et al., 2012) systems

  • Recurrent artificial neural networks have been designed and used to solve certain tasks that are inherently hard for traditional computers
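To make the sin² highlight concrete, the following minimal Python sketch simulates a delay-based reservoir built from a single node with a sin² nonlinearity and delayed feedback, using the time-multiplexing mask that defines the virtual nodes. The parameter values, the binary mask, and the Euler discretization are illustrative assumptions, not the exact setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N     = 50    # number of virtual nodes along the delay line (assumption)
theta = 0.2   # virtual-node separation, in units of the node response time T
gamma = 0.5   # input scaling (assumption)
eta   = 0.9   # feedback strength (assumption)
phi   = 0.2   # operating-point offset of the nonlinearity (assumption)

def f(u):
    """sin^2 nonlinearity, as used in opto-electronic implementations."""
    return eta * np.sin(u + phi) ** 2

# Random binary mask: spreads each input sample over one delay span.
mask = rng.choice([-1.0, 1.0], size=N)

def reservoir_states(inputs):
    """Virtual-node states for each input sample.

    Euler discretization (step theta, T normalized to 1) of
        dx/dt = -x(t) + f(x(t - tau) + J(t)),   with tau = N * theta.
    """
    x = np.zeros(N)                      # states over the previous delay span
    states = np.empty((len(inputs), N))
    for k, u in enumerate(inputs):
        J = gamma * mask * u             # masked, time-multiplexed input
        new_x = np.empty(N)
        prev = x[-1]                     # node state one Euler step earlier
        for i in range(N):
            prev = prev + theta * (-prev + f(x[i] + J[i]))
            new_x[i] = prev
        x = new_x
        states[k] = x
    return states
```

Each column of the returned state matrix plays the role of one neuron of a conventional reservoir. A common simplification assumes the node relaxes much faster than the virtual-node separation, in which case each virtual node simply takes the value f(x[i] + J[i]); the Euler loop above keeps the inertia of the node explicit.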

Introduction

Introduction to Reservoir Computing

Recurrent neural networks (RNNs), characterized by the existence of closed loops, are ubiquitous in the brain. RNNs underlie a family of machine-learning approaches inspired by the way our brain seems to process information. This neuro-inspired approach relies on the property that such systems are able to learn from examples and to process information within a temporal and spatial context (Buonomano and Maass, 2009). RNNs have proven very powerful in solving certain problems that are computationally hard for standard computers. The optimization or training procedure, however, typically involves adapting all connection weights in the network, which is computationally expensive; reservoir computing circumvents this difficulty by keeping the recurrent network fixed and training only the weights of a linear readout.
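Continuing the sketch above (and hence inheriting its hypothetical `reservoir_states` function), training reduces to a linear ridge regression from the virtual-node states to the desired output; only these readout weights are learned, while the delay system itself remains untouched. The toy task and regularization strength below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 0.5, size=1000)      # hypothetical random input sequence
target = np.roll(u, 1) ** 2               # toy task: square of the previous input

X = reservoir_states(u)                   # (samples, N) matrix of virtual-node states
X = np.hstack([X, np.ones((len(u), 1))])  # append a bias column

lam = 1e-6                                # ridge regularization strength (assumption)
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)

prediction = X @ W
nmse = np.mean((prediction - target) ** 2) / np.var(target)
print(f"training NMSE: {nmse:.3f}")
```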
