Abstract

Reservoir computing was achieved by constructing a network of virtual nodes multiplexed in time and sharing a single silicon beam exhibiting a classical Duffing non-linearity as the source of non-linearity. The delay-coupled electromechanical system performed well on time series classification tasks, with error rates below 0.1% for the 1st, 2nd, and 3rd order parity benchmarks and an accuracy of (78±2)% for the TI-46 spoken word recognition benchmark. As a first demonstration of reservoir computing using a non-linear mass-spring system in MEMS, this result paves the way to the creation of a new class of compact devices combining the functions of sensing and computing.
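
To make the parity benchmark and the time-multiplexing scheme concrete, the sketch below shows how an n-th order parity target is typically constructed and how a fixed random mask stretches each scalar input sample over a set of virtual nodes. It is a minimal numerical illustration only: the node count, mask values, and random binary input are assumptions chosen for demonstration, not parameters of the experimental device described in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def parity_targets(bits: np.ndarray, order: int) -> np.ndarray:
    """n-th order parity benchmark target (standard definition, assumed here):
    y[t] = XOR of the last `order` input bits. The first (order - 1) samples
    have no complete window and are dropped."""
    windows = np.lib.stride_tricks.sliding_window_view(bits, order)
    return windows.sum(axis=1) % 2

# Random binary input stream, as typically used for this benchmark.
bits = rng.integers(0, 2, size=1000)
y3 = parity_targets(bits, order=3)            # 3rd-order parity target

# Time multiplexing: each scalar input sample is stretched into N_VIRT
# drive values by a fixed random mask, so the N_VIRT time slots within one
# delay period play the role of the virtual nodes of the reservoir.
N_VIRT = 50                                   # illustrative node count
mask = rng.choice([-1.0, 1.0], size=N_VIRT)   # illustrative mask values
drive = np.outer(bits, mask).ravel()          # shape: (len(bits) * N_VIRT,)
```

In delay-based reservoirs of this kind, the finite response time of the single physical non-linearity is what couples neighbouring virtual nodes, so the masked drive sequence effectively probes a network of interacting states.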



Introduction

The discovery of faster numerical methods to adjust the parameters of artificial neural networks (aka training) has led in the last decade to a resurgence of interest in using these networks to implement complex functions, which are constructed from a finite (albeit large) training set of examples, and which can exhibit impressive generalization capabilities when applied to inputs which were not part of the training set. Recurrent neural networks (RNN) are especially efficient at modeling time-dependent data, as they feed information from the “top” parts of the network at a given iteration to “lower” parts of the network at the next iteration. They are universal computers (in the sense described in Ref. 4) but are considered difficult to train. Training is greatly simplified when the RNN form a so-called reservoir computer (RC), in which case the weights of the recurrent network are initialized randomly and are left untrained, while the weights of a simple output layer are adjusted to train the network for a desired output. The concept of RC has led to interesting numerical applications but, more importantly, it has been the trigger for a variety of hardware implementations of computing systems with functionalities similar to those of artificial neural networks. In these hardware implementations, the dynamics of a physical system are (often) left untrained and provide memory and non-linear computing capabilities to a simple, trainable output system. Hardware RC results have been reported for optical systems, mechanical devices, memristor arrays, and spintronic devices, for instance.
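
As a point of reference for the training procedure described above, the following is a minimal software sketch of a reservoir computer in the echo-state-network style: the recurrent and input weights are drawn at random and kept fixed, and only a linear readout is fitted, here by ridge regression. The reservoir size, tanh non-linearity, spectral-radius scaling, and the toy one-step-delay task are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Minimal reservoir computer (echo-state-network style sketch) ---
# The recurrent weights are random and never trained; only the linear
# readout is fitted. All sizes and scalings below are illustrative.
N_RES, N_IN = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(N_RES, N_IN))        # fixed input weights
W_res = rng.normal(0, 1, size=(N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def collect_states(u: np.ndarray) -> np.ndarray:
    """Run the untrained reservoir over an input sequence u of shape (T, N_IN)
    and return the state trajectory of shape (T, N_RES)."""
    x = np.zeros(N_RES)
    X = np.empty((len(u), N_RES))
    for t, u_t in enumerate(u):
        x = np.tanh(W_res @ x + W_in @ u_t)   # fixed, non-linear dynamics
        X[t] = x
    return X

def train_readout(X: np.ndarray, y: np.ndarray, reg: float = 1e-6) -> np.ndarray:
    """Fit the only trainable part, a linear readout, by ridge regression."""
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)

# Usage sketch: learn to reproduce the input delayed by one step.
u = rng.uniform(-1, 1, size=(2000, N_IN))
X = collect_states(u)
w_out = train_readout(X[1:], u[:-1, 0])       # target: u(t-1)
y_pred = X[1:] @ w_out
```

The same separation applies to hardware reservoirs: the physical dynamics play the role of the fixed random network, and only the readout weights applied to measured states are adjusted during training.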

