Abstract

Backpropagation algorithms on recurrent artificial neural networks require unfolding accumulated states over time. These states must be kept in memory for a task-dependent and potentially long period, which is costly for edge devices. This paper uses the reservoir computing paradigm, in which an untrained recurrent pool of neurons serves as a preprocessor for temporally structured inputs and requires only a limited number of training samples. Such reservoirs usually require either extensive fine-tuning or neuroplasticity. We propose a new local and unsupervised plasticity rule named P-CRITICAL, designed for automatic reservoir tuning, that translates well to physical and digital neuromorphic processors. The spiking neuronal architecture is simulated on Intel's Loihi research chip and on a conventional CPU. Comparisons on state-of-the-art machine learning datasets are given, and improved performance on visual and auditory tasks is observed. There is no need to tune the reservoir a priori when switching between tasks, making this approach suitable for physical implementations. Furthermore, such plastic behaviour of the reservoir is key to end-to-end, energy-efficient, neuromorphic-based machine learning on edge devices.
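For readers unfamiliar with the reservoir computing paradigm summarized above, the following is a minimal rate-based sketch of a reservoir with fixed random weights driven by a temporally structured input. It is not the paper's P-CRITICAL rule or its Loihi implementation; the dimensions, leak rate, and spectral-radius scaling are illustrative assumptions, and in practice only a linear readout on the recorded states would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
n_in, n_res = 16, 200

# Fixed (untrained) random weights: the reservoir itself stays frozen.
W_in = rng.normal(0.0, 1.0, (n_res, n_in))
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
# Scale the recurrent weights so the spectral radius is below 1,
# a common heuristic for stable reservoir dynamics.
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def run_reservoir(inputs, leak=0.3):
    """Drive the reservoir with a (T, n_in) input sequence and
    return the (T, n_res) trajectory of internal states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        # Leaky-integrator update: no weight is learned here.
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Example: preprocess a random temporally structured input.
states = run_reservoir(rng.normal(size=(50, n_in)))
print(states.shape)  # (50, 200)
```

A plasticity rule such as the one proposed in the paper would adjust the recurrent weights locally during operation instead of relying on the fixed spectral-radius scaling used in this sketch.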
