Abstract

Neuromorphic engineers develop event-based spiking neural networks (SNNs) in hardware. These SNNs more closely resemble the dynamics of biological neurons than conventional artificial neural networks and achieve higher efficiency thanks to the event-based, asynchronous nature of their processing. Learning in hardware SNNs, however, is a more challenging task. Conventional supervised learning methods cannot be applied directly to SNNs because of the non-differentiable, event-based nature of their activation. For this reason, learning in SNNs is currently an active research topic. Reinforcement learning (RL) is a particularly promising learning method for neuromorphic implementation, especially in the control of autonomous agents. The focus of this work is an SNN realization of a bio-inspired RL model. In particular, we propose a new digital, multiplier-less hardware implementation of an SNN with RL capability. We show how the proposed network can learn stimulus-response associations in a context-dependent task. The task is inspired by biological experiments that study RL in animals. The architecture is described using the standard digital design flow and uses power- and space-efficient cores. The proposed hardware SNN model is compared both to data from animal experiments and to a computational model. We also compare against the behavioral experiments using a robot, demonstrating the learning capability of the hardware in a closed sensory-motor loop.
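
To make the "multiplier-less" idea concrete, the sketch below shows one way a digital spiking neuron can avoid hardware multipliers: the membrane leak is a right bit-shift and the reward-modulated weight update is gated by the sign of a binary reward. This is an illustrative assumption only, not the architecture described in the article; all names, constants, and the update rule are hypothetical.

```python
# Minimal sketch (assumed, not the paper's design): a multiplier-less
# leaky integrate-and-fire neuron. The leak is a right bit-shift and
# incoming spikes simply add their integer synaptic weight, so no
# hardware multiplier is required.

LEAK_SHIFT = 4          # leak removes v >> 4, i.e. 1/16 of v per step (assumed)
V_THRESHOLD = 1 << 10   # firing threshold in integer membrane units (assumed)

def lif_step(v, input_spikes, weights):
    """One time step of a shift-based (multiplier-less) LIF neuron."""
    v -= v >> LEAK_SHIFT                 # leak via bit shift, no multiply
    for pre, spiked in enumerate(input_spikes):
        if spiked:
            v += weights[pre]            # accumulate weight on each input spike
    if v >= V_THRESHOLD:
        return 0, True                   # reset membrane and emit a spike
    return v, False

def reward_update(weights, eligibility, reward_sign, lr_shift=6):
    """Generic three-factor update (illustrative only): a binary reward
    gates a shift-scaled eligibility trace, again avoiding multipliers."""
    for i in range(len(weights)):
        delta = eligibility[i] >> lr_shift
        weights[i] += delta if reward_sign > 0 else -delta
    return weights
```

In such a scheme, the learning-rate and leak constants are powers of two by construction, which is a common trade-off in compact digital neuromorphic cores; whether the proposed design makes the same choice is not stated in the abstract.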
