Abstract

A critical challenge in neuromorphic computing is to develop computationally efficient learning algorithms. When implementing gradient-based learning, error information must be routed through the network so that each neuron knows its contribution to the output, and thus how to adjust its weights. This is known as the credit assignment problem. An exact implementation of a solution like backpropagation involves weight sharing, which requires additional bandwidth and computation in a neuromorphic system. Instead, models of learning from neuroscience can provide inspiration for how to communicate error information efficiently, without weight sharing. Here we present a novel dendritic event-based processing (DEP) algorithm, using a two-compartment leaky integrate-and-fire neuron with partially segregated dendrites, that effectively solves the credit assignment problem. To optimize the proposed algorithm, a dynamic fixed-point representation and a piecewise linear approximation are introduced, and synaptic events are binarized during learning. These optimizations make the proposed DEP algorithm well suited to implementation in digital or mixed-signal neuromorphic hardware. Experimental results show that spiking representations can learn rapidly, achieving high performance with the proposed DEP algorithm. We find that the learning capability is affected by the degree of dendritic segregation and by the form of the synaptic feedback connections. This study provides a bridge between biological learning and neuromorphic learning, and is relevant to real-time applications in the field of artificial intelligence.
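As a rough illustration of the model class described above, the sketch below implements one Euler step of a two-compartment leaky integrate-and-fire neuron in which the dendrite integrates feedback (error) input and the soma integrates feedforward input plus an attenuated copy of the dendritic potential, together with a piecewise linear approximation of a sigmoid transfer function of the kind favored in hardware-friendly designs. All parameter names, constants, and the exact form of the coupling are assumptions for illustration; they are not the paper's actual DEP dynamics.

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's values).
dt, tau_s, tau_d = 1.0, 20.0, 7.0   # time step and membrane time constants (ms)
g_d = 0.6                            # dendro-somatic coupling (degree of segregation)
v_th, v_reset = 1.0, 0.0             # somatic spike threshold and reset potential

def dep_neuron_step(v_s, v_d, I_ff, I_fb):
    """One Euler step of a two-compartment LIF neuron: the dendritic
    compartment integrates feedback input I_fb, and the somatic
    compartment integrates feedforward input I_ff plus an attenuated
    copy (g_d * v_d) of the dendritic potential."""
    v_d += dt / tau_d * (-v_d + I_fb)
    v_s += dt / tau_s * (-v_s + I_ff + g_d * v_d)
    spike = v_s >= v_th
    if spike:
        v_s = v_reset
    return v_s, v_d, spike

def pwl_sigmoid(x):
    """Piecewise linear approximation of a sigmoid transfer function:
    a single linear segment clipped to [0, 1]. Real hardware designs
    typically use several segments; this is a minimal stand-in."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)
```

With a constant suprathreshold feedforward drive, the somatic potential charges toward its steady state and emits a spike once it crosses `v_th`; lowering `g_d` makes the soma less sensitive to the dendritic (feedback) compartment, which is one way to parameterize the degree of dendritic segregation.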

Highlights

  • Learning requires assigning credit to each neuron for its contribution to the final output (Bengio et al., 2015; Lillicrap et al., 2016)

  • A piecewise linear approximation and a dynamic fixed-point representation are introduced for the first time in a dendritic learning framework, for cost- and energy-efficient neuromorphic computing

  • This paper presents a biologically meaningful dendritic event-based processing (DEP) algorithm with dynamic fixed-point representation, as well as its digital neuromorphic architecture on a large-scale conductance-based spiking neural network (LaCSNN)



Introduction

Learning requires assigning credit to each neuron for its contribution to the final output (Bengio et al., 2015; Lillicrap et al., 2016). One obstacle is known as the weight transport problem: backpropagation relies on a feedback pathway with exactly the same weights as the feedforward pathway to communicate gradients (Liao et al., 2016). Such a symmetric feedback structure has not been shown to exist in biological neural circuits. Recent work shows how spiking neural networks can implement feedback structures that solve the credit assignment problem efficiently through dendritic computation (Urbanczik and Senn, 2014; Wilmes et al., 2016; Bono and Clopath, 2017; Guerguiev et al., 2017). Other work has shown that even feedback systems that only crudely approximate the true feedback weights can solve some learning tasks (Zenke and Ganguli, 2018; Lee et al., 2020). Together, these results show that the credit assignment problem can be largely solved by biologically plausible neural systems.
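The point that crudely approximate feedback can still assign credit can be made concrete with a small feedback-alignment sketch: a two-layer network is trained on a toy regression task, but the error is routed backward through a fixed random matrix `B` instead of the transpose of the forward weights, so no weight sharing is needed. The network sizes, learning rate, and task are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = x @ T with a two-layer network.
n_in, n_hid, n_out = 8, 16, 4
T = rng.standard_normal((n_in, n_out))
X = rng.standard_normal((256, n_in))
Y = X @ T

W1 = rng.standard_normal((n_in, n_hid)) * 0.1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
# Fixed random feedback matrix replacing W2.T (feedback alignment):
# the feedback pathway shares no weights with the forward pathway.
B = rng.standard_normal((n_out, n_hid)) * 0.1

lr = 0.01
losses = []
for step in range(500):
    H = np.tanh(X @ W1)          # hidden activity
    Yhat = H @ W2                # network output
    err = Yhat - Y               # output error
    losses.append(float((err ** 2).mean()))
    # Backpropagation would use err @ W2.T here; feedback alignment
    # routes the error through the fixed random matrix B instead.
    dH = (err @ B) * (1.0 - H ** 2)
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ dH / len(X)
# The loss decreases over training despite the random feedback weights.
```

The design choice mirrors the argument in the text: what matters for credit assignment is that the feedback carries error information the forward weights can align with, not that it transports the forward weights exactly.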

