Abstract

There are two principal approaches to learning in artificial intelligence: error-driven global learning and neuroscience-oriented local learning. Integrating them into one network may provide complementary learning capabilities for versatile learning scenarios. At the same time, neuromorphic computing holds great promise but still lacks the breadth of useful algorithms and algorithm-hardware co-designs needed to fully exploit its advantages. Here, we present a neuromorphic global-local synergic learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model that incorporates neuronal dynamics and synaptic plasticity. The model can meta-learn local plasticity and receive top-down supervision signals for multiscale learning. We demonstrate its advantages on multiple different tasks, including few-shot learning, continual learning, and fault-tolerant learning with neuromorphic vision sensors, where it achieves significantly higher performance than single-paradigm learning methods. We further implement the model on the Tianjic neuromorphic platform through algorithm-hardware co-design and show that the model can fully utilize the neuromorphic many-core architecture to develop a hybrid computation paradigm.

Highlights

  • There are two principal approaches to learning in artificial intelligence: error-driven global learning and neuroscience-oriented local learning

  • These signals act on many synapses and modulate diverse plasticity behaviors, including the learning rate, update polarity, and plasticity consolidation[17,34,35]. Some neuromodulators, such as adenosine, can affect synaptic function and the actions of other modulators in a hierarchical manner[36,37]. This indicates that neuromodulators can be formalized as a special type of meta-learning parameter acting on synaptic plasticity in a weight-sharing manner

  • We report a spike-based hybrid model that endows spiking neural networks (SNNs) with an efficient synergic learning capability for handling multiple learning scenarios
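To make the neuromodulator-as-meta-parameter idea concrete, the following is a minimal, hypothetical sketch of a local Hebbian update whose learning rate, update polarity, and consolidation (decay) are controlled by a small set of layer-wide meta-parameters shared across all synapses, standing in for diffuse neuromodulatory signals. The names `eta`, `polarity`, and `decay` are illustrative assumptions, not the paper's actual parameterization; in the full model, an outer meta-learning loop would tune them.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_plasticity_update(w, pre, post, meta):
    """Apply a meta-modulated Hebbian update to a weight matrix.

    w    : (n_post, n_pre) synaptic weights
    pre  : (n_pre,)  presynaptic activity (e.g. a spike trace)
    post : (n_post,) postsynaptic activity
    meta : layer-wide meta-parameters (assumed names), shared across
           all synapses in a weight-sharing manner:
           'eta'      - learning rate (meta-learned by the outer loop)
           'polarity' - +1 for potentiation, -1 for depression
           'decay'    - weight decay consolidating prior learning
    """
    hebb = np.outer(post, pre)                  # local correlation term
    dw = meta['polarity'] * meta['eta'] * hebb  # neuromodulator-scaled update
    return (1.0 - meta['decay']) * w + dw

w = rng.normal(scale=0.1, size=(4, 3))
pre = np.array([1.0, 0.0, 1.0])
post = np.array([0.0, 1.0, 1.0, 0.0])
meta = {'eta': 0.05, 'polarity': +1.0, 'decay': 0.01}
w_new = local_plasticity_update(w, pre, post, meta)
```

Because `meta` is a handful of scalars rather than one parameter per synapse, the outer loop can meta-learn it cheaply while the inner local rule remains purely local to each synapse's pre- and postsynaptic activity.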

Introduction

There are two principal approaches to learning in artificial intelligence: error-driven global learning and neuroscience-oriented local learning. Despite great progress in understanding biological learning, local learning rules pursue different learning goals, so most such works fail to fully exploit the advantages of global gradient learning and are generally not good at solving complex learning problems[24,25]. If neuromorphic hybrid learning models with algorithm-hardware co-design could be developed on neuromorphic platforms, the neuromorphic many-core architecture could be exploited to explore hybrid on-chip computation schemes and obtain better performance in practical learning scenarios.
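The hybrid idea above can be sketched as a single weight update that mixes the two approaches: an error-driven global term (a gradient supplied by backpropagation or a surrogate-gradient method for spiking networks) and a local Hebbian plasticity term. The mixing coefficient `alpha` and the rates `eta_global` and `eta_local` are assumed names for quantities an outer meta-learning loop would tune; this is an illustrative sketch, not the paper's exact update rule.

```python
import numpy as np

def synergic_update(w, grad, pre, post,
                    alpha=0.5, eta_global=0.1, eta_local=0.05):
    """Combine a global gradient step with a local Hebbian step.

    w    : (n_post, n_pre) weights
    grad : gradient of the task loss w.r.t. w (error-driven, global)
    pre  : (n_pre,)  presynaptic activity
    post : (n_post,) postsynaptic activity
    """
    global_term = -eta_global * grad              # error-driven descent
    local_term = eta_local * np.outer(post, pre)  # local correlation
    return w + alpha * global_term + (1.0 - alpha) * local_term

w = np.zeros((2, 2))
grad = np.array([[0.2, -0.1], [0.0, 0.3]])
pre = np.array([1.0, 0.0])
post = np.array([0.0, 1.0])
w_new = synergic_update(w, grad, pre, post)
```

Setting `alpha = 1` recovers pure gradient learning and `alpha = 0` recovers pure local plasticity, so the meta-learner can interpolate between the two regimes per layer or per task.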
