Abstract

Brains perform complex tasks using a fraction of the power that a conventional computer would need for the same task. Neuromorphic hardware systems, intended to emulate the more power-efficient, highly parallel operation of brains, are now becoming widely available. However, to use these systems in applications, we need “neuromorphic algorithms” that can run on them. Here we develop a spiking neural network model for neuromorphic hardware that uses spike timing-dependent plasticity and lateral inhibition to perform unsupervised clustering. With this model, time-invariant, rate-coded datasets can be mapped into a feature space with a specified resolution, i.e., number of clusters, using exclusively neuromorphic hardware. We developed and tested implementations on the SpiNNaker neuromorphic system and on GPUs using the GeNN framework. We show that our neuromorphic clustering algorithm achieves results comparable to those of conventional clustering algorithms such as self-organizing maps, neural gas, or k-means clustering. We then combine it with a previously reported supervised neuromorphic classifier network to demonstrate its practical use as a neuromorphic preprocessing module.
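As a rough illustration of the architecture described above, the sketch below wires up this kind of network in PyNN, the simulator-independent Python interface also used to target SpiNNaker: a rate-coded Poisson input layer projects all-to-all onto a population of cluster neurons through STDP synapses, and the cluster neurons inhibit one another laterally. The neuron model, population sizes, and all plasticity and weight parameters here are illustrative assumptions, not the values used in the paper:

    # Minimal sketch of the STDP + lateral-inhibition clustering motif.
    # Neuron model, sizes, and all parameters are illustrative assumptions.
    import pyNN.nest as sim   # swap in pyNN.spiNNaker to run on SpiNNaker

    sim.setup(timestep=1.0)

    n_inputs, n_clusters = 784, 10      # e.g. MNIST pixels -> 10 cluster neurons
    rates = [10.0] * n_inputs           # rate-coded input: one Poisson rate per pixel

    inputs = sim.Population(n_inputs, sim.SpikeSourcePoisson(rate=rates))
    clusters = sim.Population(n_clusters, sim.IF_cond_exp())

    # Plastic input -> cluster synapses trained by spike timing-dependent plasticity
    stdp = sim.STDPMechanism(
        timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                            A_plus=0.01, A_minus=0.012),
        weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.01),
        weight=0.002, delay=1.0)
    feedforward = sim.Projection(inputs, clusters, sim.AllToAllConnector(),
                                 synapse_type=stdp, receptor_type='excitatory')

    # Static lateral inhibition between cluster neurons enforces the competition
    # that lets different neurons specialize on different clusters
    sim.Projection(clusters, clusters,
                   sim.AllToAllConnector(allow_self_connections=False),
                   synapse_type=sim.StaticSynapse(weight=0.05, delay=1.0),
                   receptor_type='inhibitory')

    sim.run(1000.0)                     # present the rate-coded pattern for 1 s

    learned = feedforward.get('weight', format='array')   # (n_inputs, n_clusters)
    sim.end()

After repeated presentations of different input patterns, the column of learned weights feeding each cluster neuron can be read out as that neuron's learned prototype.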

Highlights

  • We demonstrate the operation of the self-organizing network on the GPU-accelerated GeNN (GPU-enhanced Neuronal Networks) simulator and on the SpiNNaker neuromorphic hardware system, learning prototypes of handwritten digits from the MNIST dataset

  • To obtain a visual “pseudo-digit” representation that reflects graphically where a cluster is positioned in feature space, we have used the average weight from each input pixel to each group of cluster neurons (CNs); a computational sketch follows this list

  • Prototypes representing variations of all 10 digits are evident in the grid. It is clear that the distribution is not completely uniform: sparser patterns with less overlap with other digits appear to be somewhat favored over more densely populated patterns that are more likely to overlap with other digits (e.g., “0”)
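A minimal way to compute such pseudo-digits, assuming the learned input-to-CN weights are available as a NumPy array, is sketched below; the array layout, group sizes, and variable names are hypothetical and chosen only for illustration:

    import numpy as np

    # Hypothetical layout: 784 MNIST pixels, n_groups cluster-neuron (CN) groups,
    # n_per_group neurons per group; weights[i, g, j] is the learned weight from
    # input pixel i to neuron j of CN group g (random values stand in for real ones).
    n_groups, n_per_group = 100, 8
    weights = np.random.rand(784, n_groups, n_per_group)

    # Average weight from each input pixel to each CN group, reshaped to the
    # 28 x 28 pixel grid: one "pseudo-digit" image per cluster group.
    pseudo_digits = weights.mean(axis=2).T.reshape(n_groups, 28, 28)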



Introduction

Parallel and power-efficient “neuromorphic platforms,” which mimic the brain's biological mechanisms, have been developed as a basis for brain-like computing in the domain of machine learning and artificial intelligence (AI), for example TrueNorth (Merolla et al., 2014), SpiNNaker (Khan et al., 2008; Furber et al., 2013), Neurogrid (Benjamin et al., 2014), Minitaur (Neil and Liu, 2014), Loihi (Davies et al., 2018), DYNAPs (Moradi et al., 2018), and the “BrainScaleS” system (Schemmel et al., 2010). These systems support the simulation of up to millions of modeled spiking neurons and billions of synapses in real time, i.e., at the same speed at which neurons operate in the brain. The BrainScaleS platform even operates up to 10⁴ times faster than real time, supporting accelerated network simulations.

