Abstract
Learning binary weights that minimize the difference between target and actual outputs can be cast as a parameter-optimization task under given constraints, and thus it falls within the application domain of the Lagrange multiplier method (LMM). Based on the LMM, we propose a novel event-based weight binarization (eWB) algorithm for spiking neural networks (SNNs) with binary synaptic weights (-1, 1). The algorithm features (i) event-based asymptotic weight binarization using local data only, (ii) full compatibility with event-based end-to-end learning algorithms (e.g., the event-driven random backpropagation (eRBP) algorithm), and (iii) the capability to address various constraints (including the binary-weight constraint). As a proof of concept, we combine eWB with eRBP (eWB-eRBP) to obtain a single algorithm that learns binary weights while producing correct classifications. Fully connected SNNs trained with eWB-eRBP achieved an accuracy of 95.35% on MNIST. To the best of our knowledge, this is the first report of fully binary SNNs trained using an event-based learning algorithm. Given that eRBP with full-precision (32-bit) weights achieved 97.20% accuracy, binarization comes at the cost of an accuracy reduction of approximately 1.85%. The Python code is available online: https://github.com/galactico7/eWB.
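To make the LMM formulation concrete, below is a minimal sketch of a multiplier-driven binarization step. The constraint g(w) = w^2 - 1 (zero exactly at w = ±1), the augmented-Lagrangian penalty added for numerical stability, and all names and step sizes are illustrative assumptions; the actual constraint function and event-driven update schedule used by eWB are specified in the paper and may differ.

```python
import numpy as np

def constraint(w):
    """g(w) = w**2 - 1: zero exactly when each weight equals +1 or -1."""
    return w ** 2 - 1.0

def binarizing_update(w, lam, grad_loss, eta_w=1e-2, eta_lam=1e-3, rho=1.0):
    """One update step on the augmented Lagrangian
        L(w, lam) = loss(w) + lam * g(w) + (rho / 2) * g(w)**2.
    Gradient descent in w and gradient ascent in lam drive the constraint
    residual g(w) to zero, pushing each weight asymptotically toward
    {-1, +1}. Only local quantities are used: the weight, its multiplier,
    and the local task-loss gradient."""
    g = constraint(w)
    grad_w = grad_loss + (lam + rho * g) * 2.0 * w  # d/dw of the augmented L
    w = w - eta_w * grad_w                          # primal descent on w
    lam = lam + eta_lam * g                         # dual ascent on lam
    return w, lam

# Toy run: with a zero task-loss gradient, four synapses drift to +/-1.
w = np.array([0.3, -0.7, 0.9, -0.1])
lam = np.zeros_like(w)
for _ in range(20000):
    w, lam = binarizing_update(w, lam, grad_loss=np.zeros_like(w))
print(np.round(w, 3))  # -> approximately [ 1. -1.  1. -1.]
```

In an event-based setting, such an update would be triggered only when the corresponding synapse registers an event and would use only local data, which is what makes this style of binarization compatible with event-based end-to-end learners such as eRBP.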
Highlights
There has been growing interest in fast, efficient, and compact neuromorphic computing that supports high-performance, on-chip learning over large amounts of data.
When combined with an event-based learning algorithm using an appropriate loss function, event-based weight binarization (eWB) enables a network to learn binary weights that minimize the loss function. This was demonstrated by combining eWB with event-driven random backpropagation (eRBP) and applying the resulting eWB-eRBP algorithm to train fully connected SNNs on MNIST.
The resulting classification accuracy was 95.35%, whereas eRBP with 32-bit weights yielded an accuracy of 97.20%.
Summary
There has been growing interest in fast, efficient, and compact neuromorphic computing for high-performance, on-chip processing and learning over large amounts of data. Spiking neural networks (SNNs) are a promising model for energy-efficient neuromorphic computing [1]–[3]. Their energy efficiency stems mainly from sparse, event-based, asynchronous data processing and weight learning, in contrast to deep neural networks (DNNs), which rely on error-backpropagation (BP) algorithms for layer-wise synchronous weight updates in dedicated learning phases [1]. However, because most existing event-based learning algorithms use multi-bit weights, their hardware implementation requires a large on-chip memory capacity and intensive computation, which degrades their energy efficiency. Moreover, achieving competitive classification accuracy commonly requires (i) a large number of trainable parameters, particularly those associated with hidden neurons, (ii) an inhomogeneous learning framework that treats BP and spike-timing-dependent plasticity (STDP) separately, and (iii) multi-bit weights for output evaluation.