Abstract

Spiking neural networks (SNNs) represent a promising alternative to conventional neural networks. In particular, the so-called Spike-by-Spike (SbS) neural networks provide exceptional noise robustness and reduced complexity. However, deep SbS networks require a memory footprint and a computational cost unsuitable for embedded applications. To address this problem, this work exploits the intrinsic error resilience of neural networks to improve performance and reduce hardware complexity. More precisely, we design a vector dot-product hardware unit based on approximate computing with configurable quality, using a hybrid custom floating-point and logarithmic number representation. This approach reduces computational latency, memory footprint, and power dissipation while preserving inference accuracy. To demonstrate our approach, we present a design exploration flow using high-level synthesis and a Xilinx SoC-FPGA. The proposed design reduces computational latency by 20.5× and weight memory footprint by 8×, with less than 0.5% accuracy degradation on a handwritten digit recognition task.
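As an illustration of the idea behind the configurable-quality dot-product unit, the minimal Python sketch below replaces each multiplication of a dot product with an addition in an approximate (Mitchell-style) logarithmic domain, with the number of fractional bits acting as the quality knob. This is only a software sketch of the principle, not the authors' hardware unit (which combines a custom floating-point format with the logarithmic representation and is synthesized via HLS); all function names and the frac_bits parameter are illustrative assumptions.

```python
import math

def log2_approx(x, frac_bits):
    """Mitchell-style approximation of log2(x): for x = 2^e * (1 + m),
    log2(x) ~= e + m, with m quantized to `frac_bits` fractional bits."""
    e = math.floor(math.log2(x))
    m = x / (2 ** e) - 1.0                      # mantissa in [0, 1)
    scale = 1 << frac_bits
    return e + math.floor(m * scale) / scale

def pow2_approx(l, frac_bits):
    """Inverse Mitchell approximation: 2^l ~= 2^floor(l) * (1 + frac(l))."""
    e = math.floor(l)
    f = l - e                                   # fractional part in [0, 1)
    scale = 1 << frac_bits
    return (2.0 ** e) * (1.0 + math.floor(f * scale) / scale)

def approx_dot(h, w, frac_bits=4):
    """Approximate dot product: each multiply becomes an addition of
    approximate logarithms, followed by one conversion back per term."""
    acc = 0.0
    for hi, wi in zip(h, w):
        if hi == 0.0 or wi == 0.0:              # log of zero is undefined; skip
            continue
        acc += pow2_approx(log2_approx(hi, frac_bits) +
                           log2_approx(wi, frac_bits), frac_bits)
    return acc

h = [0.1, 0.3, 0.6]
w = [0.25, 0.5, 0.125]
print(approx_dot(h, w, frac_bits=4))            # approximate result
print(sum(a * b for a, b in zip(h, w)))         # exact reference
```

Lowering frac_bits trades accuracy for a cheaper datapath, which mirrors the configurable-quality trade-off described in the abstract.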

Highlights

  • The exponential improvement in computing performance and the availability of large amounts of data are boosting the use of artificial intelligence (AI) applications in our daily lives

  • Artificial neural networks (ANNs) can be classified into three generations [3]: the first is represented by the classical McCulloch and Pitts neuron model, which uses discrete binary values as outputs; the second by more complex architectures such as multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs) using continuous activation functions; and the third by spiking neural networks, which use spikes as the means of information exchange between neurons

  • While AI research is currently dominated by deep neural networks (DNNs) from the second generation, SNNs, which belong to the third generation, are receiving considerable attention [3]–[6]


Summary

Introduction

The exponential improvement in computing performance and the availability of large amounts of data are boosting the use of artificial intelligence (AI) applications in our daily lives. Loihi [9], an SNN chip developed by Intel, can solve LASSO optimization problems with an energy-delay product more than three orders of magnitude better than conventional approaches. These advantages are motivating large research programs by major companies (e.g., Intel [9] and IBM [10]) as well as pan-European projects in the domain of spiking networks [4].

Spike-by-Spike Neural Networks

Technically, SbS is a spiking neural network approach based on a generative probabilistic model. It iteratively finds an estimate of its input probability distribution p(s) (i.e., the probability of input node s to stochastically send a spike) through its latent variables via r(s) = Σ_i h(i)·W(s|i). Applying a multiplicative gradient descent method on the likelihood L, an algorithm for iteratively updating h_μ(i) with every observed input spike s_t can be derived [5].
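To make the iterative update concrete, the short sketch below implements it in the multiplicative form commonly given in the SbS literature: after observing a spike from input node s_t, each latent variable is updated as h(i) ← (h(i) + ε·h(i)·W(s_t|i)/r(s_t)) / (1 + ε), which keeps h normalized. This is a sketch under that assumption; the exact formulation in [5] may differ in details, and eps, the toy weight matrix, and the spike sequence are made up for illustration.

```python
import numpy as np

def sbs_update(h, W, spike_idx, eps=0.1):
    """One Spike-by-Spike update of the latent variables h after a spike
    from input node s_t = spike_idx.

    h         : (N,) non-negative latent variables summing to 1
    W         : (S, N) weights with W[s, i] ~ p(s | i)
    spike_idx : index s_t of the input node that fired
    eps       : step-size parameter controlling the update strength
    """
    r_st = np.dot(h, W[spike_idx])        # r(s_t) = sum_i h(i) W(s_t | i)
    return (h + eps * h * W[spike_idx] / r_st) / (1.0 + eps)

# Toy usage: 4 input nodes, 3 latent units, and a made-up spike sequence.
rng = np.random.default_rng(0)
W = rng.random((4, 3))
W /= W.sum(axis=0, keepdims=True)         # each column is a distribution p(s|i)
h = np.full(3, 1.0 / 3.0)                 # uniform initial latent variables
for s_t in [0, 2, 1, 0]:
    h = sbs_update(h, W, s_t)
print(h, h.sum())                         # h stays on the probability simplex
```

Because the denominator r(s_t) equals Σ_i h(i)·W(s_t|i), the update leaves Σ_i h(i) = 1, so no separate renormalization step is needed.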

