Abstract
From rate to temporal encoding, spiking information processing has demonstrated advantages across diverse neuromorphic applications. In terms of data capacity and robustness, multiplexing encoding outperforms alternative encoding schemes. In this work, we implement a new class of multiplexing temporal encoders that pattern stimuli across multiple timescales to improve the information processing capability and robustness of systems deployed in noisy environments. Benefiting from an internal reference frame based on subthreshold membrane oscillation (SMO), the encoded spike patterns are less sensitive to input noise, increasing the encoder's robustness. Our design yields substantial savings in power consumption and silicon area compared with power-hungry analog-to-digital converters. Furthermore, a working prototype of the multiplexing temporal encoder, built on an interspike interval (ISI) encoding scheme, is implemented on a silicon chip in a standard 180-nm CMOS process. To the best of our knowledge, the proposed encoder is the first integrated circuit (IC) implementation of neural encoding with a multiplexing topology. Finally, the accuracy and efficiency of our design are evaluated on standard machine learning benchmarks, including Modified National Institute of Standards and Technology (MNIST), Canadian Institute For Advanced Research (CIFAR)-10, Street View House Number (SVHN), and spectrum sensing in high-speed communication networks. While the multiplexing temporal encoder achieves higher classification accuracy across all benchmarks, its power consumption and dissipated energy per spike are merely 2.6 µW and 95 fJ/spike, respectively, at an effective frame rate of 300 MHz. Compared with alternative encoding schemes, our multiplexing temporal encoder achieves up to 100% higher data capacity, 11.4% higher classification accuracy, and 25% greater robustness against noise. Compared with state-of-the-art designs, our work achieves up to 105× higher power efficiency without significantly increasing the silicon area.
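The sketch below is a minimal conceptual illustration, not the authors' circuit: it shows how an interspike interval (ISI) can encode an analog value while spike times are phase-locked to a subthreshold membrane oscillation (SMO) that acts as an internal reference frame, so small input-induced timing jitter does not change the decoded interval. The full design multiplexes several such timescales; this sketch covers only a single timescale, and all parameter names and values (SMO_PERIOD, ISI_MIN, ISI_MAX, the jitter model) are illustrative assumptions.

```python
# Conceptual sketch (assumed parameters, not the published circuit):
# ISI encoding with spike times snapped to SMO reference peaks.
import numpy as np

SMO_PERIOD = 1.0e-6   # assumed SMO period: 1 us reference oscillation
ISI_MIN = 2.0e-6      # ISI representing the minimum input value
ISI_MAX = 10.0e-6     # ISI representing the maximum input value


def encode_isi(value, v_min=0.0, v_max=1.0, jitter_std=0.0, rng=None):
    """Map a normalized analog value to one interspike interval (seconds).

    The raw interval grows linearly with the input; optional Gaussian jitter
    models input noise. Both spike times are then snapped to the nearest SMO
    peak, so jitter smaller than half an SMO period is rejected by the
    internal reference frame.
    """
    rng = rng or np.random.default_rng()
    x = np.clip((value - v_min) / (v_max - v_min), 0.0, 1.0)
    raw_isi = ISI_MIN + x * (ISI_MAX - ISI_MIN)

    t_first = 0.0
    t_second = raw_isi + rng.normal(0.0, jitter_std)

    # Phase-lock each spike to the subthreshold oscillation (snap to peaks).
    t_first = round(t_first / SMO_PERIOD) * SMO_PERIOD
    t_second = round(t_second / SMO_PERIOD) * SMO_PERIOD
    return t_second - t_first


def decode_isi(isi):
    """Invert the linear ISI mapping back to a normalized value."""
    return (isi - ISI_MIN) / (ISI_MAX - ISI_MIN)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    value = 0.42
    # Repeated noisy encodings decode to the same value as long as the
    # jitter stays within the SMO quantization step.
    noisy = [encode_isi(value, jitter_std=0.2e-6, rng=rng) for _ in range(5)]
    print([round(decode_isi(isi), 3) for isi in noisy])
```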