Abstract

Neuromorphic systems that learn and predict from streaming inputs hold significant promise for pervasive edge computing and its applications. In this paper, a neuromorphic system that processes spatio-temporal information on the edge is proposed. Algorithmically, the system is based on hierarchical temporal memory (HTM), which inherently offers online learning, resiliency, and fault tolerance. Architecturally, it is a full-custom mixed-signal design with an underlying digital communication scheme and analog computational modules. As a result, the proposed system features reconfigurability, real-time processing, low power consumption, and low-latency operation. The proposed architecture is benchmarked on prediction tasks over real-world streaming data. The network's mean absolute percentage error on the mixed-signal system is 1.129X lower than that of its baseline algorithmic model. This reduction can be attributed to device non-idealities and the probabilistic formation of synaptic connections. We demonstrate that the combined effect of Hebbian learning and network sparsity also plays a major role in extending the overall network lifespan. We also show that the system offers a 3.46X reduction in latency and a 77.02X reduction in power consumption compared to a custom digital CMOS design implemented at the same technology node. By employing low-power techniques such as clock gating, we observe a 161.37X reduction in power consumption.
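
As background for the Hebbian learning mentioned above, the following minimal sketch illustrates an HTM-style permanence update in which synapses aligned with active inputs are reinforced while the rest decay. The increments, threshold, and array sizes are illustrative assumptions only and do not correspond to the parameters of the proposed mixed-signal hardware.

    import numpy as np

    # Illustrative HTM-style Hebbian permanence update (assumed constants,
    # not the values used in the proposed hardware).
    P_INC, P_DEC, CONNECTED = 0.05, 0.02, 0.5

    def update_permanences(permanences, input_bits, column_active):
        """Reinforce synapses aligned with active inputs on winning columns."""
        if not column_active:
            return permanences                  # only winning columns learn
        delta = np.where(input_bits > 0, P_INC, -P_DEC)
        return np.clip(permanences + delta, 0.0, 1.0)

    # Example: a column with 8 potential synapses sees a sparse binary input.
    perms = np.random.uniform(0.3, 0.7, size=8)
    x = np.array([1, 0, 0, 1, 0, 1, 0, 0])
    perms = update_permanences(perms, x, column_active=True)
    connected = perms >= CONNECTED              # synapses that currently count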

Highlights

  • Over the course of the last decade, there has been a profound shift in artificial intelligence (AI) research, where biologically inspired computing systems are being actively studied to address the demand for energy-efficient intelligent devices

  • Given an input dataset of length n, where each data point presented to the hierarchical temporal memory (HTM) system at time t is denoted yₜ and the corresponding predicted value is denoted ŷₜ, the mean absolute percentage error (MAPE) can be computed as in (9); a minimal sketch of this computation is given after this list

  • It can be seen that the MAPE is initially high, but it decreases over time as the network learns patterns and uses the acquired knowledge to make valid predictions
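
As a concrete reference for the metric cited in the highlights, the sketch below computes MAPE over a stream of actual values yₜ and predictions ŷₜ. It follows the standard definition of the metric rather than the authors' evaluation code, and the example values are illustrative only.

    import numpy as np

    def mape(y_true, y_pred):
        """Standard mean absolute percentage error, expressed as a percentage."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

    # Example: the error shrinks as predictions track the stream more closely.
    actual    = [120.0, 130.0, 125.0, 140.0]
    predicted = [100.0, 126.0, 124.0, 139.5]
    print(f"MAPE = {mape(actual, predicted):.2f}%")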

Introduction

Over the course of the last decade, there has been a profound shift in artificial intelligence (AI) research, where biologically inspired computing systems are being actively studied to address the demand for energy-efficient intelligent devices. Brain-inspired systems, such as hierarchical temporal memory (HTM) [1], [2], have demonstrated strong capability in processing spatial and temporal information with a high degree of plasticity while learning models of the world. A GPU can provide the necessary parallelism, but it fails to deliver satisfactory performance and demands a large power budget [11]. To this end, several research groups have attempted to develop specialized custom hardware designs to run the HTM algorithm.
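
For readers unfamiliar with HTM, the simplified sketch below illustrates one representative step of the algorithm, the spatial pooler's overlap-and-inhibition (k-winners-take-all) computation over a sparse binary input. All sizes and thresholds are assumptions chosen for illustration and are unrelated to the hardware described in this paper.

    import numpy as np

    # Simplified HTM spatial-pooler overlap and global inhibition step;
    # the dimensions and sparsity levels below are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_inputs, n_columns, sparsity = 64, 128, 0.05

    connected = (rng.random((n_columns, n_inputs)) < 0.3).astype(int)  # connected synapses
    x = (rng.random(n_inputs) < 0.1).astype(int)                       # sparse binary input

    overlap = connected @ x                        # active connected synapses per column
    k = max(1, int(sparsity * n_columns))          # number of winning columns
    winners = np.argsort(overlap)[-k:]             # global inhibition: top-k overlap

    active_columns = np.zeros(n_columns, dtype=bool)
    active_columns[winners] = True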
