Abstract

Logic-in-memory (LIM) circuits based on material implication (IMPLY) logic and resistive random access memory (RRAM) technologies are a candidate solution for the development of ultra-low power non-von Neumann computing architectures. Such architectures could enable the energy-efficient implementation of hardware accelerators for novel edge computing paradigms such as binarized neural networks (BNNs), which rely on the execution of logic operations. In this work, we present the multi-input IMPLY operation implemented on a recently developed smart IMPLY architecture, SIMPLY, which improves circuit reliability, reduces energy consumption, and breaks the strict design trade-offs of conventional architectures. We show that generalizing the typical logic schemes used in LIM circuits to multi-input operations strongly reduces the execution time of complex functions needed for BNN inference tasks (e.g., the 1-bit Full Addition, XNOR, Popcount). The performance of four different RRAM technologies is compared using circuit simulations leveraging a physics-based RRAM compact model. The proposed solution approaches the performance of its CMOS equivalent while bypassing the von Neumann bottleneck, improving the bit error rate by a factor of at least 10^8 and the energy-delay product by a projected factor of up to 10^10.
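Material implication is the single primitive these circuits compose: IMPLY(p, q) = (NOT p) OR q, with the multi-input variant generalizing the antecedent to a conjunction. As a purely behavioral sketch at the truth-table level (not a circuit or RRAM model; the function names are illustrative, not from the paper), the compositions named in the abstract can be checked like this:

```python
from itertools import product

def imply(p, q):
    """Two-input material implication: p -> q == (not p) or q."""
    return (not p) or q

def imply_n(ps, q):
    """Multi-input IMPLY: (p1 and ... and pn) -> q, behavioral view."""
    return (not all(ps)) or q

def nand(a, b):
    """NAND from two IMPLY steps and a work bit preset to logic 0."""
    return imply(a, imply(b, False))

def xnor(a, b):
    """XNOR = (a -> b) AND (b -> a); AND realized as NOT(NAND),
    with NOT x obtained as IMPLY(x, 0)."""
    return imply(nand(imply(a, b), imply(b, a)), False)

# Exhaustive truth-table check of the compositions
for a, b in product([False, True], repeat=2):
    assert nand(a, b) == (not (a and b))
    assert xnor(a, b) == (a == b)
```

The NAND construction mirrors the classic two-step IMPLY sequence (invert one operand into a work register, then imply into it); the multi-input form collapses several such steps into one, which is where the execution-time savings come from.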

Highlights

  • With the number of connected devices in use exceeding 17 billion, the volume of exchanged data rapidly rises

  • Developing LIM hardware accelerators would enable the deployment at the edge of powerful and data-intensive computing paradigms such as binarized neural networks (BNNs) [5,6,7] and hyperdimensional computing [8,9,10], which strongly rely on the energy-efficient execution of logic operations

  • We present the advantages of the multi-input IMPLY operation performed on SIMPLY, a new LIM edge computing architecture that overcomes the relevant issues of traditional IMPLY solutions

Introduction

With the number of connected devices in use exceeding 17 billion, the volume of exchanged data is rising rapidly. From this standpoint, edge computing reduces the amount of data that must be exchanged, relaxing data-transfer and power constraints, with clear benefits for consumer and industrial Internet of Things (IoT), smart cities, artificial intelligence (AI), machine learning, and 5G communications. Developing LIM hardware accelerators would enable the deployment at the edge of powerful, data-intensive computing paradigms such as binarized neural networks (BNNs) [5,6,7] and hyperdimensional computing [8,9,10], which rely heavily on the energy-efficient execution of logic operations. The main showstoppers [12,18]
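BNN inference reduces the multiply-accumulate step to exactly the logic operations listed in the abstract: with weights and activations in {-1, +1} encoded as bits {0, 1}, the dot product equals 2·popcount(XNOR(x, w)) − n. A minimal illustration (the function name and bit-list encoding are assumptions for the sketch, not from the paper):

```python
def bnn_dot(x_bits, w_bits):
    """Binarized dot product via XNOR + popcount.

    Activations/weights take values in {-1, +1}, encoded as {0, 1}.
    sum(x_i * w_i) over the +/-1 values equals 2*popcount - n,
    where popcount counts positions where the bits agree (XNOR true).
    """
    assert len(x_bits) == len(w_bits)
    n = len(x_bits)
    pop = sum(1 for x, w in zip(x_bits, w_bits) if x == w)  # XNOR, then popcount
    return 2 * pop - n

# Example: x = (+1, +1, -1, -1), w = (+1, -1, -1, +1)
# products are +1, -1, +1, -1, so the dot product is 0
print(bnn_dot([1, 1, 0, 0], [1, 0, 0, 1]))  # -> 0
```

Because each term is a single-bit XNOR, an in-memory logic fabric that executes XNOR and popcount natively replaces the entire MAC array, which is why BNNs are a natural target for LIM accelerators.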
