Abstract

In-Memory Computing (IMC) is emerging as a new paradigm to address the von Neumann bottleneck (VNB) in data-intensive applications. In this paper, an energy-efficient 10T SRAM-based IMC macro architecture is proposed to perform logic, arithmetic, and In-memory Dot Product (IMDP) operations. The average write and read margins of the proposed 10T SRAM are improved by 40% and 2.5%, respectively, compared to the 9T SRAM. The write energy and leakage power of the proposed 10T SRAM are reduced by 89% and 83.8%, respectively, with approximately the same read energy as the 9T SRAM. Additionally, a 4 Kb SRAM array based on the 10T SRAM is implemented in 180-nm SCL technology to analyze the operation and performance of the proposed IMC macro architecture. The proposed IMC architecture achieves an energy efficiency of 5.3 TOPS/W for 1-bit logic, 4.1 TOPS/W for 1-bit addition, and 3.1 TOPS/W for IMDP operations at 1.8 V and 60 MHz. An area efficiency of 65.2% is achieved for a 136 × 32 array of the proposed IMC macro architecture. Further, the proposed IMC macro is also tested for accelerating the IMDP operation of neural networks by importing the linearity variation analysis into TensorFlow for image classification on the MNIST and CIFAR datasets. According to Monte-Carlo simulations, the IMDP operation has a standard deviation of 0.07% in the accumulation, corresponding to a classification accuracy of 97.02% on the MNIST dataset and 88.39% on the CIFAR dataset.
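As a rough illustration of the variation-aware evaluation described above, the analog accumulation of an IMDP operation can be modeled as an ideal dot product perturbed by Gaussian noise whose standard deviation matches the 0.07% Monte-Carlo figure reported in the abstract. The sketch below is a hypothetical behavioral model, not the paper's circuit or simulation flow; the function name, weight values, and noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def imdp(weights, inputs, sigma=0.0007):
    """Behavioral sketch (not the paper's model) of an In-memory
    Dot Product: the ideal accumulation is perturbed by Gaussian
    noise with a relative standard deviation of 0.07%, matching
    the Monte-Carlo linearity-variation figure in the abstract."""
    ideal = float(np.dot(weights, inputs))
    # Relative noise on the accumulated result models linearity variation.
    return ideal * (1.0 + rng.normal(0.0, sigma))

# Example: a 4-element dot product whose ideal result is 3.5.
w = np.array([0.5, -0.25, 1.0, 0.125])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(imdp(w, x))  # close to the ideal 3.5, within ~0.07% variation
```

A noise model of this kind can be injected into the weighted sums of a TensorFlow network to estimate how the hardware's linearity variation affects classification accuracy, which is the style of analysis the abstract reports for MNIST and CIFAR.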
