Abstract

The speed of modern digital systems is severely limited by memory latency (the “Memory Wall” problem). Data exchange between logic and memory is also responsible for a large part of the system energy consumption. Logic-in-Memory (LiM) represents an attractive solution to this problem: by performing part of the computations directly inside the memory, the system speed can be improved while reducing its energy consumption. The LiM solutions that offer the largest boost in performance are based on modifications of the memory cell. However, what is the cost of such modifications, and how do they impact the memory array performance? In this work, these questions are addressed by analysing a LiM memory array implementing an algorithm for maximum/minimum value computation. The memory array is designed at the physical level using the FreePDK 45 nm CMOS process, with three memory cell variants, and its performance is compared to that of SRAM and CAM memories. Results highlight that the performance of read and write operations is degraded, but in-memory operations prove to be very efficient: a 55.26% reduction in the energy-delay product is measured for the AND operation with respect to the SRAM read operation. The LiM approach therefore represents a very promising solution for low-density, high-performance memories.
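
For context on how such in-memory operations can be used, the sketch below models one common bit-serial maximum search scheme, in which candidate rows are filtered from the most significant bit down to the least significant one; each filtering step maps naturally onto a row-parallel bitwise AND inside the array. This is only an illustrative software model: the function and variable names are invented here, and the exact algorithm and cell-level implementation used in the paper may differ.

    # Hypothetical software model of a bit-serial maximum search, the kind of
    # routine that row-parallel in-memory AND operations can accelerate.
    # Names and structure are illustrative assumptions, not the paper's design.
    def find_max_rows(words, width):
        """Return the indices of the rows that hold the maximum stored value."""
        candidates = set(range(len(words)))       # every row starts as a candidate
        for bit in range(width - 1, -1, -1):      # scan from MSB to LSB
            # In a LiM array this test would be a single AND between the bit
            # column and the candidate mask, evaluated for all rows at once.
            ones = {i for i in candidates if (words[i] >> bit) & 1}
            if ones:                              # keep only rows with a '1' here
                candidates = ones
        return sorted(candidates)

    # Example: the maximum of [5, 12, 7, 12] is 12, held by rows 1 and 3.
    print(find_max_rows([5, 12, 7, 12], width=4))   # -> [1, 3]

A minimum search follows the same structure, keeping at each step the rows that hold a '0' in the current bit position instead of a '1'.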

Highlights

  • Modern digital architectures are based on the Von Neumann principle: the system is divided into two main units, a central processing one and a memory

  • The memory array is designed at the physical level using the FreePDK 45 nm CMOS process, with three memory cell variants, and its performance is compared to that of SRAM and CAM memories

  • Companies and researchers are searching for a way to overcome the Memory Wall problem: Logic-in-Memory (LiM), also called In-Memory Computing (IMC) [1], is a computing paradigm that is being investigated for this purpose

Summary

Introduction

Modern digital architectures are based on the Von Neumann principle: the system is divided into two main units, a central processing one and a memory. The CPU extracts the data from the memory, elaborates them and writes the results back. This structure represents the main performance bottleneck of modern computing systems: memories are not able to supply data to the CPU at a speed comparable to the processing one, limiting the throughput of the whole system, and the high-speed data exchange between CPU and memory leads to large power consumption. A complex memory hierarchy is employed to partially compensate for this, but it does not completely solve the problem: the system is still limited by the impossibility of having a memory that is large and very fast at the same time. For these reasons, companies and researchers are searching for ways to overcome the Memory Wall problem: Logic-in-Memory (LiM), also called In-Memory Computing (IMC) [1], is a computing paradigm that is being investigated for this purpose. By moving part of the computation inside the memory, the rate at which data are exchanged between CPU and memory is reduced, resulting in a reduction of power consumption.
