Abstract

Driven by technologies such as machine learning, artificial intelligence, and the Internet of Things, the energy-efficiency and throughput limitations of the von Neumann architecture are becoming increasingly severe. Computing-in-memory, a new type of computer architecture, is an alternative approach to alleviating the von Neumann bottleneck. Here, we demonstrate two computing-in-memory designs based on two-surface-channel MoS2 transistors: a symmetrical 4T2R Static Random-Access Memory (SRAM) cell and a skewed 3T3R SRAM cell. The symmetrical SRAM cell realizes in-memory XNOR/XOR computations, and the skewed SRAM cell achieves in-memory NAND/NOR computations. Furthermore, since both the memory and computing units are based on two-surface-channel transistors with high area efficiency, the two proposed computing-in-memory SRAM cells consume fewer transistors, suggesting potential applications in highly area-efficient and multifunctional computing chips.

Highlights

  • In the traditional von Neumann architecture, the memory and computing units are separated from each other (Sebastian et al., 2020; Srinivasa et al., 2018)

  • In the symmetrical computing-in-memory Static Random-Access Memory (SRAM) cell (Figure 1A), both storage nodes Q and Q̄ are called to perform logic calculations with the external word line voltage to complete the XNOR and XOR operations, where the local logic unit (LLU) consists of two access transistors

  • In the skewed computing-in-memory SRAM cell (Figure 1B), only storage node Q is called to calculate with an external input signal to complete the NAND and NOR operations, where the LLU is composed of a two-surface-channel transistor and a resistor
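The logic behavior described in the highlights above can be sketched as simple truth-table models (a behavioral sketch only, not the paper's circuit implementation; the function names and dictionary interface are illustrative assumptions):

```python
# Behavioral sketch (hypothetical, not the paper's circuit): truth tables for
# the logic each proposed SRAM cell is reported to compute in-memory.
# q  -- the stored bit at node Q (0 or 1)
# wl -- the external word-line input (0 or 1)

def symmetrical_4t2r(q: int, wl: int) -> dict:
    """Symmetrical 4T2R cell: both Q and its complement Q-bar participate,
    yielding XNOR and XOR of the stored bit with the input."""
    xor = q ^ wl
    return {"XOR": xor, "XNOR": 1 - xor}

def skewed_3t3r(q: int, wl: int) -> dict:
    """Skewed 3T3R cell: only node Q participates, yielding NAND and NOR of
    the stored bit with the input."""
    return {"NAND": 1 - (q & wl), "NOR": 1 - (q | wl)}

if __name__ == "__main__":
    # Enumerate all stored-bit / input combinations.
    for q in (0, 1):
        for wl in (0, 1):
            print(f"Q={q} WL={wl}",
                  symmetrical_4t2r(q, wl), skewed_3t3r(q, wl))
```

This model only captures the input-output logic attributed to each cell; the actual read-out in the paper is electrical, via the two-surface-channel transistors and resistors of the LLU.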


Introduction

In the traditional von Neumann architecture, the memory and computing units are separated from each other (Sebastian et al., 2020; Srinivasa et al., 2018). Because of the resulting limits on memory speed and power consumption, it is urgent to design a new architecture for the memory and computing units to break through the von Neumann bottleneck (Agrawal et al., 2019; Khaddam-Aljameh et al., 2020; Kim et al., 2020; Liu et al., 2020). As the primary component of the cache, Static Random-Access Memory (SRAM) has a speed similar to that of the processing core, so it is usually integrated on chip to assist the data processing of the central processing unit. The computing-in-memory architecture significantly reduces the frequent data migration between the memory and computing units; however, the simultaneous integration of memory and processing core on the chip poses new challenges for the complementary metal-oxide-semiconductor process.

