Abstract

Computing-in-Memory (CIM) is a novel computing architecture that greatly improves energy efficiency and reduces computing latency by avoiding frequent data movement between the computation and memory units. Digital CIM (DCIM) is currently regarded as more suitable for high-precision floating-point operations, since it is not limited by the ADC/DAC bit width of analog CIM. However, the development of DCIM still faces two problems. On the one hand, mainstream SRAM-based DCIM memory cells introduce large area overhead, as each cell contains at least six transistors. On the other hand, existing DCIM solutions support computing precision only up to FP32, failing to meet the demands of high-accuracy application scenarios. To overcome these problems, this work designs a novel SOT-MRAM-based digital CIM macro (MDCIM) with higher area and energy efficiency, achieving double-precision floating-point (FP64) computation with a modified fused multiply-accumulate (FMA) module. The proposed design is synthesized with a 55 nm CMOS technology node, achieving 0.62 mW power consumption, 26.9 GOPS/W energy efficiency, and 0.332 GOPS/mm² area efficiency at 150 MHz with a 1.08 V supply. Circuit-level simulation results show that the MDCIM achieves higher area utilization than previous SRAM-based CIM designs.
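The abstract's key numerical primitive is the FP64 fused multiply-accumulate. As a minimal software illustration (not the paper's hardware design), the sketch below uses C99's standard fma() to show the defining property of an FMA: a*b + c is computed with a single rounding step, which can preserve bits that a separate multiply followed by an add would lose. The operand values are chosen arbitrarily to make the difference visible.

```c
/* Illustrative only: contrasts a naive multiply-then-add (two roundings)
 * with a fused multiply-accumulate (one rounding) in FP64. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 1.0 + 0x1p-30;    /* operands chosen so the roundings differ */
    double b = 1.0 + 0x1p-30;
    double c = -(1.0 + 0x1p-29);

    double naive = a * b + c;    /* a*b rounds away the 2^-60 term -> result 0 */
    double fused = fma(a, b, c); /* exact a*b + c rounded once -> 2^-60 survives */

    printf("naive = %.17g\n", naive);   /* prints 0 */
    printf("fused = %.17g\n", fused);   /* prints ~8.67e-19 */
    return 0;
}
```

Compiled with a C99 toolchain (link with -lm), this reproduces the classic motivation for FMA hardware: the fused path retains the low-order product bits that the two-rounding path discards.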
