Abstract

Computing-in-memory (CIM) is a promising approach to improve the throughput and energy efficiency of deep neural network (DNN) processors. So far, resistive nonvolatile memories have been adopted to build crossbar-based accelerators for DNN inference. However, such structures suffer from several drawbacks, such as sneak paths, large ADCs/DACs, and high write energy. In this paper we present a mixed-signal in-memory hardware accelerator for convolutional neural networks (CNNs). We propose an in-memory inference system that uses ferroelectric field-effect transistors (FeFETs) as the main nonvolatile memory cell. We show how the proposed crossbar unit cell can overcome the aforementioned issues while reducing unit cell size and power consumption. The proposed system decomposes multi-bit operands into single-bit operations. We then recombine them without any loss of precision using accumulators and shifters within the crossbar and across different crossbars. Simulations demonstrate that we can outperform state-of-the-art efficiencies with 3.28 TOPS/W and can pack 1.64 TOPS in an area of 1.52 mm² using 22 nm FDSOI technology.
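The abstract does not give implementation details, but the decompose-and-recombine scheme it describes is a standard bit-serial multiply-accumulate. The following is a minimal sketch under that reading, assuming unsigned operands; the function name `bitserial_dot` and the bit widths are illustrative, not taken from the paper.

```python
import numpy as np

def bitserial_dot(weights, activations, w_bits=4, a_bits=4):
    """Bit-serial dot product: decompose multi-bit operands into
    single-bit planes, compute single-bit partial sums (as a binary
    crossbar would), and recombine them losslessly with shifts and
    an accumulator."""
    acc = 0
    for i in range(w_bits):                 # weight bit-plane i
        w_plane = (weights >> i) & 1        # 0/1 vector stored in the crossbar
        for j in range(a_bits):             # activation bit-plane j
            a_plane = (activations >> j) & 1
            partial = int(np.dot(w_plane, a_plane))  # single-bit MACs
            acc += partial << (i + j)       # shift-and-accumulate recombination
    return acc

# Recombination is exact: the result matches a direct multi-bit dot product.
rng = np.random.default_rng(0)
w = rng.integers(0, 16, size=128)
a = rng.integers(0, 16, size=128)
assert bitserial_dot(w, a) == int(np.dot(w, a))
```

Because each partial sum of bit-plane products contributes with weight 2^(i+j), summing the shifted partials reconstructs the full-precision result exactly, which matches the abstract's claim of recombination "without any loss of precision".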
