Abstract

Compute-in-memory (CIM) has attracted increasing attention from researchers as a well-suited hardware accelerator for convolutional neural networks (CNNs), because it can achieve low power consumption and high inference accuracy. This work presents a novel time-domain CIM (TD-CIM) structure featuring: 1) a capacitor-charging scheme that uses a compact 8T cell to perform multiply-and-accumulate (MAC) operations with serial inputs in the time domain; 2) a new replicated bit-line time-domain converter (RBL-TDC) that quantizes the multiply-accumulate results with high accuracy; and 3) a 22 nm FD-SOI 16 Kb TD-CIM macro fabricated with foundry-provided compact 8T-SRAM cells, which achieves a normalized energy efficiency of 5816.5 TOPS/W and a normalized area efficiency of 64 TOPS/mm², supporting 8-bit weights, 8-bit serial inputs, 64 accumulations per cycle, and 14-bit output precision in the MAC operation. The macro also achieves an inference accuracy of 92.57% on the VGG-16 network with the CIFAR-10 dataset across PVT variations.
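The stated precisions are self-consistent: with bit-serial inputs, each cycle accumulates 64 products of a 1-bit input slice and an 8-bit weight, whose worst-case sum is 64 × 255 = 16,320 < 2^14, matching the 14-bit output precision. A minimal Python sketch of this bit-serial shift-and-add MAC (an illustration of the arithmetic only, not the paper's circuit) is:

```python
import random

random.seed(0)
N = 64                                                  # accumulations per cycle
weights = [random.randint(0, 255) for _ in range(N)]    # 8-bit weights
inputs = [random.randint(0, 255) for _ in range(N)]     # 8-bit serial inputs


def mac_bit_serial(weights, inputs, bits=8):
    """MAC with bit-serial inputs: one input bit per cycle, shift-and-add."""
    total = 0
    for b in range(bits):
        # Per-cycle partial sum of 64 (1-bit input) x (8-bit weight) products;
        # worst case 64 * 255 = 16320 < 2**14, so 14 output bits suffice.
        partial = sum(w * ((x >> b) & 1) for w, x in zip(weights, inputs))
        assert partial < 2 ** 14
        total += partial << b                            # weight the bit position
    return total


# The serialized result matches the full-precision dot product.
assert mac_bit_serial(weights, inputs) == sum(w * x for w, x in zip(weights, inputs))
```

Each of the 8 cycles yields one 14-bit partial sum (quantized by the RBL-TDC in the actual macro); shifting and adding the 8 partial sums reconstructs the full 8b×8b dot product.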
