Abstract

In-memory computing with analog non-volatile memories (NVMs) can accelerate both the in-situ training and inference of deep neural networks (DNNs) by parallelizing multiply-accumulate (MAC) operations in the analog domain. However, in-situ training accuracy suffers unacceptable degradation due to undesired weight-update asymmetry/nonlinearity and limited bit precision. In this work, we overcome this challenge by introducing a compact ferroelectric FET (FeFET) based synaptic cell that exploits hybrid precision for in-situ training and inference. We propose a novel hybrid approach in which the modulated "volatile" gate voltage of the FeFET represents the least significant bits (LSBs) for symmetric/linear updates during training only, while the "non-volatile" polarization states of the FeFET hold the most significant bits (MSBs) for inference. This design is demonstrated with an experimentally validated FeFET SPICE model and co-simulation with the TensorFlow framework. The results show that with the proposed 6-bit and 7-bit synapse designs, in-situ training achieves ~97.3% accuracy on the MNIST dataset and ~87% on the CIFAR-10 dataset, respectively, approaching the accuracy of ideal software-based training.
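To make the hybrid-precision scheme concrete, the sketch below models a synapse whose MSBs are stored as a non-volatile code (standing in for FeFET polarization states) and whose LSBs live in a volatile, linearly updatable register (standing in for the modulated gate voltage). This is a minimal illustration under assumed parameters, not the authors' implementation: the 4-bit MSB / 3-bit LSB split (one possible 7-bit configuration), the class name HybridSynapse, and the carry policy are all hypothetical.

```python
import numpy as np

# Illustrative assumptions, not taken from the paper:
MSB_BITS = 4          # bits held in non-volatile polarization states
LSB_BITS = 3          # bits held in the volatile gate-voltage register
LSB_LEVELS = 2 ** LSB_BITS
LSB_STEP = 1.0 / (2 ** (MSB_BITS + LSB_BITS))  # weight quantum of one LSB

class HybridSynapse:
    def __init__(self, shape):
        # Non-volatile part: integer MSB code (retained for inference)
        self.msb = np.zeros(shape, dtype=np.int32)
        # Volatile part: integer LSB code (symmetric/linear update, training only)
        self.lsb = np.zeros(shape, dtype=np.int32)

    def weight(self):
        # Effective analog weight seen by the MAC array
        return (self.msb * LSB_LEVELS + self.lsb) * LSB_STEP

    def update(self, grad, lr):
        # Training: apply the (rounded) gradient step to the volatile LSBs only.
        self.lsb += np.round(-lr * grad / LSB_STEP).astype(np.int32)
        # Carry LSB overflow/underflow into the MSB code, modeling an
        # infrequent, coarse write to the non-volatile polarization states.
        carry = self.lsb // LSB_LEVELS
        self.msb += carry
        self.lsb -= carry * LSB_LEVELS

# Usage: one illustrative update step
syn = HybridSynapse((2, 2))
syn.update(grad=np.array([[0.5, -0.5], [1.0, 0.0]]), lr=0.01)
print(syn.weight())
```

In this toy model, only the volatile LSB register changes at fine granularity during training, and the coarse non-volatile write happens only on the occasional carry; this separation is what lets the frequent updates stay symmetric and linear while the MSBs preserve the weight for inference.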
