Abstract

In this study, we analyze the impact of drain current ($I_{\text{DS}}$) variation in 28 nm high-k metal-gate and 22 nm fully depleted silicon-on-insulator ferroelectric FET (FeFET) devices on processing-in-memory (PIM) deep neural network (DNN) accelerators. Repeated read operations on several devices, performed at various read frequencies and under various biasing and programming conditions, reveal non-normal variation in $I_{\text{DS}}$. Device-circuit co-analysis is used to emulate PIM image-classification performance subject to this noise. Marginal degradation is observed in Fashion-MNIST classification accuracy using LeNet-5, and more significant degradation is observed in CIFAR-10 classification accuracy using MobileNetV2. Variation-aware training is shown to fully recover minor drops in LeNet-5 accuracy but becomes difficult for large workloads like MobileNetV2. We demonstrate that $I_{\text{DS}}$ variation in individual FeFETs over many read cycles is not prohibitive to designing DNN accelerators with small workloads, but advanced design techniques are required to mitigate error for larger workloads.
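The noise-emulation and variation-aware-training approach described above can be illustrated with a minimal sketch, assuming PyTorch. The `NoisyLinear` class, the 10% relative sigma, and the Gaussian form of the perturbation are placeholders introduced here for illustration and do not come from the paper; a faithful emulation would sample from the measured (non-normal) $I_{\text{DS}}$ distribution instead.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed on every forward pass,
    emulating cycle-to-cycle I_DS read variation in an FeFET PIM array.

    rel_sigma is a hypothetical 10% relative spread; the Gaussian noise
    is a stand-in for the paper's measured non-normal distribution.
    """
    def __init__(self, in_features, out_features, rel_sigma=0.10):
        super().__init__(in_features, out_features)
        self.rel_sigma = rel_sigma

    def forward(self, x):
        # Fresh multiplicative perturbation each call, so every "read"
        # of the weight array returns slightly different currents.
        noise = 1.0 + self.rel_sigma * torch.randn_like(self.weight)
        return nn.functional.linear(x, self.weight * noise, self.bias)

# Variation-aware training amounts to keeping this noise enabled during
# training, so the network learns weights robust to read perturbations.
model = nn.Sequential(NoisyLinear(784, 128), nn.ReLU(), NoisyLinear(128, 10))

x = torch.randn(1, 784)
print(model(x) - model(x))  # nonzero: two reads of the same array differ
```

Evaluating a pretrained network with such a layer emulates inference accuracy under read-current noise; training with the noise active is one simple form of the variation-aware training the paper evaluates.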
