Abstract

In-memory computing with analog non-volatile memory (NVM) devices can improve the speed and reduce the latency of deep neural network (DNN) inference. It has recently been shown that neuromorphic crossbar arrays, in which each weight is implemented as the analog conductance of a phase-change memory device, achieve competitive accuracy and high power efficiency. However, because of the large number of NVM devices required and the difficulty of fabricating analog NVM devices, these chips typically include devices that fail during fabrication or over time. We study the impact of such failed devices on analog in-memory computing accuracy for various networks. We show that larger networks with fewer reused layers are more tolerant of failed devices, and that devices stuck at high-resistance states are better tolerated than devices stuck at low-resistance states. To improve the robustness of DNNs to defective devices, we develop training methods that inject noise and simulated device failures into the weight matrices during training, and we show that this increases network accuracy in the presence of failed devices. We also provide estimates of the maximum defective-device tolerance of some common networks.
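
As a rough illustration of the kind of fault-injection training the abstract describes (this is a minimal PyTorch-style sketch, not the authors' code: the helpers inject_device_faults and noisy_forward and the parameters fail_prob, stuck_high_frac, g_min, and g_max are hypothetical names chosen for this example), one could corrupt a random subset of weights to "stuck" values on each forward pass during training:

```python
import torch
import torch.nn.functional as F

def inject_device_faults(weight, fail_prob=0.01, stuck_high_frac=0.5,
                         g_min=0.0, g_max=1.0):
    """Force a random fraction of weights to 'stuck' values, mimicking
    failed analog NVM devices (illustrative helper, not from the paper)."""
    failed = torch.rand_like(weight) < fail_prob           # which devices fail
    stuck_high_res = torch.rand_like(weight) < stuck_high_frac
    # A device stuck at a high-resistance state contributes roughly zero
    # conductance (weight near g_min); one stuck at a low-resistance state
    # contributes near-maximal conductance (weight near g_max). This direct
    # weight mapping is a simplification of real conductance encodings.
    stuck_vals = torch.where(stuck_high_res,
                             torch.full_like(weight, g_min),
                             torch.full_like(weight, g_max))
    return torch.where(failed, stuck_vals, weight)

def noisy_forward(layer, x, noise_std=0.02, fail_prob=0.01):
    """Forward pass with Gaussian weight noise plus simulated device
    failures, applied during training so the network learns robustness."""
    w = layer.weight + noise_std * torch.randn_like(layer.weight)
    w = inject_device_faults(w, fail_prob=fail_prob)
    return F.linear(x, w, layer.bias)

# Example usage in a training step:
layer = torch.nn.Linear(128, 64)
x = torch.randn(32, 128)
out = noisy_forward(layer, x)
```

Because gradients flow through the surviving (non-stuck) weights, training under this corruption encourages the network to distribute information so that accuracy degrades gracefully when real devices fail at inference time.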
