Abstract

In this era of IoT, energy-efficient and adversarially secure implementation of deep neural networks (DNNs) on hardware has become imperative. Memristive crossbars have emerged as an energy-efficient component of deep learning hardware accelerators due to their compact and efficient matrix-vector multiplication (MVM) implementation. However, they suffer from nonidealities (such as interconnect parasitics, device variations, and sneak paths) introduced by their circuit topology, which degrade computational accuracy. A 1T-1R synapse, which adds a transistor (1T) in series with the memristive synapse (1R), has been proposed to mitigate sneak paths in a crossbar. However, we observe that the nonlinear characteristics of the transistor affect the overall conductance of the 1T-1R cell, which in turn affects the MVM operation. This 1T-1R nonlinearity, which depends on the input voltage, is not only difficult to model or formulate but also causes a drastic performance degradation of DNNs when they are mapped to such crossbars. In this article, we first analyze the nonlinearity in ideal 1T-1R crossbars (excluding nonidealities such as device variations and interconnect parasitics) and propose a novel nonlinearity-aware training (NEAT) method to address it. Specifically, we first identify the range of network weights that can be mapped onto the 1T-1R cell within the linear operating region of the transistor. We then regularize the network weights to lie within this linear operating range using an iterative training algorithm. Our iterative training significantly recovers the classification accuracy drop caused by the nonlinearity. Moreover, we find that each layer has a different weight distribution and hence requires a different transistor gate voltage to guarantee linear operation. Based on this observation, we achieve energy efficiency while preserving classification accuracy by applying heterogeneous gate-voltage control to the 1T-1R cells across different layers. Finally, we conduct various experiments on the CIFAR10 and CIFAR100 benchmark datasets to demonstrate the effectiveness of NEAT. Overall, NEAT yields ~20% energy gain with less than 1% accuracy loss (with homogeneous gate control) when mapping ResNet18 networks onto 1T-1R crossbars. Thereafter, we incorporate various nonidealities into the 1T-1R crossbars. We show that NEAT leads to more adversarially robust mappings of DNNs onto nonideal 1T-1R crossbars than standard DNNs mapped directly onto 1R crossbars.
For a VGG11 network on the CIFAR100 dataset, NEAT on nonideal 64 x 64 crossbars yields a ~17% improvement in clean accuracy and ~2%-8% and ~5%-6% improvements in adversarial accuracy under fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, respectively, in comparison to standard DNNs.
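
To make the weight-range regularization step concrete, the following is a minimal PyTorch-style sketch, assuming a hypothetical per-layer bound linear_w_max derived offline from the transistor's linear operating region; the function names and training loop are illustrative only and are not the authors' released implementation.

import torch
import torch.nn as nn

def clip_to_linear_range(model, linear_w_max):
    # Project each layer's weights into [-linear_w_max, linear_w_max], the range
    # assumed (hypothetically) to map onto the 1T-1R cell's linear operating region.
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                m.weight.clamp_(-linear_w_max, linear_w_max)

def iterative_train(model, loader, epochs, linear_w_max, lr=1e-3):
    # Alternate standard gradient updates with projection onto the linear range,
    # so the trained weights remain mappable to 1T-1R conductances.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            clip_to_linear_range(model, linear_w_max)  # keep weights in the linear region
    return model

In this sketch, heterogeneous gate-voltage control would correspond to choosing a different linear_w_max (and hence a different gate voltage) per layer based on that layer's weight distribution.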
