Abstract

We demonstrate that extremely low-resolution quantized (nominally 5-state) synapses with large stochastic variations in synaptic weight can be energy efficient and achieve reasonably high testing accuracies compared to Deep Neural Networks (DNNs) of similar size that use floating-point synaptic weights. Specifically, voltage-controlled domain wall (DW) devices exhibit stochastic behavior and can encode only a limited number of states; however, they are extremely energy efficient during both training and inference. In this study, we propose both in-situ and ex-situ training algorithms, based on a modification of the algorithm proposed by Hubara et al. [1] that works well with quantized synaptic weights, and train several 5-layer DNNs on the MNIST dataset using 2-, 3-, and 5-state DW devices as synapses. For in-situ training, a separate high-precision memory unit preserves and accumulates the weight gradients, which prevents the accuracy loss due to weight quantization. For ex-situ training, a precursor DNN is first trained based on weight quantization and a characterized DW device model. Moreover, a noise tolerance margin is included in both training methods to account for intrinsic device noise. The highest inference accuracies we obtain after in-situ and ex-situ training are ~96.67% and ~96.63%, respectively, which are very close to the baseline accuracy of ~97.1% obtained from a DNN of similar topology with floating-point weights and no stochasticity. The large inter-state interval due to weight quantization, together with the noise tolerance margin, enables in-situ training with significantly fewer programming attempts. Our proposed approach demonstrates the possibility of at least two orders of magnitude energy savings compared to a floating-point approach implemented in CMOS.
This approach is particularly attractive for low-power intelligent edge devices, where ex-situ learning can be utilized for energy-efficient non-adaptive tasks and in-situ learning provides the opportunity to adapt and learn in a dynamically evolving environment.
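The core idea of the in-situ scheme (quantized weights in the forward/backward pass, gradients accumulated in a separate high-precision memory) can be sketched as follows. This is a minimal toy illustration of the general technique, not the paper's actual algorithm or device model: the level count, weight range, toy linear task, and learning rate are all illustrative assumptions.

```python
import numpy as np

def quantize(w, levels=5, w_max=1.0):
    """Map full-precision weights onto `levels` evenly spaced states in [-w_max, w_max]."""
    step = 2.0 * w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
w_hp = rng.uniform(-1, 1, size=(4, 3))   # high-precision "shadow" weight memory
x = rng.uniform(-1, 1, size=(8, 4))      # toy inputs
y = x @ np.ones((4, 3))                  # toy linear regression target

lr = 0.1
for _ in range(300):
    w_q = quantize(w_hp, levels=5)       # the device exposes only 5 states
    y_hat = x @ w_q                      # forward pass uses quantized weights
    grad = x.T @ (y_hat - y) / len(x)    # MSE gradient w.r.t. the weights
    # Gradient update accumulates in the high-precision copy, so small
    # updates are not lost to quantization (cf. Hubara et al. [1]).
    w_hp = np.clip(w_hp - lr * grad, -1.0, 1.0)

final_loss = float(np.mean((x @ quantize(w_hp) - y) ** 2))
```

Because the high-precision copy integrates many small gradient steps before the quantized weight crosses to the next state, the device itself only needs to be reprogrammed when a state boundary is crossed, which is what keeps the number of programming attempts low.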
