Abstract

Resistive crossbars have shown strong potential as the building blocks of future neural fabrics, due to their ability to natively execute vector-matrix multiplication (the dominant computational kernel in DNNs). However, a key challenge is that non-idealities in the synaptic devices, interconnects, and peripheral circuits of resistive crossbars lead to errors in the computations they perform. When large-scale DNNs are executed on resistive crossbar systems, these errors compound and result in unacceptable degradation in application-level accuracy. We propose CxDNN, a hardware-software methodology that enables the realization of large-scale DNNs on crossbar systems by compensating for errors due to non-idealities, greatly mitigating the degradation in accuracy. CxDNN is composed of (i) an optimized mapping technique to convert floating-point weights and activations to crossbar conductances and input voltages, (ii) a fast one-time re-training method to recover the accuracy loss due to this conversion, and (iii) low-overhead compensation hardware to mitigate dynamic and hardware-instance-specific errors. Unlike previous efforts that are limited to small networks and require the training and deployment of hardware-instance-specific models, CxDNN presents a scalable compensation methodology that can address large DNNs (e.g., ResNet-50 on ImageNet) and maintains the train-once-deploy-anywhere tenet of current DNN applications. We evaluate CxDNN on six top DNNs on the ImageNet dataset, with 0.5--13.8 million neurons and 0.5--15.5 billion connections. CxDNN achieves a 16.9%--49% improvement in top-1 classification accuracy, effectively mitigating a key challenge to the use of resistive-crossbar-based neural fabrics.
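
To make the mapping step in (i) concrete, the NumPy sketch below converts signed floating-point weights into a differential pair of quantized conductance matrices and recovers a vector-matrix product from the resulting crossbar column currents. This is a minimal sketch of the generic weight-to-conductance mapping used in crossbar accelerators, not CxDNN's actual optimized technique; the device parameters (`G_MIN`, `G_MAX`, the 6-bit conductance quantization, and `V_MAX`) are illustrative assumptions.

```python
import numpy as np

# Illustrative device parameters (assumptions, not values from the paper):
G_MIN, G_MAX = 1e-7, 1e-5   # programmable conductance range, in siemens
LEVELS = 2 ** 6             # assumed 6-bit conductance quantization
V_MAX = 0.2                 # assumed full-scale input DAC voltage, in volts

def map_weights_to_conductances(W):
    """Map signed floating-point weights onto a differential pair of
    quantized conductance matrices (G_pos for positive weights, G_neg
    for negative weights)."""
    scale = max(np.max(np.abs(W)), 1e-12)        # avoid divide-by-zero
    Wn = W / scale                               # normalize to [-1, 1]
    step = (G_MAX - G_MIN) / (LEVELS - 1)

    def quantize(mag):                           # mag in [0, 1]
        G = G_MIN + mag * (G_MAX - G_MIN)        # linear map to conductance
        return G_MIN + np.round((G - G_MIN) / step) * step

    return quantize(np.clip(Wn, 0, None)), quantize(np.clip(-Wn, 0, None)), scale

def crossbar_vmm(x, G_pos, G_neg, scale):
    """Approximate y = W @ x as the difference of two crossbar column
    currents (Ohm's law per cell, Kirchhoff's current law per column)."""
    x_scale = max(np.max(np.abs(x)), 1e-12)
    v = (x / x_scale) * V_MAX                    # input DAC: values -> voltages
    i = (G_pos - G_neg) @ v                      # differential column currents
    # Undo the voltage and conductance scaling to recover the MAC result.
    return i * scale * x_scale / ((G_MAX - G_MIN) * V_MAX)

# Quantization error is visible even before device/circuit non-idealities:
W = np.random.randn(4, 8)
x = np.random.randn(8)
G_pos, G_neg, s = map_weights_to_conductances(W)
print(np.max(np.abs(crossbar_vmm(x, G_pos, G_neg, s) - W @ x)))
```

The residual error printed at the end stems from conductance quantization alone; steps (ii) and (iii) of CxDNN address this conversion loss and the additional dynamic, hardware-instance-specific errors, respectively.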
