Abstract

Resistive crossbar arrays are promising candidates for the efficient execution of deep neural network (DNN) inference workloads. The weight matrices of a neural network are mapped to conductance values on crossbar arrays, which are then used as vector-matrix multiply engines. Although this mapping seems straightforward, we show that for large-scale DNNs the weights must come from a training procedure that accounts for hardware-induced constraints, such as ADCs, DACs, noise, and device failures, in order for the inference task to run successfully on analog hardware composed of crossbar arrays.
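To make the hardware-induced constraints concrete, the following sketch simulates a crossbar vector-matrix multiply with DAC/ADC quantization, conductance noise, and random device failures. It is an illustrative assumption of how such effects can be modeled, not the authors' method; all bit widths, noise levels, and failure rates are placeholder values.

```python
import numpy as np

def quantize(x, bits, x_max):
    """Uniformly quantize x to the given bit width over [-x_max, x_max]."""
    levels = 2 ** bits - 1
    step = 2 * x_max / levels
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def crossbar_vmm(x, W, dac_bits=8, adc_bits=8, noise_std=0.02,
                 fail_prob=0.001, rng=None):
    """Approximate y = W @ x on an analog crossbar (illustrative model).

    The input is quantized by a DAC, the stored weights (conductances in
    normalized units) are perturbed by programming noise and stuck-at-zero
    device failures, and the analog result is read out through an ADC.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_q = quantize(x, dac_bits, x_max=np.max(np.abs(x)) + 1e-12)

    # Conductance noise and random device failures (assumed parameters).
    W_noisy = W + noise_std * np.max(np.abs(W)) * rng.standard_normal(W.shape)
    W_noisy = W_noisy * (rng.random(W.shape) > fail_prob)

    y = W_noisy @ x_q  # analog accumulation along the crossbar columns
    return quantize(y, adc_bits, x_max=np.max(np.abs(y)) + 1e-12)

# Compare the ideal result with the hardware-constrained one.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((128, 256))
x = rng.standard_normal(256)
print(np.linalg.norm(W @ x - crossbar_vmm(x, W, rng=rng)))
```

A hardware-aware training procedure would expose the network to such perturbations during training (for example, by injecting them in the forward pass) so that the learned weights remain accurate under these non-idealities at inference time.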
