Abstract

Binarized Neural Networks (BNNs) significantly reduce computational complexity and relax memory requirements by binarizing weights and activations. We propose a differential crosspoint (DX) memristor array that enables parallel multiply-and-accumulate (MAC) operations in BNNs to further improve efficiency. Each synapse consists of two memristors in a differential configuration. The synapses on the same column form a voltage divider whose output voltage corresponds linearly to the digital summation. The analog output voltage is then quantized to a 4-bit output by a voltage sense amplifier. A small 64×64 DX array in every DX unit (DXU) minimizes parasitic resistance and capacitance for faster MAC operations. A system architecture using DXUs for BNN acceleration is introduced; a wide range of BNN models can be mapped to an array of DXUs. To further reduce the energy spent on data movement, a neighbor shifting scheme increases input data reusability. The effects of quantization and bit errors are investigated on the MNIST and CIFAR-10 datasets. A DXU achieves an estimated energy efficiency of 160 TMAC/s/W.
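The following is a minimal behavioral sketch (not the paper's implementation) of how one DX-array column could map a binarized MAC onto a voltage divider followed by 4-bit quantization. It assumes BNN inputs and weights in {-1, +1}, an ideal resistive divider in which each input/weight match pulls the column node toward VDD, and a uniform 4-bit sense amplifier; the names `VDD`, `N`, `adc_bits`, and `dx_column_mac` are illustrative and not taken from the paper.

```python
import numpy as np

# Hypothetical behavioral model of one DX-array column, assuming:
#   - binarized inputs/weights in {-1, +1} (BNN convention),
#   - each differential synapse conducts toward VDD when its input
#     and weight agree, and toward ground when they disagree,
#   - the column behaves as an ideal resistive voltage divider.
# VDD, N, and adc_bits are illustrative parameters, not from the paper.

VDD = 1.0      # supply voltage applied across the divider
N = 64         # synapses per column (matches the 64x64 DX array)

def dx_column_mac(x, w, adc_bits=4):
    """Model one column MAC: divider voltage, then 4-bit quantization."""
    assert x.shape == w.shape == (N,)
    # Digital ground truth: the binarized MAC is the sum of the
    # elementwise products, an integer in [-N, N].
    mac = int(np.sum(x * w))
    # With matches + mismatches = N and mac = matches - mismatches,
    # the number of matching synapses is (mac + N) / 2, so the ideal
    # divider output is linear in the digital summation.
    matches = (mac + N) // 2
    v_out = VDD * matches / N
    # Voltage sense amplifier: uniform quantization to 2**adc_bits levels.
    levels = 2 ** adc_bits
    code = min(int(v_out / VDD * levels), levels - 1)
    return mac, v_out, code

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=N)
w = rng.choice([-1, 1], size=N)
print(dx_column_mac(x, w))   # (digital MAC, analog voltage, 4-bit code)
```

Under these assumptions, the 4-bit code is a monotone, linear function of the digital summation, which is why quantization and bit errors (as studied on MNIST and CIFAR-10 in the paper) translate directly into bounded perturbations of the MAC result.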
