Abstract

Memristor crossbars can implement neural network computations in an extremely energy-efficient manner. However, resistance variation exists after memristor programming due to fabrication-induced process variation. Such resistance variation degrades the prediction accuracy of a well-trained network when the network is mapped onto the crossbar. We observe that resistance variation is much smaller when a memristor is programmed into the higher resistance state (representing logic 0) than when it is programmed into the lower resistance state (representing logic 1). This observation motivates us to exploit sparse neural networks and propose a two-phase weight mapping and memristor programming scheme to improve the prediction accuracy of the network under process variation. In the first phase, the unpruned large-value weights are mapped onto the crossbar. Benefiting from the large number of zero-value weights in the sparse network, most of the memristors can be programmed into the highest resistance state, which has good immunity to the variation. In the second phase, we retrain the network to recover a small number of zero-value weights to small values. Mapping these small-value weights means programming memristors into relatively high resistance states, so they are resilient to variation and can effectively compensate for variations in the mapped large-value weights. Experiments are conducted on a neural network deployed on the memristor crossbar. The results demonstrate that the proposed scheme can achieve accuracy similar to that of the well-trained software network.
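The asymmetry the abstract relies on, that cells programmed to the high resistance state (zero weights) deviate far less than cells programmed to the low resistance state (large weights), can be illustrated with a toy simulation. The sketch below is not the paper's model; the variation magnitudes, threshold, and matrix sizes are made-up values chosen only to show why a sparse network, whose cells are mostly in the high resistance state, is more robust to programming variation.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_crossbar(weights, sigma_low_r=0.10, sigma_high_r=0.01):
    """Toy model of programming a weight matrix onto a memristor crossbar.

    Large-magnitude weights map to the low resistance state, which we
    assume suffers larger programming variation (sigma_low_r); near-zero
    weights map to the high resistance state, with much smaller variation
    (sigma_high_r). All parameters are illustrative assumptions.
    """
    sigma = np.where(np.abs(weights) > 0.5, sigma_low_r, sigma_high_r)
    return weights + rng.normal(0.0, 1.0, weights.shape) * sigma

# A sparse (pruned) weight matrix: ~20% large unpruned weights, rest zero.
w = np.zeros((32, 32))
mask = rng.random(w.shape) < 0.2
w[mask] = 1.0

mapped = program_crossbar(w)

# Mean absolute programming error per state.
err_large = np.abs(mapped[mask] - w[mask]).mean()   # low resistance cells
err_zero = np.abs(mapped[~mask] - w[~mask]).mean()  # high resistance cells
```

Under these assumed numbers, the high resistance (zero-weight) cells show roughly an order of magnitude less deviation, which is why the second phase can use a few recovered small-value weights, themselves programmed near the high resistance state, to compensate for errors in the large-value weights.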
