Abstract

Memristor crossbar arrays carry out multiply-accumulate operations, the dominant computation in neural network applications, in parallel in the analog domain. On-chip training of memristor neural network systems has the significant advantage of tolerating device variability and faults. This paper presents a novel technique for on-chip training of multi-layer neural networks implemented with a single crossbar per layer and two memristors per synapse. Using two memristors per synapse doubles the synaptic weight precision compared to a design that uses only one memristor per synapse. The proposed system utilizes a novel variant of the back-propagation (BP) algorithm to reduce both circuit area and training time. During training, all the memristors in a crossbar are updated in parallel in four steps. We evaluated the training of the proposed system on several nonlinearly separable datasets through detailed SPICE simulations that account for crossbar wire resistance and sneak paths. The proposed training algorithm learned the nonlinearly separable functions with only a slight loss in accuracy compared to training with the traditional BP algorithm.
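As a rough illustration of the differential encoding the abstract describes, the following Python sketch models a two-memristor-per-synapse crossbar layer in which each signed weight is represented as a conductance difference, w = G_plus - G_minus, and the analog multiply-add is the differential column current. This is a behavioral sketch under assumed conductance bounds and scaling, not the paper's circuit or its SPICE model; all names and values are illustrative.

```python
import numpy as np

# Assumed conductance bounds (siemens); illustrative, not from the paper.
G_MIN, G_MAX = 1e-6, 1e-4

def weights_to_conductance_pairs(w):
    """Map signed weights to (G_plus, G_minus) pairs.

    Positive weights raise G_plus above the G_MIN baseline; negative
    weights raise G_minus. The linear scaling here is an assumption
    made for illustration only.
    """
    scale = (G_MAX - G_MIN) / np.max(np.abs(w))
    g_plus = G_MIN + np.maximum(w, 0.0) * scale
    g_minus = G_MIN + np.maximum(-w, 0.0) * scale
    return g_plus, g_minus

def crossbar_forward(v, g_plus, g_minus):
    """Analog multiply-add: each column current is the sum of G*V
    products (Kirchhoff's current law); subtracting the two column
    currents recovers the signed dot product up to a scale factor."""
    i_plus = g_plus @ v    # currents through the 'positive' memristors
    i_minus = g_minus @ v  # currents through the 'negative' memristors
    return i_plus - i_minus

# Usage: a 3-input, 2-output layer. The G_MIN baseline cancels in the
# differential readout, so the result is proportional to w @ v.
w = np.array([[0.5, -0.3, 0.1],
              [-0.8, 0.2, 0.6]])
gp, gm = weights_to_conductance_pairs(w)
v = np.array([0.2, -0.1, 0.3])     # input voltages
print(crossbar_forward(v, gp, gm))
```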
