Abstract

Memristors offer great advantages as a new hardware solution for neuromorphic computing due to their fast and energy-efficient matrix-vector multiplication. However, the nonlinear weight-update behavior of memristors makes them difficult to train in a neural-network learning process. Several compensation schemes have been proposed to mitigate the update error caused by nonlinearity; nevertheless, they usually involve complex peripheral-circuit designs. Herein, stochastic and adaptive learning methods for weight updating are developed, which effectively suppress the inaccuracy caused by memristor nonlinearity. In addition, compared with the traditional nonlinear stochastic gradient descent (SGD) update algorithm and the piecewise linear (PL) method, which are most often used in memristor neural networks, the design is more hardware-friendly and energy-efficient because it does not need to account for pulse number, duration, or direction. The effectiveness of the proposed method is investigated by training the LeNet-5 convolutional neural network. A high accuracy of about 93.88% on the Modified National Institute of Standards and Technology (MNIST) handwritten-digit dataset is achieved (with a typical memristor nonlinearity of ±1), which is close to that of the network trained with the complex PL method (94.7%) and higher than that of the original nonlinear SGD method (90.14%).
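The idea of suppressing nonlinear update error with a stochastic scheme can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes the commonly used exponential conductance-update model (with illustrative parameters `P_MAX`, `NL`, `G_MIN`, `G_MAX`) and a hypothetical single-pulse stochastic rule: a pulse is fired with probability proportional to the desired weight change, so the expected update tracks the gradient even though each individual pulse step is nonlinear in the current conductance.

```python
import math
import random

# Illustrative device parameters (assumptions, not values from the paper).
G_MIN, G_MAX = 0.0, 1.0
P_MAX = 100   # pulses needed to sweep the full conductance range
NL = 1.0      # nonlinearity parameter ("typical" value of 1 in the abstract)

def potentiation_step(g, nl=NL):
    """Apply one SET pulse under the exponential nonlinearity model
    G(n) = B * (1 - exp(-n / A)) + G_MIN: the per-pulse increment
    shrinks as g approaches G_MAX, which is the update error the
    stochastic scheme is meant to average out."""
    a = P_MAX / nl                                  # curvature constant
    b = (G_MAX - G_MIN) / (1.0 - math.exp(-P_MAX / a))
    return g + (b - (g - G_MIN)) * (1.0 - math.exp(-1.0 / a))

def stochastic_update(g, desired_dw, rng):
    """Fire at most one fixed pulse, with probability proportional to
    the desired weight change |desired_dw|.  Over many SGD steps the
    expected conductance change tracks desired_dw without any control
    of pulse number, duration, or direction per update."""
    max_step = potentiation_step(G_MIN) - G_MIN     # largest possible step
    p = min(abs(desired_dw) / max_step, 1.0)
    if desired_dw > 0 and rng.random() < p:
        return potentiation_step(g)
    return g
```

For brevity the sketch shows only potentiation; a depression (RESET) branch would mirror it for negative `desired_dw`. Averaged over many trials, `stochastic_update` changes the conductance by approximately `desired_dw`, which is the mechanism by which the per-pulse nonlinear error is suppressed.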
