Abstract

In this work, the requirement of using high-precision (HP) signals is lifted and the circuits for implementing deep learning algorithms in memristor-based hardware are simplified. The backpropagation learning algorithm requires HP signals because its gradient-descent learning rule relies on a chain product of partial derivatives. However, implementing such an HP algorithm in noisy, analog memristor-based hardware is both challenging and biologically implausible. Herein, it is demonstrated that HP signal handling is unnecessary and that more efficient deep learning can be achieved with a binary stochastic learning algorithm. The proposed algorithm modifies elementary neural network operations, improving energy efficiency by two orders of magnitude over traditional memristor-based hardware and by three orders of magnitude over complementary metal–oxide–semiconductor (CMOS)-based hardware, while also achieving better accuracy on pattern recognition tasks than HP learning-algorithm benchmarks.
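To illustrate the general idea of replacing HP gradient updates with binary stochastic ones, the following is a minimal sketch, not the paper's actual algorithm or dataset: a logistic-regression toy problem where each weight receives only a fixed-size step of ±delta (mimicking a single memristor programming pulse), applied stochastically with probability proportional to the gradient magnitude. All variable names, the update probability rule, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pattern-recognition task (not the paper's data):
# linearly separable binary labels from a random ground-truth direction.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(8)
delta = 0.05  # fixed conductance step, like one set/reset pulse
for _ in range(300):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)  # HP gradient used only as a probability source
    # Binary stochastic update: each weight moves by +/-delta or stays put,
    # with probability scaled by the relative gradient magnitude, so no
    # high-precision value ever has to be written to a device.
    prob = np.abs(grad) / (np.abs(grad).max() + 1e-12)
    fire = rng.random(8) < prob
    w -= delta * np.sign(grad) * fire

acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

Because every applied update is a single fixed pulse, the write circuitry never needs a digital-to-analog converter for arbitrary update magnitudes, which is one intuition for the energy savings the abstract reports.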
