Abstract

Despite the promise of superior efficiency and scalability, real‐world deployment of emerging nanoelectronic platforms for brain‐inspired computing has been limited thus far, primarily because of inter‐device variations and intrinsic non‐idealities. In this work, mitigation of these issues is demonstrated by performing learning directly on practical devices through a hardware‐in‐loop approach, utilizing stochastic neurons based on heavy metal/ferromagnetic spin–orbit torque heterostructures. The probabilistic switching and device‐to‐device variability of the fabricated devices of various sizes are characterized to showcase the effect of device dimension on the neuronal dynamics and its consequent impact on network‐level performance. The efficacy of the hardware‐in‐loop scheme is illustrated in a deep learning scenario, achieving performance equivalent to a software implementation. This work paves the way for future large‐scale implementations of neuromorphic hardware and the realization of truly autonomous edge‐intelligent devices.
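To make the notion of a stochastic neuron concrete, the sketch below models a binary neuron whose switching probability follows a sigmoid of the applied write current, a behavior commonly used to describe probabilistic switching in spin–orbit torque devices. The functional form, the hypothetical parameters `i50` (current at 50% switching probability) and `sharpness`, and all numerical values are illustrative assumptions, not figures from this work.

```python
import math
import random

def switching_probability(current, i50=1.0, sharpness=4.0):
    """Sigmoidal switching probability vs. write current.
    Illustrative model only; i50 and sharpness are assumed parameters,
    not values from the paper."""
    return 1.0 / (1.0 + math.exp(-sharpness * (current - i50)))

def stochastic_neuron(current, rng):
    """Binary stochastic neuron: 'fires' (device switches) with the
    sigmoidal probability above."""
    return rng.random() < switching_probability(current, i50=1.0, sharpness=4.0)

rng = random.Random(0)
# Averaged over many trials, the firing rate approaches the sigmoid value;
# at current == i50 the switching probability is exactly 0.5.
trials = [stochastic_neuron(1.0, rng) for _ in range(10000)]
rate = sum(trials) / len(trials)
```

In a hardware‐in‐loop setting, such measured switching statistics (rather than an idealized model) would supply the neuron activations during training, which is how device variability is absorbed into the learned weights.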
