Abstract

Neuromorphic Nanowire Networks (NWNs) are a novel class of information processing hardware devices, combining the advantages of memristive cross-point junctions with a neural-like complex network topology. In addition to their low operating power, NWNs are also easy to fabricate via bottom-up self-assembly. Here, we implement the MNIST handwritten digit classification task on simulated NWNs to demonstrate their ability to perform complex learning tasks. Using a CNN-inspired kernel method within a reservoir computing framework, our simulation results attain an accuracy of nearly 98%. Moreover, this is achieved using only a fraction of the total MNIST training data and without requiring hardware accelerators. We also investigate the information-theoretic metrics of mutual information (MI), transfer entropy (TE) and active information storage (AIS) to analyze the MNIST learning dynamics of NWNs. We find that MI with respect to classes is maximized after network feature extraction, that TE is largest when MNIST digit boundaries are processed, and that AIS is strongest when areas with lower pixel values are presented. Overall, these results suggest the information processing capabilities of neuromorphic NWNs make them promising candidates for complex learning applications.

