Abstract

Disordered nanoclusters with multielectrode input–output functionality have recently been realized experimentally, exhibiting energy-efficient and emergent computational capacity, and interconnected networks of such nanoclusters have been proposed as a route to artificial neural networks. To aid that end, here we show that nanocluster functionality can be fit by the simplest dendritic neuron model (DNM), in which the only form of nonlinearity is multiplicative interaction between inputs. This work brings into the spotlight higher-order neural networks (known for their efficient encoding of geometric invariances) as an explainable baseline model of nano-networks against which experimentalists can compare more sophisticated models, such as deep neural networks or physics-based models like the lin-min network introduced here. It also provides a basis for designing novel approximate hardware and for a statistical-mechanics analysis of the learning performance of interconnected nanoclusters versus perceptrons (neurons that output a nonlinear function of the weighted sum of their inputs). A network with just ten higher-order neurons achieves a classification accuracy of more than 96% on the MNIST benchmark for handwritten digit recognition, a task that required roughly 100 times more neurons in three-layer perceptrons.
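To make the idea concrete, the kind of higher-order neuron the abstract describes can be sketched as follows. This is a hedged illustration, not the paper's actual model: the weight shapes, the restriction to second-order (pairwise) terms, and the absence of any saturating activation are assumptions made here to show what "nonlinearity only through multiplicative interactions" means.

```python
import numpy as np

def higher_order_neuron(x, w1, w2, b):
    """Second-order neuron: linear term plus pairwise
    multiplicative interactions. No sigmoid or other
    activation is applied, so the only nonlinearity is
    the products x[i] * x[j] (an assumption for this sketch)."""
    linear = w1 @ x                # first-order (perceptron-like) term
    quadratic = x @ w2 @ x         # sum_ij w2[i, j] * x[i] * x[j]
    return linear + quadratic + b  # scalar output

# Illustrative random input and weights (hypothetical values).
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal(4)
w2 = rng.standard_normal((4, 4))
out = higher_order_neuron(x, w1, w2, b=0.1)
```

A classifier of the kind mentioned in the abstract would combine several such units (e.g. one per class) and pick the unit with the largest output; by contrast, a perceptron keeps only the `w1 @ x` term and wraps it in a nonlinear activation.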
