Abstract

Deep neural networks have achieved considerable success over the past decade in a variety of fields. However, current state-of-the-art artificial intelligence (AI) systems require large computing hardware infrastructure and consume substantial power. Overcoming these hurdles calls for new strategies, such as designing novel computation architectures and developing building blocks that mimic the low energy consumption of biological systems. At the architecture level, implementing classification tasks by splitting the problem into simpler subtasks is a way to relax hardware constraints, despite the reduced accuracy of the approach. At the computation unit level, memristive devices are a promising technology for low-power neuromorphic computation. Here, we combine these two approaches and present a novel algorithmic approach to multiclass classification that splits the problem into binary subtasks while using optoelectronic memristors as synapses. Our approach leverages the core principles of the One-vs-One (OvO) and One-vs-Rest (OvR) classification strategies to form a novel Outcome-Driven One-vs-One (ODOvO) approach. The light modulation of synaptic weights, fed into our algorithm from experimental data, is a key enabling parameter that permits classification without further modification of the applied electrical biases. Our approach requires at least 10× fewer synapses (only 196 synapses are needed) while reducing the classification time by up to N/2 compared to conventional memristors. We show that the novel ODOvO algorithm achieves accuracies similar to OvO (reaching over 60% on the MNIST dataset) while requiring even fewer iterations than OvR. Consequently, our approach constitutes a feasible solution for neural networks where the key priorities are minimal energy consumption (i.e., a small number of iterations), fast execution, and low hardware requirements allowing experimental verification.
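
The ODOvO algorithm itself is not detailed in this abstract; as a point of reference, the sketch below illustrates only the conventional OvO and OvR decompositions it builds on. The use of scikit-learn's small 8×8 digits set (as a stand-in for MNIST) and of perceptron base learners are assumptions made purely for illustration and are not taken from the paper.

```python
# Minimal sketch of the standard One-vs-One (OvO) and One-vs-Rest (OvR)
# multiclass decompositions referenced in the abstract. This is NOT the
# ODOvO algorithm or the memristive synapse model from the paper.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Perceptron
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# 8x8 digits dataset stands in for MNIST (assumption, for illustration only).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# OvO trains N*(N-1)/2 binary classifiers (45 for the 10 digit classes)
# and classifies a sample by majority vote among the pairwise winners.
ovo = OneVsOneClassifier(Perceptron(max_iter=50)).fit(X_train, y_train)

# OvR trains N binary classifiers (10 here), each separating one class
# from all the others, and picks the class with the highest decision score.
ovr = OneVsRestClassifier(Perceptron(max_iter=50)).fit(X_train, y_train)

print("OvO accuracy:", ovo.score(X_test, y_test))
print("OvR accuracy:", ovr.score(X_test, y_test))
```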
