Abstract

Computing paradigms in biology have inspired an array of methods for realizing new types of computing machines, of which neuromorphic systems are the most explored. These massively parallel architectures consist of computing elements (neurons) and adaptive memory (synapses) located in close proximity. Such stochastic systems exploit low-energy, asynchronous transmission of information through spikes to realize highly complex computations. Unlike traditional von Neumann architectures, neuromorphic systems are also highly resilient to variability, which has led researchers to explore the use of nanodevices as synapses in hardware implementations of such artificial systems. The artificial neuron circuits, however, are typically realized in CMOS technology. While CMOS is convenient and its implementation straightforward, are the relatively high energy cost of fabrication and the resource constraints it entails truly necessary for artificial neurons? To explore this question, we theoretically consider how the behaviors of two different types of neurons are affected by device variations and the circuit characteristics of organic transistors.

We first consider neurons in a supervised learning scheme based on a perceptron-like architecture (Figure 1a), which forms the basis of many artificial neural networks used in software implementations. The goal is to classify a given data set into N different categories. The data are fed into a crossbar array of synapses: the columns of the array correspond to the possible categories and the rows to the features of the data. The sum of each column is passed to a winner-take-all (WTA) neuron, which selects the column with the maximum summed conductance and declares that category the winner. Training proceeds one sample at a time; the synapses are assumed to follow a typical hardware delta rule, in which the sign of the difference between the expected and obtained answers is used to increase or decrease the weights by a very small delta (see the training-loop sketch below). To make the architecture physically realizable, we assume that the synapses can access 256 levels between a minimum and a maximum conductance[1]. While researchers have explored how the variability of synapses affects performance[2], the variability of the WTA neuron is what interests us most here. A typical WTA circuit would consist of N current-controlled current conveyors (2N transistors)[3], which are likely to exhibit large variability. To model this, we randomly vary the output of the WTA neuron by up to 50% and explore how this changes the classification result on two databases: 1) the handwritten digit classification database MNIST[4] and 2) a sleep-stage classification task[5] based on the power spectral density of electroencephalography recordings. In Figures 1b and 1c we plot the classification results for variability in the neurons and in the synapses for each database. We find that classification tolerates variability in the neurons far better than device variations in the synapses, and that realizing neurons in organic electronic technologies should therefore work well despite the significant variability that can arise.

We also explored an artificial spiking neuron circuit inspired by the biophysical Morris-Lecar model neuron. In this case, we include parameters for organic neuron transistors[6] taken from the literature and use plausible models of their operation[7]. We show that the neuron's different excitability classes can still be obtained and that the spiking rates remain sufficient to permit the realization of more complex architectures (see the simulation sketch below).
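To make the training procedure concrete, the following is a minimal sketch in Python of the scheme described above: a crossbar quantized to 256 conductance levels, a sign-based hardware delta rule, and a WTA readout whose column currents are randomly scaled by up to 50% to mimic variability of the current-conveyor circuit. The array dimensions, the multiplicative noise model, and all function names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    N_FEATURES, N_CLASSES = 784, 10        # e.g. flattened 28x28 MNIST digits
    G_MIN, G_MAX, LEVELS = 0.0, 1.0, 256   # conductance range and resolution [1]
    DELTA = (G_MAX - G_MIN) / (LEVELS - 1) # one conductance step per update
    rng = np.random.default_rng(0)

    def quantize(g):
        # Snap conductances onto the 256 allowed levels and clip to range.
        return np.clip(np.round((g - G_MIN) / DELTA) * DELTA + G_MIN, G_MIN, G_MAX)

    G = quantize(rng.uniform(G_MIN, G_MAX, size=(N_FEATURES, N_CLASSES)))

    def wta(x, noise=0.5):
        # Column currents summed by the crossbar, then a noisy WTA readout:
        # each column current is scaled by a random factor in [1-noise, 1+noise].
        currents = x @ G
        currents *= rng.uniform(1 - noise, 1 + noise, size=N_CLASSES)
        return int(np.argmax(currents))

    def train_step(x, label, noise=0.5):
        # Hardware delta rule: on a misclassification, raise the correct
        # column and lower the winning column by one step where input is active.
        global G
        winner = wta(x, noise)
        if winner != label:
            G[:, label]  += DELTA * (x > 0)
            G[:, winner] -= DELTA * (x > 0)
            G = quantize(G)

A training epoch then consists of calling train_step once per sample; sweeping the noise argument (and, analogously, perturbing G itself) reproduces the kind of neuron-versus-synapse variability comparison plotted in Figures 1b and 1c.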
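Similarly, a minimal sketch of the Morris-Lecar dynamics, integrated with forward Euler, is given below. The parameter values are a standard textbook set for class II excitability (after Rinzel and Ermentrout), not the organic-transistor parameters of refs. [6,7]; switching to a class I set (e.g. g_Ca = 4, V3 = 12, V4 = 17.4, phi = 0.067) changes the excitability class, and counting threshold crossings gives the spiking rate.

    import numpy as np

    # Class II parameter set (capacitance in uF/cm^2, conductances in mS/cm^2,
    # voltages in mV, time in ms)
    C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0
    V_L, V_Ca, V_K = -60.0, 120.0, -84.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

    def m_inf(V):  return 0.5 * (1 + np.tanh((V - V1) / V2))  # fast Ca2+ activation
    def w_inf(V):  return 0.5 * (1 + np.tanh((V - V3) / V4))  # K+ activation
    def tau_w(V):  return 1.0 / np.cosh((V - V3) / (2 * V4))  # K+ time constant

    def simulate(I_ext, T=500.0, dt=0.05, V0=-60.0, w0=0.0):
        # Forward-Euler integration; returns the membrane voltage trace (mV).
        steps = int(T / dt)
        V, w = V0, w0
        trace = np.empty(steps)
        for i in range(steps):
            I_ion = (g_L * (V - V_L) + g_Ca * m_inf(V) * (V - V_Ca)
                     + g_K * w * (V - V_K))
            V += dt * (I_ext - I_ion) / C
            w += dt * phi * (w_inf(V) - w) / tau_w(V)
            trace[i] = V
        return trace

    # Spiking rate: count upward crossings of 0 mV for a given drive current.
    v = simulate(I_ext=100.0)
    print("spikes in 500 ms:", int(np.sum((v[:-1] < 0) & (v[1:] >= 0))))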
Acknowledgements: This research was funded in part by the CNRS INGVERT grant AGREEE.
