Neuromorphic computing, inspired by the brain, promises extreme efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of a neuromorphic system depend heavily on the choice of neuron architecture. Digital neurons (Dig-Ns) are conventionally accurate and efficient at high speed but suffer from the high leakage current of the large number of transistors in a large design. Analog/mixed-signal neurons (MS-Ns), on the other hand, are prone to noise, variability, and mismatch, but can yield extremely low-power designs. In this paper, we analyze, compare, and contrast existing neuron architectures with a proposed MS-N in terms of performance, power, and noise, thereby demonstrating the suitability of the proposed MS-N for extreme energy efficiency (a femtojoule per multiply-and-accumulate or less). The proposed MS-N, implemented in 65-nm CMOS technology, exhibits $>100\times$ better energy efficiency across all frequencies than two traditional Dig-Ns synthesized in the same technology node. We also demonstrate that the inherent error resiliency of a fully connected, or even a convolutional, neural network can tolerate the noise as well as the manufacturing nonidealities of the MS-N up to a certain degree. Notably, a system-level implementation on the CIFAR-10 dataset exhibits a worst-case increase in classification error of 2.1% when the integrated noise power in the bandwidth is $\sim 0.1~\mu\text{V}^2$ and $\pm 3\sigma$ variation and mismatch are introduced in the transistor parameters of the proposed neuron with 8-bit precision.
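As a rough illustration (not the paper's methodology or code), the following sketch shows how additive output noise and ±3σ-style device variation might be injected into an 8-bit quantized multiply-and-accumulate to gauge the kind of classification robustness the abstract refers to; the noise and variation magnitudes and the toy two-class task are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits=8):
    """Uniformly quantize values in [-1, 1] to the given bit width."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(x * levels), -levels, levels) / levels

def noisy_mac(inputs, weights, noise_std=0.0, variation_sigma=0.0):
    """MAC with multiplicative weight variation (mismatch) and additive output noise."""
    w = weights * (1.0 + variation_sigma * rng.standard_normal(weights.shape))
    out = inputs @ quantize(w)
    return out + noise_std * rng.standard_normal(out.shape)

# Toy two-class example: compare ideal vs. noisy neuron decisions.
X = rng.standard_normal((1000, 16))
w_true = quantize(rng.uniform(-1, 1, 16))
labels = (X @ w_true > 0).astype(int)

ideal = (noisy_mac(X, w_true) > 0).astype(int)
noisy = (noisy_mac(X, w_true, noise_std=0.3, variation_sigma=0.1) > 0).astype(int)

print("ideal accuracy:", (ideal == labels).mean())
print("noisy accuracy:", (noisy == labels).mean())
```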