Abstract

Analog computing is a promising approach to improving the silicon efficiency of inference accelerators in extremely resource-constrained environments. Existing analog circuit proposals for neural networks, however, fall short of realizing the full potential of analog computing because they implement linear synapses, leading to circuits that are either area inefficient or vulnerable to process variation. In this paper, we first present a novel nonlinear analog synapse circuit design that is dense and inherently less sensitive to process variation. We then propose an interpolation-based methodology to train nonlinear synapses built with deep-submicrometer transistors. Our analog neural network achieves 29× and 582× improvements in computational density relative to state-of-the-art digital and analog inference accelerators, respectively.
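To illustrate the kind of interpolation-based training the abstract refers to, the sketch below interpolates a tabulated nonlinear synapse transfer curve so it can be differentiated and used in gradient-based weight updates. The sample data, the tanh-shaped placeholder curve, and all function names are illustrative assumptions for this sketch, not the authors' circuit model or training procedure.

```python
import numpy as np

# Hypothetical characterization of one nonlinear synapse: output current (uA)
# sampled over the stored weight voltage (V). Values are illustrative only.
weight_voltages = np.linspace(0.0, 1.0, 17)                              # sample grid (V)
synapse_currents = 2.0 * np.tanh(3.0 * weight_voltages - 1.5) + 2.0      # placeholder nonlinearity (uA)

def synapse_forward(w):
    """Interpolate the tabulated transfer curve at weight voltage(s) w."""
    return np.interp(w, weight_voltages, synapse_currents)

def synapse_grad(w, eps=1e-3):
    """Finite-difference slope of the interpolated curve, used in the backward pass."""
    return (synapse_forward(w + eps) - synapse_forward(w - eps)) / (2 * eps)

# Toy gradient step: adjust weight voltages so one neuron's summed output
# current moves toward a target value, backpropagating through the
# interpolated (nonlinear) synapse response.
rng = np.random.default_rng(0)
w = rng.uniform(0.2, 0.8, size=8)   # weight voltages of one neuron's synapses
target = 12.0                       # desired total output current (uA)
lr = 1e-2

for _ in range(200):
    err = synapse_forward(w).sum() - target
    w -= lr * err * synapse_grad(w)     # chain rule through the interpolated synapse
    w = np.clip(w, 0.0, 1.0)            # stay inside the characterized voltage range

print(f"final current: {synapse_forward(w).sum():.2f} uA (target {target} uA)")
```

In practice the tabulated curve would come from circuit simulation or silicon measurement of the deep-submicrometer synapse, and the interpolated model would stand in for the analytically intractable device behavior during training.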
