Abstract

Interest in machine learning with deep neural network architectures has grown rapidly since the adoption of convolutional layers and GPUs enabled faster training of larger networks. The effectiveness of this combination has been demonstrated across many modalities (speech, video, images, etc.). A natural next step is to develop dedicated hardware architectures that eventually allow online learning to happen seamlessly. In this direction, we propose analog multipliers as a candidate for computing the product of the input and weight in a neural network. Since inputs and weights are naturally continuous quantities, this class of multipliers is a much better fit than a digital multiplier. Furthermore, subthreshold operation is essential: given the scale of deep networks, power consumption would be prohibitively high when many such modules operate in parallel. We have simulated the schematic and verified the accuracy of the multiplier for the given input signal ranges, in both regular and subthreshold operation.
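
For context, the operation each analog multiplier would compute is the per-synapse input-weight product that is then summed into a neuron's pre-activation. The sketch below is a minimal digital reference of that multiply-accumulate step, purely for illustration; the function and variable names are ours, not from the paper.

    # Minimal reference of the multiply-accumulate a neuron performs.
    # Each x[i] * w[i] product is what an analog multiplier would compute
    # with continuous voltages/currents instead of digital arithmetic.
    # (Illustrative only; names and example values are hypothetical.)

    def neuron_preactivation(x, w, bias=0.0):
        """Return the weighted sum sum_i x[i] * w[i] + bias."""
        assert len(x) == len(w)
        return sum(xi * wi for xi, wi in zip(x, w)) + bias

    if __name__ == "__main__":
        x = [0.2, -0.5, 0.8]   # continuous-valued inputs
        w = [0.4, 0.1, -0.3]   # continuous-valued weights
        print(neuron_preactivation(x, w))  # approximately -0.21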
