Abstract

In this article, the accuracy of inverter-based memristive neural networks (NNs) for function approximation applications is improved in the presence of process variations. The improvement is achieved by using a design approach, called INTERSTICE (Inverter-based Memristive Neural Networks Discretization for Function Approximation Applications), which discretizes the output values by employing a classifier. More precisely, in the INTERSTICE approach, the output range is divided into $K$ subranges, where each subrange is considered as a class. To train the classifier, the training samples are labeled, where each label indicates membership in a specific class. To evaluate the efficacy of the design technique, function approximation applications such as Black-Scholes, FFT, $K$-means, and Sobel are considered. Compared to PHAX, a recently published inverter-based memristive NN, INTERSTICE provides lower mean squared error (MSE) values in the presence of memristor and transistor variations. More specifically, the improvements in the mean of MSE ($\mu_{\mathrm{MSE}}$) are in the range of 40%–80% when considering 10% variations in the memristor resistance and transistor parameters. In addition, for most of the benchmarks, INTERSTICE improves the $\mu_{\mathrm{MSE}}$ values of the nominal case (the case where all circuit elements are ideal) compared to PHAX. As another advantage over PHAX, in INTERSTICE, digital outputs can be generated based on the selected classes, which eliminates the need for an analog-to-digital converter at the output port connected to the digital part of the system. Finally, achieving lower $\mu_{\mathrm{MSE}}$ values using fewer memristors and consuming less energy is also attainable with this design approach.
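As a minimal sketch of the discretization step described above, the snippet below bins continuous training targets into $K$ classes and maps predicted class labels back to continuous values. Uniform subranges, bin-midpoint reconstruction, and the value of $K$ are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

K = 16  # number of output subranges (classes); a hypothetical choice


def discretize(y, y_min, y_max, k=K):
    """Map continuous targets y in [y_min, y_max] to class labels 0..k-1."""
    bins = np.linspace(y_min, y_max, k + 1)
    # np.digitize returns 1..k for in-range values; shift to 0..k-1
    # and clip so y == y_max falls into the last class.
    return np.clip(np.digitize(y, bins) - 1, 0, k - 1)


def reconstruct(labels, y_min, y_max, k=K):
    """Map predicted class labels back to continuous values (bin midpoints)."""
    width = (y_max - y_min) / k
    return y_min + (labels + 0.5) * width


# Example: label stand-in targets for classifier training, then measure the
# MSE floor introduced by discretization alone (perfect classification assumed).
y = np.random.uniform(0.0, 1.0, size=1000)
labels = discretize(y, 0.0, 1.0)       # class labels used to train the classifier
y_hat = reconstruct(labels, 0.0, 1.0)  # continuous outputs recovered from classes
print("quantization MSE:", np.mean((y - y_hat) ** 2))
```

This also illustrates the trade-off implied by the approach: a larger $K$ lowers the quantization error floor, while a smaller $K$ gives the classifier fewer, more easily separable classes.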
