Abstract

Training and recognition with neural networks generally require high throughput, high energy efficiency, and scalable circuits to enable artificial intelligence tasks to be operated at the edge, i.e., in battery-powered portable devices and other limited-energy environments. In this scenario, resistive memories have been proposed as artificial synapses thanks to their scalability, reconfigurability, and high energy efficiency, as well as their ability to perform analog computation via physical laws in hardware. In this work, we study the material, device, and architecture aspects of resistive switching memory (RRAM) devices for implementing a 2-layer neural network for pattern recognition. First, various RRAM processes are screened in view of the device window, analog storage, and reliability. Then, synaptic weights are stored with 5-level precision in a 4 kbit array of RRAM devices to classify the Modified National Institute of Standards and Technology (MNIST) dataset. Finally, the classification performance of the 2-layer neural network is tested before and after an annealing experiment by using experimental values of conductance stored in the array, and a simulation-based analysis of inference accuracy for arrays of increasing size is presented. Our work supports the material-based development of RRAM synapses for novel neural networks with high accuracy and low power consumption.

Highlights

  • In recent years, artificial intelligence (AI) has achieved excellent performance in tasks such as machine translation, face recognition, and speech recognition, which are essential applications for big data analysis in cloud computing.

  • The emergence of specially designed computing machines, such as the graphics processing unit (GPU)[2] and the tensor processing unit (TPU),[3] capable of significantly speeding up network training, enabled deep neural networks (DNNs) to outperform the human ability in classifying images[4] or playing Go.[5]

  • The first level, which is called L1, corresponds to the high resistance state (HRS) and was achieved by a programming scheme based on the incremental step pulse with verify algorithm (ISPVA), consisting of the application of sequential reset pulses with amplitude increasing from 0 to 3 V at the source terminal of the cell selectors, with the drain grounded, the gate terminal biased at 2.7 V, and a threshold current Ith = 5 μA [Fig. 8(a)].
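The ISPVA scheme described above can be sketched as a program-and-verify loop: apply a reset pulse, read the cell, and stop once the read current falls below the threshold. The following is a minimal illustrative sketch; the device callbacks and step size are hypothetical stand-ins, not the paper's measured device physics.

```python
def ispva_reset(read_current, reset_pulse, v_start=0.0, v_stop=3.0,
                v_step=0.1, i_th=5e-6):
    """Incremental step pulse with verify (ISPVA), reset direction.

    Applies reset pulses of increasing amplitude (0 to 3 V here, as in
    the programming scheme above) until the verify read current drops
    below the threshold i_th (5 uA), i.e., the cell reaches the HRS.
    Returns the amplitude that succeeded, or None if the sweep ends
    without reaching the target state.
    """
    v = v_start
    while v <= v_stop:
        reset_pulse(v)             # pulse at the source terminal
        if read_current() < i_th:  # verify step
            return v               # HRS reached at this amplitude
        v += v_step
    return None                    # cell failed to program


# Toy device model for demonstration only: read current decreases
# linearly with the largest pulse amplitude seen so far.
state = {"i": 50e-6}

def toy_pulse(v):
    state["i"] = min(state["i"], max(50e-6 - v * 20e-6, 1e-6))

def toy_read():
    return state["i"]
```

The verify step is what gives ISPVA its multilevel precision: the same loop, run with different `i_th` targets, places cells into the distinct conductance levels (L1 to L5) used to store the 5-level synaptic weights.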


Summary

INTRODUCTION

Artificial intelligence (AI) has achieved excellent performance in tasks such as machine translation, face recognition, and speech recognition, which are essential applications for big data analysis in cloud computing. To carry out these machine learning tasks, deep neural networks (DNNs) are massively trained in software by using very large datasets.[1] In particular, the emergence of specially designed computing machines, such as the graphics processing unit (GPU)[2] and the tensor processing unit (TPU),[3] capable of significantly speeding up network training, enabled DNNs to outperform the human ability in classifying images[4] or playing Go.[5] However, the training of DNNs generally requires an extensive amount of time and energy, mostly contributed by the intensive data transfer from the memory to the processing unit, where the feedforward propagation, the backpropagation, and the weight update are executed. Networks of increasing size, with decreasing device variability and an increasing number of conductance levels, were calculated to support the development of inference machines with the best tradeoff between accuracy and power consumption for edge computing applications.
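The inference scheme studied in the paper, a 2-layer network whose weights are stored as 5-level RRAM conductances, can be sketched as follows. This is an illustrative sketch only: the layer sizes, the uniform 5-level quantizer, and the ReLU hidden layer are assumptions for demonstration, not the paper's exact network or conductance mapping.

```python
import numpy as np

def quantize_5_levels(w):
    """Snap each weight to the nearest of 5 evenly spaced levels
    spanning the weight range, mimicking 5-level RRAM storage."""
    levels = np.linspace(w.min(), w.max(), 5)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def infer(x, w1, w2):
    """Feedforward pass through a 2-layer network: a ReLU hidden
    layer followed by a linear output layer; the predicted class is
    the index of the largest output (argmax over output neurons)."""
    h = np.maximum(w1 @ x, 0.0)
    return int(np.argmax(w2 @ h))

# Example with random weights standing in for trained ones.
rng = np.random.default_rng(0)
w1 = quantize_5_levels(rng.normal(size=(16, 784)))  # hidden layer
w2 = quantize_5_levels(rng.normal(size=(10, 16)))   # output layer
pred = infer(rng.normal(size=784), w1, w2)          # class index 0..9
```

In hardware, the matrix-vector products in `infer` are performed physically by the RRAM array (Ohm's law for the multiply, Kirchhoff's law for the sum), which is where the energy advantage over data-shuttling digital architectures comes from.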

HfO2 RRAM DEVICES
ELECTRICAL RRAM CHARACTERISTICS
MULTILEVEL PROGRAMMING OF HfAlO RRAM DEVICES
NEURAL NETWORK FOR INFERENCE DEMONSTRATION
Findings
CONCLUSIONS

