Abstract

In this work, the effect of randomly distributed stuck-at faults (SAFs) in memristive cross-point array (CPA)-based single- and multi-layer perceptrons (SLPs and MLPs, respectively) intended for pattern recognition tasks is investigated by means of realistic SPICE simulations. The quasi-static memdiode model (QMM) is considered here for the modelling of the synaptic weights implemented with memristors. Following the standard memristive approach, the QMM comprises two coupled equations: one for the electron transport, based on the double-diode equation with a single series resistance, and a second for the internal memory state of the device, based on the so-called logistic hysteron. By modifying the state parameter in the current-voltage characteristic, SAFs of different severity are simulated and the final outcome is analysed. Supervised ex-situ training and two well-known image datasets involving hand-written digits and human faces are employed to assess the inference accuracy of the SLP as a function of the faulty device ratio. The roles played by the memristor's electrical parameters, line resistance, mapping strategy, image pixelation, and fault type (stuck-at-ON or stuck-at-OFF) on the CPA performance are statistically analysed following a Monte Carlo approach. Three different re-mapping schemes to help mitigate the effect of the SAFs in the SLP inference phase are thoroughly investigated.
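The fault-injection procedure described above can be illustrated with a minimal sketch: a fraction of the devices in a conductance matrix is forced to the ON or OFF state, and the perturbation is averaged over several random fault maps in a Monte Carlo fashion. The conductance bounds, matrix sizes, and fault ratios below are illustrative assumptions, not the paper's parameter set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conductance bounds for the memristive devices (assumed values).
G_ON, G_OFF = 1e-3, 1e-6  # siemens

def inject_safs(G, fault_ratio, p_on=0.5, rng=rng):
    """Return a copy of conductance matrix G with a fraction `fault_ratio`
    of devices stuck at ON (G_ON) or OFF (G_OFF)."""
    G = G.copy()
    n_faults = int(round(fault_ratio * G.size))
    idx = rng.choice(G.size, size=n_faults, replace=False)
    stuck_on = rng.random(n_faults) < p_on  # which faults are stuck-at-ON
    flat = G.ravel()                        # view: writes affect G
    flat[idx[stuck_on]] = G_ON
    flat[idx[~stuck_on]] = G_OFF
    return G

# Monte Carlo sweep: average conductance deviation over random fault maps
G_ideal = rng.uniform(G_OFF, G_ON, size=(784, 10))  # e.g. 28x28 inputs, 10 classes
for ratio in (0.0, 0.1, 0.2):
    deviations = [np.abs(inject_safs(G_ideal, ratio) - G_ideal).mean()
                  for _ in range(20)]
    print(f"fault ratio {ratio:.1f}: mean |dG| = {np.mean(deviations):.2e} S")
```

In a full study, the mean conductance deviation would be replaced by the SLP inference accuracy obtained from the SPICE-simulated CPA.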

Highlights

  • Artificial neural networks (ANNs) have demonstrated outstanding results in the field of pattern recognition [1]

  • Although a much simpler approach than the more complex resistive random access memory (RRAM)-based ANNs explored in the literature (MLPs [43,44,45], convolutional neural networks (CNNs) [46], spiking neural networks (SNNs) [47], etc.), SLPs still allow us to study and clarify the limitations of ANNs caused by parasitic effects and non-idealities occurring in the synaptic layers implemented with cross-point arrays (CPAs)

  • These were studied within the framework of CPA-based SLP creation, training, and SPICE simulation presented in Supplementary Figure S2, together with a simplified schematic representation of the generated SLP circuit


Introduction

Artificial neural networks (ANNs) have demonstrated outstanding results in the field of pattern recognition [1]. In this particular domain, the matrix-vector multiplication (MVM) method plays a key role, being the most computationally expensive operation during the classification phase. Resistive random access memory (RRAM) or memristor-based cross-point arrays [3,4,5,6] (CPAs, see Figure 1a) have demonstrated enormous potential in boosting the speed and energy efficiency of next-generation computing systems [7]. The CPA structure can be scaled down to 4F2, F being the feature size of the technology node [8], which enables the large-scale integration of memory units.
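The MVM operation performed by an ideal CPA can be sketched in a few lines: read voltages applied to the rows produce column currents that, by Ohm's law and Kirchhoff's current law, compute a full matrix-vector product in a single step. The conductance and voltage values below are illustrative, not taken from the paper.

```python
import numpy as np

# Ideal CPA read-out: voltages V on the rows, currents summed per column.
# Each device contributes one multiply-accumulate: I_j = sum_i G_ij * V_i.
G = np.array([[1e-3, 2e-3],
              [3e-3, 4e-3],
              [5e-3, 6e-3]])   # 3 rows x 2 columns of conductances (S)
V = np.array([0.1, 0.2, 0.3])  # read voltages on the 3 rows (V)

I = G.T @ V  # column currents (A): the matrix-vector product in one step
print(I)     # column 0: 1e-4 + 6e-4 + 1.5e-3 = 2.2e-3 A, etc.
```

In a real array, line resistance and device non-idealities (including the SAFs studied in this work) perturb these currents, which is precisely what the SPICE simulations capture.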
