Abstract

Memristor devices are well suited for use as synapses in neuromorphic systems because they can be integrated into crossbar array circuits with high area efficiency. In a two-dimensional (2D) crossbar array, however, the array area grows with both the depth of the neural network and the number of its input and output nodes, so a 2D crossbar array is not suitable for a deep neural network. On the other hand, synapses built from memristors with a 3D structure are suitable for implementing a neuromorphic chip for a multi-layered neural network. In this study, we propose, for the first time, a new optimization method for machine learning weight updates that considers the structural characteristics of a 3D vertical resistive random-access memory (VRRAM) structure. The newly proposed synapse operating principle of the 3D VRRAM structure can reduce the complexity of the neuron circuit. This study investigates the operating principle of 3D VRRAM synapses with comb-shaped word lines and demonstrates that the proposed 3D VRRAM structure is a promising solution for a high-density neural network hardware system.
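The area argument above can be made concrete with a rough cell-count comparison. The sketch below is illustrative only and not taken from the paper: it assumes each weight matrix between adjacent layers occupies one n_in × n_out crossbar of unit-area cells, so a planar (2D) layout pays for every layer's matrix, while a vertically stacked (3D VRRAM-style) layout's footprint is set by the largest single matrix. The layer sizes are hypothetical.

```python
def crossbar_cells_2d(layer_sizes):
    """Total cells when every weight matrix sits in the 2D plane:
    area grows with network depth (sum over all layer pairs)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))


def footprint_cells_3d(layer_sizes):
    """Planar footprint when weight matrices are stacked vertically:
    only the largest single layer pair determines chip area."""
    return max(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))


sizes = [784, 256, 256, 10]  # hypothetical multi-layer network
print(crossbar_cells_2d(sizes))   # 268800 cells in-plane
print(footprint_cells_3d(sizes))  # 200704-cell footprint
```

Adding more hidden layers increases the 2D total linearly while leaving the 3D footprint unchanged, which is the scaling advantage the abstract describes.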

Highlights

  • In recent years, neuromorphic computing has emerged as a complementary system to the von Neumann architecture

  • 2D crossbar array synapses are not suitable for the implementation of deep neural networks (DNNs) because the chip area depends on both the depth of the neural network and the number of input and output nodes

  • We propose a new optimization method for machine learning weight changes that considers the structural characteristics of 3D vertical resistive random-access memory (VRRAM)

Introduction

Neuromorphic computing has emerged as a complementary system to the von Neumann architecture. Much of the research on neural network hardware implementation concerns how to connect large numbers of neurons and synapses. Various memory devices, such as static random-access memory, resistive random-access memory (RRAM), floating-gate (FG) memory, and phase-change memory, have been implemented as the synapse model in neural network hardware systems [1,2,3,4]. Energy efficiency is a key challenge of neuromorphic computing, and RRAM is attractive for large-scale system demonstration because of its relatively low energy consumption compared with other synaptic devices [5]. 2D crossbar array synapses are not suitable for the implementation of deep neural networks (DNNs) because the chip area depends on both the depth of the neural network and the number of input and output nodes.
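The appeal of crossbar synapse arrays rests on a standard operating principle worth stating explicitly: with input voltages applied to the rows and weights encoded as memristor conductances, each column current is the weighted sum given by Ohm's law and Kirchhoff's current law, i.e. a vector-matrix multiply computed in a single analog step. The sketch below illustrates that generic principle; the conductance and voltage values are assumptions for demonstration, not data from this paper.

```python
import numpy as np

# Weights stored as conductances G[i, j] (siemens): 3 inputs x 2 outputs.
G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 0.5e-6],
              [2.5e-6, 1.5e-6]])

# Input voltages applied to the word lines (volts).
V = np.array([0.1, 0.2, 0.0])

# Kirchhoff summation on each bit line: I_j = sum_i V_i * G[i, j].
I = V @ G
print(I)  # column currents in amperes, one per output node
```

In hardware this multiply-accumulate happens in parallel across all columns, which is why energy per operation can be far lower than fetching weights from separate memory, as in a von Neumann machine.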
