Abstract

In this paper, we propose an on-chip learning method that can overcome the poor characteristics of previously developed practical synaptic devices, thereby increasing the accuracy of a neural network based on a neuromorphic system. The fabricated synaptic devices, based on Pr1−xCaxMnO3, LiCoO2, and TiOx, inherently suffer from undesirable characteristics, such as nonlinearity, discontinuities, and asymmetric conductance responses, which degrade neuromorphic system performance. To address these limitations, we propose a conductance-based linear weighted quantization method that controls conductance changes, and we train a neural network to classify handwritten digits from the standard MNIST database. Furthermore, we quantitatively consider the non-ideal case, ensuring reliability by limiting the conductance to the levels that the synaptic devices can practically realize. With the proposed learning method, we significantly improve neuromorphic system performance without any hardware modifications to the synaptic devices or the neuromorphic system. The results thus show that, even for devices with poor synaptic characteristics, neuromorphic system performance can be improved.
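The core idea of the abstract, restricting weights to a limited set of linearly spaced conductance levels that a real device can realize, can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm; the conductance window (`g_min`, `g_max`) and the number of levels are assumed values for demonstration.

```python
import numpy as np

def quantize_to_levels(weights, g_min=0.1, g_max=1.0, n_levels=32):
    """Map continuous weights onto n_levels linearly spaced conductance
    states, mimicking a synaptic device with a limited number of
    reliably programmable levels (illustrative parameters)."""
    # Conductance states the hypothetical device can realize
    levels = np.linspace(g_min, g_max, n_levels)
    # Scale weights linearly into the device's conductance window
    w_min, w_max = weights.min(), weights.max()
    span = max(w_max - w_min, 1e-12)  # guard against a constant array
    scaled = g_min + (weights - w_min) * (g_max - g_min) / span
    # Snap each scaled weight to its nearest realizable level
    idx = np.abs(scaled[..., None] - levels).argmin(axis=-1)
    return levels[idx]
```

In such a scheme, the number of levels would be chosen to match what the fabricated devices can practically accept, which is the constraint the paper emphasizes.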

Highlights

  • Deep learning has been implemented on systems using central processing units (CPUs) and graphics processing units (GPUs), and has performed successfully in most fields that utilize an artificial neural network (ANN) [1,2]

  • Previously proposed learning techniques, such as modified activation functions and threshold weight-update schemes, take a different perspective from our method. Considering these challenges, we propose a new method that can improve the performance of the entire neuromorphic system by quantizing the conductance used for learning in a synapse-based neural network (NN)

  • Better learning and inference performance could be achieved by combining structures and techniques such as convolutional neurons, which extract spatial features more efficiently than fully connected (FC) neurons; batch normalization, which stabilizes learning by controlling the distribution of activations in each layer; dropout, which randomly removes neurons; and weight decay
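Two of the regularization techniques named in the last highlight, weight decay and dropout, can be sketched in a few lines. This is a generic illustration of the standard techniques, not code from the paper; the learning rate, decay coefficient, and dropout probability are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_step(w, grad, lr=0.1, weight_decay=1e-2):
    """SGD update with L2 weight decay: the decay term pulls every
    weight slightly toward zero on each step."""
    return w - lr * (grad + weight_decay * w)

def dropout(x, p=0.5, train=True):
    """Inverted dropout: randomly zero activations during training and
    rescale the survivors by 1/(1-p); pass through unchanged at inference."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1 - p)
```

On hardware, such techniques would still be constrained by the devices' limited conductance levels, which is why the paper's quantization-aware approach matters.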



Introduction

Deep learning has been implemented on systems using central processing units (CPUs) and graphics processing units (GPUs), and has performed successfully in most fields that utilize an artificial neural network (ANN) [1,2]. Neuromorphic systems have recently attracted attention as an alternative to the von Neumann architecture [5,6,7]. Various types of synaptic devices have been researched for implementing these neuromorphic systems [8,9,10,11,12], and in-memory computing based on ReRAM has been studied after analyzing the limitations of the von Neumann structure [13]. Improving the undesirable synaptic characteristics remains a significant challenge, as nonlinear, discontinuous, and asymmetric conductance changes during potentiation and depression cause critical failures in on-chip learning performance [15,16]
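The non-idealities described above (nonlinear, saturating, asymmetric conductance updates) are often captured with a simple behavioral model in which the conductance change per programming pulse shrinks exponentially as the device approaches its limits. The sketch below assumes such a model with illustrative parameters; it is not a fit to the Pr1−xCaxMnO3, LiCoO2, or TiOx devices in the paper.

```python
import numpy as np

def pulse_update(g, potentiate, g_min=0.1, g_max=1.0, alpha=0.05, beta=3.0):
    """One pulse on a hypothetical non-ideal synapse.

    The step size decays exponentially as g approaches the relevant
    bound, so early pulses move the weight far more than late ones,
    and potentiation/depression need not mirror each other.
    """
    norm = (g - g_min) / (g_max - g_min)  # position within the window
    if potentiate:
        dg = alpha * np.exp(-beta * norm)          # shrinks near g_max
        return min(g + dg, g_max)
    dg = alpha * np.exp(-beta * (1.0 - norm))      # shrinks near g_min
    return max(g - dg, g_min)
```

Running repeated potentiation pulses through this model reproduces the saturating, nonlinear response that makes naive on-chip weight updates unreliable.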


