Abstract

Memristor devices are considered promising candidates for implementing unsupervised learning, especially spike-timing-dependent plasticity (STDP), in neuromorphic hardware research. In this study, a neuromorphic hardware system for multilayer unsupervised learning was designed, and unsupervised learning was performed with a memristor neural network. We show that a neural network built from nonlinear memristors can be trained in an unsupervised manner using only the correlations between inputs and outputs. Moreover, a method to train nonlinear memristor devices in a supervised manner, named guide training, was devised. Because memristor devices are nonlinear, implementing conventional machine learning algorithms, such as backpropagation, on them is difficult. The guide-training algorithm devised in this paper updates the synaptic weights using only the correlations between inputs and outputs, so neither complex mathematical formulas nor heavy computations are required during training; it is therefore well suited to training a nonlinear memristor neural network. All training and inference simulations were performed using the designed neuromorphic hardware system. With this system and the memristor neural network, image classification was successfully performed using both the Hebbian unsupervised training method and the guide supervised training method.
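The abstract's key idea is that weights are updated from input–output correlations alone, with no gradient computation. A minimal sketch of such a correlation-based (Hebbian, winner-take-all) update is shown below; the array sizes, learning rate, and conductance bounds are illustrative assumptions, not values from the paper, and the dense matrix stands in for a memristor crossbar's conductances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16-pixel binary inputs, 4 output neurons.
n_in, n_out = 16, 4
# Random initial conductances standing in for nonlinear memristor weights.
W = rng.uniform(0.1, 0.9, size=(n_out, n_in))

def hebbian_step(W, x, lr=0.05, g_min=0.0, g_max=1.0):
    """One correlation-based update: the output neuron with the largest
    response "wins"; its weights are potentiated on active inputs and
    mildly depressed on inactive ones, then clipped to the assumed
    conductance range of the device."""
    y = W @ x                          # crossbar read: output currents
    winner = np.argmax(y)              # winner-take-all output spike
    W[winner] += lr * x                # potentiate correlated synapses
    W[winner] -= lr * 0.5 * (1.0 - x)  # depress uncorrelated synapses
    return np.clip(W, g_min, g_max)    # respect device conductance limits

# One training step on a random binary input pattern.
x = (rng.random(n_in) > 0.5).astype(float)
W = hebbian_step(W, x)
```

Note that the update rule touches only `W[winner]` using the input pattern itself, which is why no derivative of the device's nonlinear conductance curve is needed.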

Highlights

  • Neuromorphic hardware research has begun to develop new computing architectures [1–6]

  • In a real on-chip simulation, training has to be conducted with random, nonlinear memristor arrays

  • Unsupervised learning with the Hebbian training method was performed using the proposed neuromorphic hardware system with a nonlinear random memristor artificial neural network (ANN), and it successfully classified images


Introduction

Neuromorphic hardware research has developed along two main lines [1–6]. One focuses on reproducing the exact biological phenomena that occur in the brain [3,6–10], while the other focuses on developing a new computing device typically known as a neuromorphic chip. Neuromorphic hardware is especially efficient in terms of size and power consumption compared with typical Von Neumann computing devices. The main difference between neuromorphic hardware and Von Neumann computers is the memory structure. In the brain, the neural cell topology is determined by the connections between neurons (i.e., synaptic connectivity). This means that a biological neural network contains the memory device and the computing unit at the same time, whereas they are separated in a typical Von Neumann computer. This separation between memory and computation creates a power bottleneck that appears in extreme form in recent data-intensive artificial intelligence (AI) applications.

