Abstract

This paper proposes a unique hardware architecture for a self-organizing map (SOM) that mimics the biological brain by using pulse-mode operation. In the proposed SOM, vector elements are given in the form of frequency-modulated signals, and digital frequency-locked loops (DFLLs) in the neurons handle the computations on the vector elements. The SOM is trained by unsupervised learning, in which the winner neuron, the one whose weight vector is nearest to the input vector, is found first. In the proposed SOM, the winner neuron is found by counting cycle slips between the signals that carry the input and weight vectors. After the winner neuron is found, the weight vectors selected by a neighborhood function are updated toward the input vector. A triangular neighborhood function, implemented with an attenuating enable signal for the DFLLs, is employed. To evaluate the proposed SOM and its building components, VHDL simulations and experiments on an FPGA were conducted. Compared with the previous work, the operation speed and learning capability were significantly improved. The novelty of the proposed architecture is its pulse-based operation that mimics the biological brain, and it was verified that unsupervised learning can be realized with neurons that communicate with each other using frequency-modulated pulse signals.
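The following Python sketch is a behavioral software model of one learning step as described above, under the assumption that each vector element is represented by the frequency of a pulse train: the winner is the neuron whose weight frequencies accumulate the fewest cycle slips against the input, and a triangular neighborhood attenuates the weight update with grid distance from the winner. The names cycle_slips and train_step, and all parameter values, are illustrative assumptions rather than the paper's hardware or notation, and the DFLL dynamics themselves are not modeled.

    import numpy as np

    def cycle_slips(f_in, f_w, window=1.0):
        # Approximate number of cycle slips between two pulse trains of
        # frequencies f_in and f_w observed for `window` seconds; the count
        # grows with the frequency difference, i.e., with the element-wise
        # distance between input and weight.
        return int(round(abs(f_in - f_w) * window))

    def train_step(weights, x, lr=0.1, radius=3.0, window=1.0):
        # weights: float array of shape (rows, cols, dim); x: shape (dim,).
        rows, cols, dim = weights.shape
        # Winner search: the neuron with the fewest accumulated cycle slips.
        slips = np.array([[sum(cycle_slips(x[k], weights[i, j, k], window)
                               for k in range(dim))
                           for j in range(cols)]
                          for i in range(rows)])
        wi, wj = np.unravel_index(np.argmin(slips), slips.shape)
        # Triangular neighborhood: influence falls off linearly with grid
        # distance from the winner and is zero outside `radius`, mimicking
        # the attenuating enable signal for the DFLLs.
        for i in range(rows):
            for j in range(cols):
                d = max(abs(i - wi), abs(j - wj))
                h = max(0.0, 1.0 - d / radius)
                weights[i, j] += lr * h * (x - weights[i, j])
        return wi, wj

    # Example: train a 16 x 16 map (the size used in the paper's VHDL model)
    # on random 3-D inputs.
    rng = np.random.default_rng(0)
    W = rng.random((16, 16, 3))
    for _ in range(1000):
        train_step(W, rng.random(3))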

Highlights

  • The self-organizing map (SOM) that was proposed by Kohonen [1] is a special type of artificial neural network (ANN)

  • This paper proposes a new digital frequency-locked loops (DFLLs)-based hardware SOM architecture that mimics the biological brain by using pulse mode operation

  • The building components discussed in the previous section and a hardware SOM containing 16 × 16 neurons were described in VHDL

Summary

Introduction

The self-organizing map (SOM), proposed by Kohonen [1], is a special type of artificial neural network (ANN). The SOM, which is trained by an unsupervised learning algorithm, performs a nonlinear mapping from a given high-dimensional input vector space to a lower-dimensional map of neurons, and it has been used to visualize, interpret, and classify large high-dimensional data sets. Substantial parallelism is found in the algorithms of ANNs, including the SOM, so a parallel hardware architecture is a suitable platform for implementing them. Many researchers have been developing VLSI implementations of neural networks using various technologies.
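For context, the classical Kohonen learning rule that a SOM implements can be written as follows (standard textbook notation; the symbols are not necessarily those used in this paper):

    c = \arg\min_{j} \lVert \mathbf{x}(t) - \mathbf{w}_{j}(t) \rVert,
    \qquad
    \mathbf{w}_{j}(t+1) = \mathbf{w}_{j}(t) + \alpha(t)\, h_{cj}(t) \bigl[ \mathbf{x}(t) - \mathbf{w}_{j}(t) \bigr],

where c indexes the winner (best-matching) neuron, \alpha(t) is the learning rate, and h_{cj}(t) is the neighborhood function centered on the winner. In the proposed architecture, the distance search is performed by counting cycle slips between pulse signals, and h_{cj} corresponds to the triangular neighborhood realized with the attenuating enable signal.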
