Abstract

Oscillatory Neural Networks (ONNs) are currently arousing interest in the research community for their potential to implement very fast, ultra-low-power computing tasks by exploiting specific emerging technologies. From the architectural point of view, ONNs are based on the synchronization of oscillatory neurons in cognitive processing, as occurs in the human brain. Among emerging technologies, VO2 and memristive devices show promising potential for the efficient implementation of ONNs. Abundant literature is now becoming available on the study and construction of ONNs based on VO2 devices and resistive coupling elements, such as memristors. One drawback of direct resistive coupling is that physical resistances cannot be negative, even though negative interconnection weights would be a powerful advantage from the architectural and computational perspective. Here we solve this problem by proposing a hardware implementation technique based on differential oscillatory neurons for ONNs (DONNs), with VO2-based oscillators and memristor-bridge circuits. Each differential oscillatory neuron is made of a pair of VO2 oscillators operating in anti-phase, so that each neuron provides a pair of differential output signals in opposite phase. The memristor-bridge circuit is used as an adjustable coupling function that is compatible with differential structures and capable of providing both positive and negative weights. By combining differential oscillatory neurons and memristor-bridge circuits, we propose the hardware implementation of a fully connected differential ONN (DONN) and use it as an associative memory. The standard Hebbian rule is used for training, and the weights are then mapped to the memristor-bridge circuits through a proposed mapping rule. The paper also introduces functional and hardware specifications to evaluate the design. Evaluation is performed by circuit-level electrical simulations and shows that the retrieval accuracy of the proposed design is comparable to that of classic Hopfield Neural Networks.
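
To make the training and weight-mapping steps concrete, the minimal Python sketch below stores bipolar patterns with the standard Hebbian rule, retrieves them with classic Hopfield (sign-threshold) dynamics, and then maps the resulting signed weights onto pairs of memristor conductances in the spirit of a bridge coupling element. The function names, the conductance range, and the linear mapping are illustrative assumptions; the paper's actual mapping rule and device parameters are defined in the full text.

```python
import numpy as np

def hebbian_weights(patterns):
    """Standard Hebbian rule: W = (1/N) * sum_k x_k x_k^T with zero diagonal.
    `patterns` has shape (P, N) with entries in {-1, +1}."""
    _, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=10):
    """Synchronous sign-threshold retrieval, as in a classic Hopfield network."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                     # break ties toward +1
    return s.astype(int)

def map_to_bridge(W, g_min=1e-6, g_max=1e-4):
    """Hypothetical linear mapping of signed weights onto two memristor
    conductances per connection (G_plus, G_minus), so that the bridge
    output is proportional to G_plus - G_minus.  The conductance range
    and the mapping itself are illustrative assumptions, not the
    paper's proposed rule."""
    w_max = max(np.max(np.abs(W)), 1e-12)   # avoid division by zero
    g_mid = 0.5 * (g_min + g_max)
    half_span = 0.5 * (g_max - g_min)
    g_plus = g_mid + half_span * W / w_max
    g_minus = g_mid - half_span * W / w_max
    return g_plus, g_minus

if __name__ == "__main__":
    patterns = np.array([[+1, -1, +1, -1, +1, -1, +1, -1],
                         [+1, +1, -1, -1, +1, +1, -1, -1]])
    W = hebbian_weights(patterns)
    probe = patterns[0].copy()
    probe[0] *= -1                          # corrupt one element
    print(recall(W, probe))                 # recovers patterns[0]
    g_plus, g_minus = map_to_bridge(W)
```

In this sketch a positive weight corresponds to G_plus > G_minus and a negative weight to G_plus < G_minus, which is the property a memristor-bridge coupling element exploits to realize both weight signs with non-negative physical resistances.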

Highlights

  • Brain efficiency in cognitive processing relies on an architecture made up of distributed processors (neurons) and memories (synapses)

  • The diffusive coupling function is implemented by connecting the oscillatory neuron outputs via resistors, creating what is known as resistively coupled oscillatory neurons (Corti et al., 2020a)



Introduction

Brain efficiency in cognitive processing relies on an architecture made up of distributed processors (neurons) and memories (synapses). The devices employed play a significant role in power and area efficiency. In this regard, emerging low-power and compact devices constitute alternative resources for developing efficient cognitive processors. In classic Artificial Neural Networks (ANNs), neurons are the simplest computing units: they accumulate weighted input values and then apply an activation function to obtain the output value. In these neural networks, data representation is based on binary or real numbers. In ONNs, by contrast, computation relies on the phase dynamics of coupled oscillatory neurons, and a coupling function determines how one oscillator affects another.
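
As a point of reference, the minimal Python sketch below contrasts the two computing styles mentioned above: a classic ANN neuron that sums weighted inputs and applies an activation function, and a Kuramoto-style diffusive phase-coupling term in which the influence of oscillator j on oscillator i depends on their phase difference and on a signed coupling weight. The Kuramoto form is an illustrative choice for the coupling function, not necessarily the one used in the paper; negative entries in the coupling matrix stand in for the anti-phase (negative-weight) coupling that the memristor bridge makes physically realizable.

```python
import numpy as np

def ann_neuron(inputs, weights, bias=0.0, activation=np.tanh):
    """Classic ANN neuron: weighted sum of the inputs followed by an
    activation function (tanh here, chosen only for illustration)."""
    return activation(np.dot(weights, inputs) + bias)

def phase_coupling(phases, K):
    """Kuramoto-style diffusive coupling term for each oscillator:
    d(theta_i)/dt += sum_j K[i, j] * sin(theta_j - theta_i).
    A positive K[i, j] pulls oscillators i and j toward synchrony
    (in-phase); a negative K[i, j] pushes them toward anti-phase."""
    dtheta = phases[None, :] - phases[:, None]   # dtheta[i, j] = theta_j - theta_i
    return np.sum(K * np.sin(dtheta), axis=1)

# Example: two oscillators with negative mutual coupling drift toward anti-phase.
theta = np.array([0.0, 0.1])
K = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
dtheta_dt = phase_coupling(theta, K)             # pushes the two phases apart
```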
