An oscillatory neural network (ONN) is an emerging neuromorphic architecture in which oscillators implement neurons and are coupled through synapses. ONNs exhibit rich dynamics and associative properties that can be exploited to solve problems in the analog domain, following the "let physics compute" paradigm. For example, compact VO2-based oscillators are good candidates for building low-power ONN architectures dedicated to edge AI applications such as pattern recognition. However, little is known about ONN scalability and performance when implemented in hardware. Before deploying an ONN, it is necessary to assess its computation time, energy consumption, performance, and accuracy for a given application. Here, we consider a VO2 oscillator as the ONN building block and perform circuit-level simulations to evaluate ONN performance at the architecture level. In particular, we investigate how ONN computation time, energy, and memory capacity scale with the number of oscillators. We find that ONN energy grows linearly with network size, making the architecture suitable for large-scale integration at the edge. Furthermore, we investigate the design knobs for minimizing ONN energy. Assisted by technology computer-aided design (TCAD) simulations, we report on scaling down the dimensions of VO2 devices in a crossbar (CB) geometry to reduce the oscillator voltage and energy. We benchmark the ONN against state-of-the-art architectures and observe that the ONN paradigm is a competitive, energy-efficient solution for scaled VO2 devices oscillating above 100 MHz. Finally, we show how an ONN can efficiently detect edges in images captured by low-power edge devices and compare the results with Sobel and Canny edge detectors.
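To illustrate the associative behavior mentioned above, the following is a minimal phase-domain sketch of coupled oscillators (a Kuramoto-style model with Hebbian-like coupling). It is not the circuit-level VO2 model used in this work; the function name, coupling rule, and parameters are illustrative assumptions only.

    # Illustrative sketch (assumption, not the authors' circuit-level VO2 model):
    # each oscillator i follows Kuramoto-style phase dynamics
    #   d(phi_i)/dt = sum_j J_ij * sin(phi_j - phi_i)
    # and a stored binary pattern is retrieved as in-phase / anti-phase locking.
    import numpy as np

    def onn_retrieve(pattern, probe, steps=2000, dt=0.01, coupling=1.0):
        """Relax oscillator phases from a noisy probe toward a stored +/-1 pattern."""
        n = len(pattern)
        # Hebbian-like coupling that stores the pattern (illustrative choice)
        J = coupling * np.outer(pattern, pattern) / n
        np.fill_diagonal(J, 0.0)
        # Encode the probe as initial phases: +1 -> 0 rad, -1 -> pi rad, plus small noise
        phi = np.where(np.asarray(probe) > 0, 0.0, np.pi) + 0.1 * np.random.randn(n)
        for _ in range(steps):
            phi = phi + dt * (J * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
        # Read out each oscillator's phase relative to oscillator 0
        return np.where(np.cos(phi - phi[0]) > 0, 1, -1)

    stored = np.array([1, -1, 1, 1, -1, -1, 1, -1])
    noisy = stored.copy()
    noisy[2] *= -1                     # corrupt one element of the probe
    print(onn_retrieve(stored, noisy))  # should relax back to `stored`

In this toy setting, the corrupted element is pulled back into phase with the stored pattern, which is the associative-memory behavior that the hardware ONN realizes through physical oscillator coupling rather than numerical integration.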