Currently, the range of tasks solved by neural networks is expanding mainly through increasing structural complexity, i.e., growth in the number of neurons and synapses. Networks with thousands of neurons and tens to hundreds of thousands of synapses achieve impressive results in speech processing, image identification, computer vision, and related areas, but their use and training require substantial computing power and energy. This is unacceptable for embedded autonomous systems, which, as current trends indicate, will constitute one of the most important applications of neural networks on a chip and must perform their tasks with a minimum number of elements and minimal energy consumption. In this article, a method is developed for increasing the efficiency of hardware implementation and minimizing the number of electronic synaptic elements of asynchronous spiking neural networks (ASNN) applied to image identification problems. To achieve this goal, a theoretical analysis of ASNN architectural solutions was carried out; the number of electronic synaptic elements was minimized by decomposing the image identification problem; an ASNN software model was developed; and the neuron parameters were optimized and the neural network was trained (the weights of the electronic synaptic elements were set) on the software model. An ASNN electrical circuit was designed and SPICE simulation results were obtained. The asynchronous spiking neural network was then implemented in hardware using serial (off-the-shelf) electronic components. The effectiveness of the proposed method was demonstrated during the optimization of neuron parameters and the training of the neural network on the developed software model, and confirmed by the SPICE simulation results for the designed ASNN electrical circuit and by measurements of the signals of the neural network implemented on serial electronic components.
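The abstract does not specify the neuron dynamics used in the ASNN software model, so the following is only a minimal illustrative sketch of how such a model is commonly built, assuming a leaky integrate-and-fire neuron; the function name `simulate_lif` and all parameters (`tau_m`, `v_thresh`, `v_reset`, `dt`, the example weights) are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a software model of one spiking neuron, assuming a
# leaky integrate-and-fire (LIF) dynamic. The article does not state the
# neuron model or parameters; everything below is illustrative.
import numpy as np

def simulate_lif(input_spikes, weights, tau_m=20e-3, v_thresh=1.0,
                 v_reset=0.0, dt=1e-3):
    """Integrate weighted input spike trains and emit output spikes.

    input_spikes: (n_steps, n_synapses) binary array of presynaptic spikes.
    weights:      (n_synapses,) synaptic weights (set during training).
    Returns a (n_steps,) binary array of output spikes.
    """
    n_steps = input_spikes.shape[0]
    v = v_reset
    out = np.zeros(n_steps, dtype=int)
    for t in range(n_steps):
        # Leak the membrane potential toward rest, then add the weighted
        # synaptic input arriving at this time step.
        v += dt / tau_m * (v_reset - v) + float(input_spikes[t] @ weights)
        if v >= v_thresh:          # threshold crossing -> output spike
            out[t] = 1
            v = v_reset            # reset the membrane potential
    return out

# Example: three synapses driven by random spike trains.
rng = np.random.default_rng(0)
spikes = (rng.random((200, 3)) < 0.05).astype(int)
w = np.array([0.4, 0.3, 0.5])
print(simulate_lif(spikes, w).sum(), "output spikes")
```

In a workflow like the one described, such a model would be used to tune neuron parameters and to find synaptic weights that are then transferred to the electronic synaptic elements before circuit-level (SPICE) verification.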