Abstract

In neuroscience, neurons compete for the brain's attention through winner-take-all (WTA) competition. Inspired by this, we propose a new WTA-based attention network (ANW) that can be attached to general neural networks. ANW simulates WTA behavior in biological neurons: the winning neuron becomes excited and emits its own spike signal, and it also outputs an inhibition signal that is fed into the synapses of the other neurons. ANW treats the weight updates produced by backpropagation as stimulus input to the neurons; combined with the WTA function, the network continues to update until it reaches a stable state. In this way, ANW can locate the most active neurons in the network and select winners among them, improving the network's ability to extract features. Because ANW is a small, universal module, it can be seamlessly inserted into any neural network architecture with negligible overhead and trained jointly with a base CNN. We evaluated networks equipped with ANW on the ImageNet-100 and CIFAR-100 classification datasets and the MS COCO detection dataset. Compared with the baseline network, ANW with a classical ResNet-50 backbone reduces top-1 error by 0.63% and 3.85% on the two classification datasets, respectively, demonstrating that the method improves model performance in image classification and object detection with wide applicability. The proposed method provides an effective, almost cost-neutral solution for performance improvement in computer-vision object detection.
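To make the excitation/inhibition idea concrete, the following is a minimal sketch (not the authors' implementation) of channel-wise WTA gating: the most active channel "wins" and keeps its response, while every other channel is suppressed by a hypothetical inhibition factor. The function name, the use of mean activation as the competition score, and the fixed inhibition strength are all illustrative assumptions.

```python
import numpy as np

def wta_attention(features, inhibition=0.5):
    """Hypothetical winner-take-all gating over a (C, H, W) feature map.

    The channel with the largest mean activation wins the competition:
    it passes through unchanged (excitation), while all other channels
    are scaled down by `inhibition` (the inhibition signal).
    """
    scores = features.mean(axis=(1, 2))        # per-channel activity score
    winner = int(np.argmax(scores))            # index of the winning channel
    gate = np.full(features.shape[0], inhibition)
    gate[winner] = 1.0                         # winner stays fully excited
    return features * gate[:, None, None]      # broadcast gate over H, W
```

Such a gate adds only a reduction and an elementwise multiply per feature map, which is consistent with the negligible overhead claimed for the module.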
