Abstract

Channel attention is widely used in computer vision. Most existing channel attention networks build on Squeeze-and-Excitation Networks (SE-Net) and achieve strong performance by designing complex structures; however, they also introduce additional network parameters and higher floating-point operations (FLOPs). We propose a novel lightweight attention structure called Dual Channel Attention Networks (DCA-Net). By introducing a channel attention preprocessing module and using 1-D convolution with kernel size K=1, DCA-Net has a simpler and more delicate structure. We add DCA-Net to ResNet and perform image classification experiments on the CIFAR-100 dataset, and object detection and instance segmentation experiments on the MS-COCO dataset. Experimental results show that DCA-Net achieves better results than existing attention networks such as SE-Net and ECA-Net. For example, on CIFAR-100 image classification, DCA-Net with ResNet-50 uses 50.03% fewer parameters than SE-Net with ResNet-101 and 44.48% fewer than ECA-Net with ResNet-101, while its GFLOPs decrease by 48.41% and 48.21%, respectively. At the same time, the Top-1 accuracy of DCA-Net with ResNet-50 is 0.3% higher than that of SE-Net with ResNet-101 and 0.86% higher than that of ECA-Net with ResNet-101.
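To make the mechanism concrete: a 1-D convolution with kernel size K=1 applied to the globally pooled channel vector reduces to a single shared affine map per channel, followed by a sigmoid gate. The sketch below is a minimal, hypothetical NumPy illustration of this ECA-style channel-gating idea; the function name, the `weight`/`bias` parameters, and the omission of the preprocessing module are assumptions, not the paper's actual DCA-Net implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_k1(x, weight=1.0, bias=0.0):
    """Sketch of channel attention using a 1-D conv with K=1.

    x: feature map of shape (C, H, W).
    With kernel size 1, the conv over the pooled channel vector is just
    a shared scalar affine map, so no cross-channel mixing occurs.
    """
    pooled = x.mean(axis=(1, 2))             # global average pooling -> (C,)
    gate = sigmoid(weight * pooled + bias)   # K=1 conv == per-channel affine + sigmoid
    return x * gate[:, None, None]           # rescale each channel by its gate

# Usage: gate a toy 4-channel feature map.
x = np.ones((4, 2, 2))
y = channel_attention_k1(x)
```

Because K=1 adds only a couple of scalar parameters per attention module, this kind of gate keeps the parameter and FLOPs overhead negligible compared with the fully connected squeeze-and-excitation bottleneck.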
