Abstract

Increasing the representational power of convolutional neural networks (CNNs), a popular family of deep learning models, has recently become an active research topic. Channel attention is a common strategy in this regard: a module placed after the convolution operation exploits the relationships between feature channels. Several successful channel attention modules have recently been proposed in this context. In this article, a performance analysis of three popular channel attention structures, namely Squeeze-and-Excitation Networks (SENet), Efficient Channel Attention Networks (ECA-Net), and the Convolutional Block Attention Module (CBAM), is performed on five different image datasets for the classification task. According to the obtained results, SENet is the most successful channel attention module, surpassing the others in the majority of the experiments. In experiments with the ResNet18 and ResNet34 base models, the SENet module achieved the highest performance on three of the five datasets. With the ResNet50 baseline, SENet was the channel attention module with the highest accuracy on all datasets.
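For readers unfamiliar with the mechanism, the following is a minimal sketch of a Squeeze-and-Excitation block in PyTorch. It is not the exact implementation evaluated in the article; the reduction ratio r = 16 is an assumption taken from the original SENet paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal Squeeze-and-Excitation block (sketch; r=16 assumed
    from the original SENet paper, not from this article)."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),  # restore channel dimension
            nn.Sigmoid(),                        # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Squeeze: global average pooling summarizes each channel.
        s = x.mean(dim=(2, 3))
        # Excitation: learn per-channel weights from the summary.
        w = self.fc(s).view(b, c, 1, 1)
        # Recalibrate the feature map channel-wise.
        return x * w

# Example: recalibrate a ResNet-style feature map with 64 channels.
se = SEBlock(64)
y = se(torch.randn(8, 64, 56, 56))  # output shape preserved: (8, 64, 56, 56)
```

Placed after a convolution, such a block lets the network emphasize informative channels and suppress less useful ones, which is the shared idea behind SENet, ECA-Net, and the channel branch of CBAM.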
