Abstract

The Vision Transformer (ViT) achieves outstanding performance on visual tasks thanks to its ability to model image information globally. However, during the self-attention computation over image tokens, the attention maps tend to homogenize; as they propagate through the feature maps layer by layer, this homogenization degrades the model's final performance. In this work, we propose a token-based approach that adjusts the output of the attention sub-layer along the feature dimensions to counteract the homogenization problem. Furthermore, different network architectures model image features differently: Vision Transformers excel at capturing long-range relationships, whereas convolutional neural networks have local receptive fields. We therefore introduce a plug-and-play component built on convolutional operators and integrate it into the Vision Transformer to assess how this structural enhancement affects model performance. Experimental results on image recognition and adversarial attack tasks demonstrate, respectively, the effectiveness and robustness of the two proposed methods. In addition, an information-entropy analysis of the model's final-layer feature maps shows that the improved model carries richer information, which benefits the classifier's discriminative ability.
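
The abstract does not give the exact formulation of the token-based adjustment, so the following is only a minimal sketch of one plausible reading: a learnable per-feature-dimension scale and shift applied token-wise to the attention sub-layer's output inside a standard ViT block. All names here (`FeatureDimAdjust`, `Block`) are illustrative, not the authors' API.

```python
# Sketch: per-feature-dimension adjustment of the attention sub-layer output.
# Assumption: "adjusting along the feature dimensions" means a learnable
# scale/shift over the embedding dimension, shared across all tokens.
import torch
import torch.nn as nn

class FeatureDimAdjust(nn.Module):
    """Learnable per-dimension rescaling of token features (hypothetical)."""
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))   # one scale per feature dim
        self.shift = nn.Parameter(torch.zeros(dim))  # one shift per feature dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim); broadcasts over batch and tokens
        return x * self.scale + self.shift

class Block(nn.Module):
    """Pre-norm ViT block with the adjustment on the attention branch."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.adjust = FeatureDimAdjust(dim)  # inserted after self-attention
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.adjust(attn_out)        # adjusted residual branch
        return x + self.mlp(self.norm2(x))
```

The intuition is that re-weighting individual feature dimensions gives each layer a cheap way to differentiate its attention output from the preceding layers, pushing back against homogenization as maps propagate depth-wise.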

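The plug-and-play convolutional component is likewise unspecified beyond "convolutional operator-based", so the sketch below assumes one common design: reshape the patch tokens to a 2-D grid, apply a depthwise convolution for a local receptive field, and flatten back to a token sequence. The name `ConvBranch` and the fusion point are assumptions; the sketch also assumes no class token (or that it is handled separately).

```python
# Sketch: plug-and-play local-feature branch for a ViT (hypothetical form).
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Depthwise-conv branch over the patch-token grid (illustrative)."""
    def __init__(self, dim: int, grid: int, kernel: int = 3):
        super().__init__()
        self.grid = grid  # num_patches must equal grid * grid
        self.dwconv = nn.Conv2d(dim, dim, kernel, padding=kernel // 2, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) -> (batch, dim, grid, grid)
        b, n, d = x.shape
        feat = x.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        feat = self.dwconv(feat)                # local receptive field
        return feat.flatten(2).transpose(1, 2)  # back to token sequence

# Usage: fuse the conv branch with the attention branch inside a block,
# e.g. x = x + attn_out + conv_branch(x), so global (attention) and local
# (convolution) modeling are combined without changing the rest of the ViT.
```
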
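Finally, the abstract does not state how information entropy is measured on the final-layer feature maps; one common recipe, sketched here under that assumption, is to normalize each sample's features into a probability distribution and compute Shannon entropy, where higher entropy indicates richer (less collapsed) representations.

```python
# Sketch: Shannon entropy of final-layer token features (one common recipe;
# the exact normalization used by the authors is not specified).
import torch

def feature_entropy(feat: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """feat: (batch, num_tokens, dim) -> entropy per sample, shape (batch,)."""
    p = torch.softmax(feat.flatten(1), dim=-1)      # normalize to a distribution
    return -(p * torch.log(p + eps)).sum(dim=-1)    # H = -sum p * log p
```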