Abstract

Vision Transformers have achieved outstanding performance on visual tasks owing to their capability for global modeling of image information. However, during the self-attention computation over image tokens, attention maps tend to homogenize, and as these maps propagate through the feature maps layer by layer, this homogenization degrades the model's final performance. In this work, we propose a token-based approach that adjusts the output of the attention sub-layer along the feature dimensions to address the homogenization problem. Furthermore, different network architectures model image features in different ways: Vision Transformers excel at capturing long-range dependencies, whereas convolutional neural networks possess local receptive fields. This paper therefore introduces a plug-and-play component based on convolutional operators, integrated into the Vision Transformer, to validate the impact of this structural enhancement on model performance. Experimental results on image recognition and adversarial attack tasks demonstrate the effectiveness and the robustness, respectively, of the two proposed methods. In addition, an information entropy analysis of the feature maps in the model's final layer shows that the improved model carries richer information, which benefits the classifier's discriminative capability.
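The abstract does not specify the exact form of the token-based adjustment or of the convolutional component; the PyTorch sketch below shows one plausible realization under stated assumptions, in which a learnable per-feature-dimension gate rescales each token's attention-sub-layer output before the residual connection, and a depthwise 3x3 convolution branch supplies a local receptive field. The class name `GatedConvViTBlock`, the gating mechanism, and the depthwise-conv design are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class GatedConvViTBlock(nn.Module):
    """Illustrative ViT encoder block with (a) a learnable per-feature-
    dimension gate on the attention output and (b) an optional depthwise
    convolution branch for local context. Both mechanisms are sketched
    from the abstract and are assumptions, not the paper's method."""

    def __init__(self, dim: int, num_heads: int, grid: int, use_conv: bool = True):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-dimension gate: rescales each feature channel of the
        # attention output, one way to counteract token homogenization.
        self.gate = nn.Parameter(torch.ones(dim))
        self.use_conv = use_conv
        self.grid = grid  # tokens are assumed to form a grid x grid spatial map
        if use_conv:
            # Depthwise 3x3 conv: a plug-and-play local receptive field.
            self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), with num_tokens == grid * grid
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.gate * attn_out  # feature-dimension adjustment
        if self.use_conv:
            b, n, c = x.shape
            spatial = x.transpose(1, 2).reshape(b, c, self.grid, self.grid)
            x = x + self.conv(spatial).flatten(2).transpose(1, 2)
        return x + self.mlp(self.norm2(x))


# Usage with hypothetical dimensions (ViT-Tiny-like: 192 channels, 14x14 tokens):
block = GatedConvViTBlock(dim=192, num_heads=3, grid=14)
tokens = torch.randn(2, 14 * 14, 192)
out = block(tokens)  # (2, 196, 192)
```

Because the gate and the convolution branch each add only a residual term, the block can replace a standard Transformer encoder layer without changing token shape, which is consistent with the "plug-and-play" framing in the abstract.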
