Abstract

Recently, several Vision Transformer (ViT) based methods have been proposed for Fine-Grained Visual Classification (FGVC). These methods significantly surpass existing CNN-based ones, demonstrating the effectiveness of ViT in FGVC tasks. However, applying ViT directly to FGVC has several limitations. First, ViT splits images into patches and computes attention between every pair of patches, which may introduce heavy noise during training and yield unsatisfactory performance on fine-grained images with complex backgrounds and small objects. Second, complementary information is important for FGVC, but a standard ViT uses only the class token from the final layer for classification, which is insufficient to extract comprehensive fine-grained information at different levels. Third, the class token fuses information from all patches in the same manner, treating each patch equally, even though the discriminative parts should be more critical. To address these issues, we propose ACC-ViT, which comprises three novel components: Attention Patch Combination (APC), Critical Regions Filter (CRF), and Complementary Tokens Integration (CTI). APC stitches informative patches from two images into a new image, mitigating noisy computation and reinforcing the differences between images. CRF emphasizes tokens corresponding to discriminative regions to generate a new class token for subtle feature learning. To extract comprehensive information, CTI integrates complementary information captured by class tokens in different ViT layers. We conduct comprehensive experiments on four widely used datasets, and the results demonstrate that ACC-ViT achieves competitive performance. The source code is available at https://github.com/Hector0426/fine-grained-image-classification-with-vit.
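For concreteness, the sketch below illustrates one plausible reading of the three components in PyTorch. All module names, tensor shapes, and fusion choices here are illustrative assumptions, not the paper's implementation; see the linked repository for the authors' code.

```python
# Hypothetical sketch of the three ACC-ViT components described in the abstract.
# Shapes, selection rules, and fusion strategies are assumptions for illustration.
import torch
import torch.nn as nn

def attention_patch_combination(img_a, img_b, attn_a, attn_b, patch=16):
    """APC (assumed form): stitch the more informative patches of two images
    into one new training image, suppressing background noise.
    img_*: (B, C, H, W) images; attn_*: (B, gh*gw) class-token attention
    over patches (assumed to be available from the ViT)."""
    B, C, H, W = img_a.shape
    gh, gw = H // patch, W // patch
    # Keep a patch from img_a where its attention beats img_b's, else img_b's.
    mask = (attn_a >= attn_b).float().view(B, 1, gh, gw)
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return mask * img_a + (1.0 - mask) * img_b

class CriticalRegionsFilter(nn.Module):
    """CRF (assumed form): re-weight patch tokens by their class-token
    attention and pool them into a new, discriminative class token."""
    def forward(self, tokens, cls_attn):
        # tokens: (B, N, D) patch tokens; cls_attn: (B, N) attention weights
        w = torch.softmax(cls_attn, dim=-1).unsqueeze(-1)  # (B, N, 1)
        return (w * tokens).sum(dim=1)                     # (B, D)

class ComplementaryTokensIntegration(nn.Module):
    """CTI (assumed form): fuse class tokens collected from several ViT
    layers so multi-level information contributes to the prediction."""
    def __init__(self, dim, num_layers, num_classes):
        super().__init__()
        self.head = nn.Linear(dim * num_layers, num_classes)

    def forward(self, cls_tokens):
        # cls_tokens: list of (B, D) class tokens from selected layers
        return self.head(torch.cat(cls_tokens, dim=-1))
```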
