Abstract

Separable object parts, such as the head and tail of a bird, are vital for fine-grained visual classification. For objects without separable parts, the classification task relies only on local and global textural image features. Although the Swin Transformer architecture was proposed to efficiently capture both local and global visual features, it still exhibits a bias towards global features. Therefore, our goal is to enhance the local feature learning capability of the Swin Transformer by adding four new modules: the Local Feature Extraction Network (L-FEN), Convolution Patch-Merging (CP), Multi-Path (MP), and Multi-View (MV). The L-FEN augments the Swin Transformer with improved local feature capture. The CP is a localized and hierarchical adaptation of Swin's Patch Merging technique. The MP method integrates features across various Swin stages to accentuate local details. Meanwhile, the MV Swin Transformer block supersedes traditional Swin blocks with ones incorporating varied receptive fields, ensuring a broader scope of local feature capture. Our enhanced architecture, named Global–Local Swin Transformer (GL-Swin), is applied to a fine-grained food classification task. On three major food datasets, ISIA Food-500, UEC Food-256, and Food-101, GL-Swin achieved accuracies of 66.75%, 85.78%, and 92.93%, respectively, consistently outperforming other leading methods.
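The abstract does not detail the CP module, but it describes it as an adaptation of the standard Swin Patch Merging step, which concatenates each 2×2 neighborhood of patch features along the channel axis and linearly projects the result, halving spatial resolution while doubling channels. A minimal NumPy sketch of that baseline operation (illustrative only; the actual GL-Swin CP module replaces this with a convolutional variant not specified here):

```python
import numpy as np

def patch_merging(x, w):
    """Swin-style patch merging: concatenate each 2x2 neighborhood
    of patch features along channels, then linearly project.
    x: (H, W, C) feature map; w: (4C, 2C) projection weights."""
    # Gather the four interleaved sub-grids of every 2x2 neighborhood.
    merged = np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1)          # shape (H/2, W/2, 4C)
    return merged @ w     # shape (H/2, W/2, 2C)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))   # toy 8x8 map with 16 channels
w = rng.standard_normal((64, 32))     # 4C -> 2C projection
y = patch_merging(x, w)
print(y.shape)  # (4, 4, 32): resolution halved, channels doubled
```

This downsampling is what gives Swin its hierarchical feature pyramid; the paper's CP module makes this step "localized and hierarchical," presumably by introducing convolution in place of the purely linear projection.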
