Abstract

Fine-grained visual categorization (FGVC) aims to distinguish visual objects belonging to multiple subcategories of the same coarse-grained category. The subtle inter-class differences among subcategories make FGVC particularly challenging. Existing methods focus primarily on learning salient visual patterns while ignoring the object's internal structure, which makes it difficult to capture the complete set of discriminative regions within the object and thus limits FGVC performance. To address this issue, we propose a Structure Information Mining and Object-aware Feature Enhancement (SIM-OFE) method for fine-grained visual categorization, which explores both the internal structure composition and the appearance traits of the visual object. Concretely, we first propose a simple yet effective hybrid perception attention module that locates visual objects based on global-scope and local-scope significance analyses. We then propose a structure information mining module that models the distribution and contextual relations of critical regions within the object, highlighting both the whole object and the discriminative regions needed to distinguish subtle differences. Finally, we propose an object-aware feature enhancement module that combines global-scope and local-scope discriminative features in an attentive coupling manner, yielding powerful visual representations for fine-grained recognition. Extensive experiments on three FGVC benchmark datasets demonstrate that the proposed SIM-OFE method achieves state-of-the-art performance.
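The abstract does not give implementation details, so the following is only a minimal PyTorch-style sketch of the general idea: an attention module that fuses a global-scope and a local-scope significance map, and a feature-enhancement module that attentively couples global and local descriptors. All module names, layer choices, and the stand-in for the structure information mining step are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HybridPerceptionAttention(nn.Module):
    """Hypothetical sketch: fuse a global-scope saliency map (channel mean)
    with a local-scope saliency map (small convolution) to localize the object."""

    def __init__(self, channels: int):
        super().__init__()
        self.local_conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat):                              # feat: (B, C, H, W)
        global_map = feat.mean(dim=1, keepdim=True)       # global-scope significance
        local_map = self.local_conv(feat)                 # local-scope significance
        attn = torch.sigmoid(global_map + local_map)      # fused object mask
        return feat * attn, attn


class ObjectAwareFeatureEnhancement(nn.Module):
    """Hypothetical sketch: attentively couple global and local descriptors
    with a learned gate instead of simple concatenation."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, global_feat, local_feat):           # both: (B, C)
        g = self.gate(torch.cat([global_feat, local_feat], dim=1))
        return g * global_feat + (1 - g) * local_feat


if __name__ == "__main__":
    feat = torch.randn(2, 256, 14, 14)                    # assumed backbone feature map
    hpa = HybridPerceptionAttention(256)
    ofe = ObjectAwareFeatureEnhancement(256)

    attended, mask = hpa(feat)
    global_vec = attended.flatten(2).mean(-1)             # global-scope descriptor
    # crude stand-in for structure information mining: pool the most attended locations
    local_vec = (attended * mask).flatten(2).max(-1).values
    fused = ofe(global_vec, local_vec)
    print(fused.shape)                                    # torch.Size([2, 256])
```

In this reading, the gate decides per channel how much of the global versus local evidence to keep, which is one plausible way to realize the "attentive coupling" the abstract describes.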
