Abstract

Few-shot classification (FSC) has become a significant area of research in recent years. A prevalent method in this field is the separation of feature representations from classifiers, involving two key steps: pre-training a feature extractor on a large base dataset and then applying it to novel, category-diverse data with limited samples. The ability of the pre-trained extractor to adapt to new data is vital for effective feature discrimination and classification performance. Traditional evaluation techniques, such as Class Activation Mapping (CAM), encounter challenges because no back-propagation is performed when extracting features from novel classes. To address this, we present Feedback-Irrelevant Mapping (FIM), an approach based on dictionary learning theory. FIM utilizes channel weights in feature maps for both visualization and performance evaluation of the extractor on new data. We also introduce two quantitative metrics, Visual Localization Accuracy (VLA) and Visual Intersection Accuracy (VIA), to enable consistent assessments. Our experiments demonstrate that FIM not only preserves the model's performance but can also improve it by approximately 0.5%. When applied to various decoupled FSC frameworks, FIM effectively evaluates their capabilities. An important finding is that larger pre-trained models do not always yield superior FSC results. For instance, models with a ResNet-12 backbone have been shown to surpass others by at least 1.7%.
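To make the core idea concrete, the sketch below shows how a forward-pass-only activation map can be built from per-channel weights, in the spirit of the channel-weighting scheme the abstract describes. This is an illustrative assumption, not the paper's exact FIM algorithm: the function name, shapes, and weighting scheme are hypothetical.

```python
import numpy as np

def weighted_activation_map(feature_map, channel_weights):
    """Combine a (C, H, W) feature map into a single (H, W) saliency map
    using per-channel weights -- no back-propagation required."""
    # Weight each channel and sum over the channel axis.
    cam = np.tensordot(channel_weights, feature_map, axes=([0], [0]))
    # Keep only positive evidence and normalize to [0, 1] for display.
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Hypothetical example: 4 channels on an 8x8 feature map.
features = np.random.rand(4, 8, 8)
weights = np.array([0.1, 0.4, 0.3, 0.2])
saliency = weighted_activation_map(features, weights)
```

Because the map is computed from a single forward pass, it can be applied to novel classes where gradient-based methods like Grad-CAM are unavailable.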
