Abstract
While humans can excel at image classification tasks by comparing only a few images, existing metric-based few-shot classification methods remain poorly adapted to novel tasks. Performance declines rapidly when new patterns are encountered, because feature embeddings cannot effectively encode discriminative properties. Moreover, existing matching methods make inadequate use of support set samples: they only compare query samples to category prototypes and do not exploit contrastive relationships across categories to obtain discriminative features. In this work, we propose a method in which query samples select their most category-representative features for matching, making feature embeddings adaptable and category-related. We introduce a category alignment mechanism (CAM) to align query image features with different categories. CAM ensures that the features chosen for matching are distinct and strongly correlated with intra- and inter-category contrastive relationships, so that the extracted features are highly related to their respective categories. CAM is parameter-free, requires no extra training to adapt to new tasks, and adjusts the features used for matching when task categories change. We also implement a cross-validation-based feature selection technique for support samples, generating more discriminative category prototypes. We implement both inductive and transductive versions of inference and conduct extensive experiments on six datasets to demonstrate the effectiveness of our algorithm. The results indicate that our method consistently yields performance improvements on benchmark tasks and surpasses the current state-of-the-art methods.
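The abstract's core idea — a parameter-free alignment step that selects category-specific feature dimensions before prototype matching — can be sketched roughly as follows. This is an illustrative reconstruction from the abstract alone, not the authors' actual CAM: the selection criterion here (per-category deviation from the mean prototype, as a proxy for inter-category contrast) and all function names are assumptions.

```python
import numpy as np

def category_prototypes(support, labels, n_classes):
    # Mean embedding per class: the standard prototypical-network baseline.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def align_and_match(query, prototypes, k):
    # Hypothetical parameter-free alignment: for each category, keep the k
    # feature dimensions where that prototype deviates most from the mean of
    # all prototypes (a stand-in for inter-category contrast), then score the
    # query only on those category-specific dimensions. No learned weights are
    # involved, so the selection re-adapts whenever the task's categories change.
    mean_proto = prototypes.mean(axis=0)
    scores = []
    for proto in prototypes:
        contrast = np.abs(proto - mean_proto)   # how distinctive each dim is
        idx = np.argsort(contrast)[-k:]         # k most category-specific dims
        q, p = query[idx], proto[idx]
        scores.append(q @ p / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-8))
    return int(np.argmax(scores))               # predicted category index
```

Because the selected dimensions depend only on the current episode's prototypes, this kind of alignment needs no extra training when the task's categories change, matching the adaptivity the abstract claims for CAM.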
Published in: IEEE Transactions on Neural Networks and Learning Systems