Abstract
Few-shot learning and meta-learning have been widely studied for their ability to reduce the burden of data annotation. However, real-world applications often involve training and target tasks drawn from different domains, which degrades the generalization of existing methods. To address this issue, we propose RDProtoFusion, a novel method that leverages refined, discriminative class prototypes to augment tasks and bridge the domain shift. First, we refine the naive prototypes with query samples to reduce the estimation bias caused by the limited support-set size. Second, we perform multi-task fusion through class-representative prototypes, yielding flexible, low-complexity task augmentation that effectively alleviates overfitting. Third, we propose a prototype contrastive loss that enhances the discrimination between class prototypes, producing robust and accurate prototypes. Together, prototype refinement, fusion, and contrast substantially improve the model's generalization. Extensive experiments on multiple cross-domain few-shot learning benchmarks demonstrate that RDProtoFusion achieves state-of-the-art performance. Overall, the proposed method offers a promising avenue for handling domain shift in real-world few-shot learning applications.
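The two prototype operations named in the abstract (query-based refinement and a contrastive penalty between prototypes) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the soft-assignment weighting, the blending rule, the temperature values, and all function names here are illustrative assumptions, shown only to make the general idea concrete.

```python
import numpy as np

def naive_prototypes(support, labels, n_classes):
    """Per-class mean of support embeddings (standard prototypical networks)."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def refine_prototypes(protos, query, tau=1.0):
    """Blend naive prototypes with softly-assigned query embeddings.

    Queries are weighted by a softmax over negative distances to each
    prototype, then folded into the prototype estimate — one plausible way
    to reduce the bias of a prototype computed from a tiny support set.
    """
    d = -np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)  # (Q, C)
    w = np.exp(d / tau)
    w /= w.sum(axis=1, keepdims=True)                # soft assignment of queries
    return (protos + w.T @ query) / (1.0 + w.sum(axis=0)[:, None])

def prototype_contrast(protos, tau=0.5):
    """InfoNCE-style penalty that grows as class prototypes become similar."""
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = (p @ p.T) / tau
    np.fill_diagonal(sim, -np.inf)                   # exclude self-similarity
    return np.log(np.exp(sim).sum(axis=1)).mean()

# Toy 2-way, 1-shot episode in a 2-D embedding space.
support = np.array([[0.0, 0.0], [4.0, 4.0]])
labels = np.array([0, 1])
query = np.array([[0.5, 0.5], [3.5, 3.5]])

protos = naive_prototypes(support, labels, 2)
refined = refine_prototypes(protos, query)
loss = prototype_contrast(refined)
```

In this toy episode each refined prototype is pulled from its single support point toward the nearby query, which is the qualitative behavior the abstract attributes to query-based refinement; minimizing the contrastive term would then push the refined prototypes apart.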