Abstract

The distribution shift between the source and target domains is the main challenge in cross-domain few-shot learning (CD-FSL) tasks. However, the target domain is unknown while training on the source domain, so there is no directed guidance for target tasks. We observe that, owing to similar backgrounds across target domains, self-labelled samples can serve as prior tasks for transferring knowledge to target tasks. Accordingly, we propose a task-expansion-decomposition framework for CD-FSL called the self-taught (ST) approach, which alleviates the lack of target guidance by constructing task-oriented metric spaces. Specifically, weakly supervised object localization (WSOL) and self-supervised techniques are employed to enrich task-oriented samples by exchanging and rotating discriminative regions, yielding a more abundant task set. These expanded tasks are then decomposed into few-shot recognition and rotation classification subtasks. Transferring source knowledge to the target tasks while focusing on discriminative regions benefits both subtasks. We conducted extensive experiments under a cross-domain setting with eight target domains: CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC, and ChestX. The experimental results demonstrate that the proposed ST approach is applicable to various metric-based models and provides promising improvements on CD-FSL.
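The rotation-based task expansion described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function name, the NumPy-based pipeline, and the use of four 90-degree rotations are assumptions; each rotation index doubles as a free self-supervised label for the rotation classification subtask.

```python
import numpy as np

def expand_task(images, labels):
    """Expand a few-shot task with rotated copies (illustrative sketch).

    Each image is rotated by 0/90/180/270 degrees; the rotation index k
    serves as a free auxiliary label for rotation classification, while
    the original class label is kept for few-shot recognition.
    """
    expanded_images, class_labels, rot_labels = [], [], []
    for img, y in zip(images, labels):
        for k in range(4):  # rotate by k * 90 degrees
            expanded_images.append(np.rot90(img, k))
            class_labels.append(y)
            rot_labels.append(k)
    return np.stack(expanded_images), np.array(class_labels), np.array(rot_labels)
```

A support set of N images thus becomes 4N samples, each carrying both a class label and a rotation label, so the two decomposed subtasks can be trained jointly.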
