Abstract

In cross-domain few-shot classification, models trained on source-domain tasks are adapted to a target domain with very few samples, which gives rise to a severe domain gap caused by the class differences between the two domains. Although methods have been proposed to minimize this gap, existing approaches have two drawbacks: 1) most models do not exploit the knowledge common to the source and target domains, and 2) they require additional labeled samples from the target domain for finetuning or domain alignment, which are hard to obtain in practice. To address these problems, we propose a class-shared and class-specific dictionaries (CSCSD) learning method. To better exploit the common knowledge, we learn a class-shared dictionary that represents the generality of the source and target domains. In addition, class-specific dictionaries represent the class-specific knowledge that the class-shared dictionary cannot capture. Furthermore, unlike most other models, CSCSD requires no additional target-domain samples for meta-training or finetuning. With these dictionaries, CSCSD obtains more discriminative collaborative representations of samples from the original representations extracted by the model. To evaluate the effectiveness of CSCSD, we use larger datasets, e.g., MiniImageNet and TieredImageNet, as source domains and fine-grained datasets, e.g., CUB, Cars, Places, and Plantae, as target domains. With CSCSD, the cross-domain few-shot accuracy exceeds that of most domain-adaptive few-shot methods that use additional training sets from the target domains.
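To make the dictionary-based mechanism concrete, the following is a minimal sketch of collaborative representation over a class-shared dictionary concatenated with a class-specific dictionary, with classification by reconstruction residual. The function names, the ridge-regularized coding objective, and the residual-based decision rule are illustrative assumptions in the spirit of collaborative representation classification, not the paper's exact formulation.

```python
import numpy as np

def code(x, D, lam=0.1):
    # Ridge-regularized coding (assumed objective):
    #   alpha = argmin_a ||x - D a||^2 + lam * ||a||^2
    # Closed form: alpha = (D^T D + lam I)^{-1} D^T x
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)

def classify(x, D_shared, class_dicts, lam=0.1):
    # Score each class by the reconstruction residual of x under the
    # class-shared dictionary concatenated with that class's dictionary.
    best_c, best_res = None, np.inf
    for c, D_c in class_dicts.items():
        D = np.hstack([D_shared, D_c])          # shared + class-specific atoms
        alpha = code(x, D, lam)                 # collaborative representation
        res = np.linalg.norm(x - D @ alpha)     # residual as class score
        if res < best_res:
            best_c, best_res = c, res
    return best_c

# Toy usage with random dictionaries (hypothetical shapes):
rng = np.random.default_rng(0)
d, k = 64, 8
D_shared = rng.standard_normal((d, k))
class_dicts = {c: rng.standard_normal((d, k)) for c in range(5)}
x = rng.standard_normal(d)
print(classify(x, D_shared, class_dicts))
```

In this sketch the shared dictionary absorbs domain-general structure common to all classes, so the class-specific atoms only need to explain the class-discriminative residue, which is what makes the resulting representations more distinguishable.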
