Diversified Task Augmentation with Redundancy Reduction for Cross-Domain Few-Shot Learning
Most existing Few-Shot Learning (FSL) works are based on the meta-learning framework, which trains models on a collection of tasks to obtain "meta-knowledge" that enables quick adaptation to new tasks with limited data. The good performance of such works relies on the strict assumption that training and testing tasks come from the same domain. In practice, however, domain shift is common. Therefore, Cross-Domain Few-Shot Learning (CD-FSL) has gradually become a research hotspot. To overcome the challenge of domain discrepancy, we propose a Diversified Task Augmentation with Redundancy Reduction (DTA-RR) approach. The DTA module expands the distribution of source-domain tasks, bridging the domain gap by adapting to the biases of multiple augmented versions of the same task and then extracting domain-invariant information. In addition, we propose the RR module, which reduces redundant knowledge and makes the obtained domain-invariant information more effective. We conduct extensive experiments on the BSCD-FSL benchmark. The results demonstrate that our model is effective and outperforms existing methods.