Abstract

In recent years, most approaches to few-shot learning have shared a default premise: a large, homogeneously annotated dataset is available to pre-train the few-shot model. However, since few-shot learning is typically applied in domains where annotated samples are scarce, collecting another large annotated dataset in the same domain is difficult. We therefore propose Splicing Learning, which completes the few-shot learning task without the help of a large homogeneously annotated dataset. Splicing Learning increases the sample size of the few-shot set by splicing multiple original images into a single spliced image. Unlike data augmentation techniques, the spliced image contains no false information. In our experiments, the configuration "All-splice + WSG" achieves the best test accuracy of 90.81%, 9.19 percentage points above the baseline. The performance improvement is attributable mostly to Splicing Learning and has little to do with the complexity of the CNN framework. Compared with metric-learning, meta-learning, and GAN models, both Splicing Learning and data augmentation achieve stronger performance, and combining the two further improves test accuracy to 96.33%. The full implementation is available at https://github.com/xiangxiangzhuyi/Splicing-learning.
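The core operation described above — combining several original images into one spliced image — can be sketched as a simple tiling step. The following is a minimal illustration, not the authors' implementation (see the linked repository for that); the function name, grid layout, and image sizes here are assumptions for demonstration only.

```python
import numpy as np

def splice_images(images, grid=(2, 2)):
    """Tile equally sized 2-D images into one spliced image.

    Illustrative sketch of the splicing idea: no pixel values are
    altered or synthesized, so the spliced image carries no false
    information, only rearranged originals.
    """
    rows, cols = grid
    assert len(images) == rows * cols, "grid shape must match image count"
    # Concatenate each row of images horizontally, then stack rows vertically.
    row_strips = [
        np.concatenate(images[r * cols:(r + 1) * cols], axis=1)
        for r in range(rows)
    ]
    return np.concatenate(row_strips, axis=0)

# Example: four 28x28 images become one 56x56 spliced image.
imgs = [np.random.rand(28, 28) for _ in range(4)]
spliced = splice_images(imgs)
print(spliced.shape)  # (56, 56)
```

Because each spliced image is an arrangement of unmodified originals, many distinct spliced samples can be generated from a small pool of images, which is how the sample size of the few-shot set is enlarged.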
