Few-shot learning has garnered significant attention in deep learning as an effective approach to the problem of data scarcity. Conventionally, training datasets in few-shot learning are assumed to be clean. In practice, however, sensor malfunctions, data transmission anomalies, and inaccuracies in manual annotation mean that the accuracy of sample annotations cannot be guaranteed, making few-shot learning with noisy labels an urgent problem. To address this problem, we propose an Attention-based Pseudo-label Propagation Network (APPN). We make the following technical contributions: (1) we propose an attention-based feature extraction method that effectively captures the distinctions between clean and noisy samples; (2) we propose an improved graph-based pseudo-label propagation method that uses pseudo-labels carrying latent class information as initial labels, thereby improving the accuracy of label propagation; (3) we describe a comprehensive multi-step noise detection method that accurately separates noisy samples from clean samples and effectively distinguishes out-of-domain (OOD) noise from in-domain (ID) noise; (4) finally, extensive experiments on the FC100, CIFAR-FS, miniImageNet, and tieredImageNet datasets verify the improved performance. The results show that our method is robust and superior for few-shot learning with noisy labels. Code and models are available at: https://github.com/Typistchen/APPN.
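To make contribution (2) concrete, the following is a minimal sketch of generic graph-based label propagation of the kind the abstract builds on (a closed-form propagation over a normalized similarity graph, with pseudo-labels as the initial label matrix). The function name, the RBF affinity, and the hyperparameters `alpha` and `sigma` are illustrative assumptions, not the APPN implementation.

```python
import numpy as np

def propagate_labels(features, init_labels, num_classes, alpha=0.99, sigma=1.0):
    """Closed-form graph label propagation (illustrative sketch, not APPN itself).

    features: (n, d) array of sample embeddings.
    init_labels: (n,) ints; class index for (pseudo-)labeled samples, -1 for unlabeled.
    Returns an (n, num_classes) matrix of soft label scores.
    """
    n = features.shape[0]
    # Pairwise squared distances -> RBF affinity graph
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # One-hot initial (pseudo-)label matrix; unlabeled rows stay all-zero
    Y = np.zeros((n, num_classes))
    labeled = init_labels >= 0
    Y[np.arange(n)[labeled], init_labels[labeled]] = 1.0
    # Closed-form fixed point of F = alpha * S @ F + (1 - alpha) * Y
    F = np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * Y)
    return F
```

Because the normalized graph `S` has spectral radius at most 1 and `alpha < 1`, the linear system is well posed; labels placed on a few (pseudo-)labeled nodes diffuse to their unlabeled neighbors along high-affinity edges.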