Abstract

Image classification in real-world applications is challenging due to the scarcity of labeled data, and many few-shot learning techniques have been developed to tackle this problem. However, existing few-shot learning techniques fail when new classes without any labeled data (out-of-distribution classes) are added to the model: they classify samples from the new classes as one of the existing classes and cannot detect them as new classes. Moreover, if all of the data is unlabeled, existing few-shot learning techniques do not work at all, because they rely on supervised learning. This paper proposes a novel few-shot learning network (KTNet) that learns from unlabeled data and assigns pseudo labels to it. The pseudo-labeled data are either added to the existing labeled data (in-distribution) to increase the number of shots or added as new classes if the data are out-of-distribution. The proposed KTNet works in all cases: 1) a small amount of labeled data exists for all classes, 2) labeled data exist only for a subset of the classes, and 3) the data for all classes are unlabeled. KTNet is evaluated on two benchmark datasets (miniImageNet and Fewshot-CIFAR). The results show that the proposed network outperforms state-of-the-art models on both datasets in terms of classification accuracy. KTNet is also better than existing techniques at detecting and clustering the out-of-distribution classes.
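To make the in-distribution versus out-of-distribution split described above concrete, the following is a minimal sketch, not the authors' implementation: unlabeled embeddings are matched to the nearest class prototype built from the labeled support set, close matches are pseudo-labeled as extra shots for that class, and distant samples are flagged as candidate new classes (which the paper then clusters). The embedding dimensionality, the distance threshold ood_threshold, and the nearest-prototype matching rule are illustrative assumptions.

import numpy as np


def pseudo_label(unlabeled_emb, support_emb, support_labels, ood_threshold=1.0):
    """Assign pseudo labels to unlabeled embeddings.

    unlabeled_emb : (N, D) embeddings of unlabeled samples.
    support_emb   : (M, D) embeddings of the labeled (support) samples.
    support_labels: (M,)   integer class labels of the support samples.
    Returns (pseudo_labels, is_ood): nearest-prototype labels and a boolean
    mask marking samples treated as out-of-distribution (candidate new classes).
    """
    classes = np.unique(support_labels)
    # One prototype per known class: the mean of its support embeddings.
    prototypes = np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every unlabeled sample to every prototype.
    dists = np.linalg.norm(unlabeled_emb[:, None, :] - prototypes[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    pseudo_labels = classes[nearest]
    # Samples far from every known prototype are flagged as out-of-distribution;
    # in-distribution samples become extra shots for their pseudo class.
    is_ood = dists.min(axis=1) > ood_threshold
    return pseudo_labels, is_ood


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support = rng.normal(size=(10, 16))      # 10 labeled shots, 16-dim embeddings
    labels = np.repeat(np.arange(5), 2)      # 5 known classes, 2 shots each
    unlabeled = rng.normal(size=(20, 16))
    pl, ood = pseudo_label(unlabeled, support, labels, ood_threshold=5.0)
    print("pseudo labels:", pl)
    print("flagged as new-class candidates:", int(ood.sum()))

In this sketch, the samples flagged by is_ood would subsequently be grouped (for example with a clustering step) to form the new classes, mirroring the out-of-distribution handling the abstract attributes to KTNet.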
