Abstract

In recent years, relying on training with thousands of labeled samples, deep learning has achieved remarkable success in the field of computer vision. However, in practice, annotating samples is a time-consuming and laborious task, which means that obtaining thousands of labeled samples is often impractical. Humans can learn new concepts from only a handful of examples, which makes it easier to adapt to new environments. Inspired by this human ability, few-shot learning aims to train a classifier that can learn to recognize new classes when given only a few labeled samples of those classes. In this paper, we propose a new framework called Adaptive Learning Knowledge Networks (ALKN) for few-shot learning. ALKN learns the knowledge of different classes from the features of labeled samples and stores the learned knowledge in a memory that is dynamically updated during the learning process. We define difficult knowledge and easy knowledge for each class so that, during inference, our model can leverage the memory of learned knowledge more efficiently. Considering both standard few-shot learning and semi-supervised few-shot learning, we design different update strategies for the memory of learned knowledge. Extensive experiments are conducted on three datasets: Omniglot, Mini-Imagenet, and CUB. Compared with most existing approaches, our ALKN achieves superior results on these benchmarks.

Highlights

  • In recent years, deep learning [10]–[12], [24], [31] has achieved great success in the field of computer vision

  • Some impressive results have been achieved for deep learning models on image classification tasks such as object recognition [25] and scene classification [26]

  • In this work, we propose the Adaptive Learning Knowledge Networks (ALKN), which can learn the knowledge of a new concept from only a handful of examples

Introduction

Deep learning [10]–[12], [24], [31] has achieved great success in the field of computer vision. To achieve this performance, a deep learning model needs to be trained on a large-scale dataset.

A. Few-Shot Learning

In the task of few-shot learning, the dataset is divided into three parts: a training set, a test set, and a support set. The standard N-way, K-shot setting trains a classifier with a support set that contains K labeled samples for each of N unique classes. Because each class in the support set has only a few, or even one, labeled example, the classifier usually cannot achieve satisfactory results on the test set alone. We therefore train the classifier on the training set by simulating the learning process on the support set, so that the model performs better few-shot learning on the support set and predicts the classes of test examples more successfully.
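To make the N-way, K-shot episodic setting concrete, the sampling procedure can be sketched as follows. This is a minimal illustration assuming a hypothetical dict-based dataset (class label mapped to a list of examples), not the authors' implementation; `sample_episode` and `q_queries` are names introduced here for illustration.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way, K-shot episode.

    `dataset` maps class label -> list of examples (hypothetical layout).
    Returns a support set of N*K labeled examples and a query set
    used to evaluate the episode.
    """
    # Pick N classes for this episode
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        # Draw K support examples plus query examples, without overlap
        examples = random.sample(dataset[label], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy dataset: 10 classes with 20 examples each
data = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(10)}
support, query = sample_episode(data, n_way=5, k_shot=1, q_queries=15)
print(len(support), len(query))  # 5 support examples, 75 queries
```

During meta-training, many such episodes are drawn from the training set so that the model repeatedly practices classifying query examples given only the small support set, mirroring the conditions it will face on the true support set at test time.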
