Abstract

Humans are capable of learning new concepts from small numbers of examples. In contrast, supervised deep learning models usually fail to extract reliable predictive rules from limited data when classifying new examples. This challenging scenario is commonly known as few-shot learning, which has garnered increased attention in recent years due to its significance for many real-world problems. Recently, methods that combine meta-learning paradigms with graph-based structures modeling the relationships between examples have shown promising results on a variety of few-shot classification tasks. However, existing work on few-shot learning focuses only on the feature embeddings produced by the last layer of the neural network. The novel contribution of this paper is the utilization of lower-level information to improve meta-learner performance in few-shot learning. In particular, we propose the Looking-Back method, which uses lower-level information to construct additional graphs for label propagation in limited data settings. Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can exploit the lower-level information in the network to improve state-of-the-art classification performance.
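The core mechanism can be sketched in a few lines: build a similarity graph over the support and query embeddings of an episode, propagate the support labels over that graph, and combine scores from graphs built on different layers. The sketch below is a minimal illustration under stated assumptions, not the paper's exact method: it uses a Gaussian similarity graph, the closed-form label propagation of Zhou et al. (2004), and a uniform weighting of the last-layer and lower-layer graphs; the function names and toy 2-way 1-shot episode are hypothetical.

```python
import numpy as np

def propagate_labels(features, labels_onehot, alpha=0.99, sigma=1.0):
    """Closed-form label propagation on a Gaussian-similarity graph
    built from one set of embeddings (illustrative sketch)."""
    # Pairwise squared Euclidean distances between all examples.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                       # no self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = d_inv_sqrt @ W @ d_inv_sqrt                # symmetric normalization
    # Solve (I - alpha * S) F = Y for the propagated label scores F.
    return np.linalg.solve(np.eye(len(W)) - alpha * S, labels_onehot)

# Hypothetical toy episode: 2-way 1-shot with 2 query points.
# Rows 0-1 are labeled support examples; rows 2-3 are unlabeled queries.
last_layer = np.array([[0.0, 0.0], [5.0, 5.0], [0.5, 0.2], [4.8, 5.1]])
lower_layer = np.array([[0.1, 0.0], [5.1, 4.9], [0.3, 0.4], [5.0, 4.7]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])

# Combine scores from the last-layer and lower-layer graphs
# (uniform weights here, purely for illustration).
scores = 0.5 * propagate_labels(last_layer, Y) \
       + 0.5 * propagate_labels(lower_layer, Y)
pred = scores[2:].argmax(1)
print(pred)  # predicted classes for the two query examples
```

In this toy setup each query point lies close to one support point in both embedding spaces, so both graphs propagate the matching label and the combined scores agree; the interesting case in practice is when the lower-layer graph supplies complementary structure that the last-layer graph misses.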

Highlights

  • Deep learning (DL) is already ubiquitous in our daily lives, including image-based object detection [1], face recognition [2], medical imaging, and healthcare [3]

  • While DL is outperforming traditional machine learning methods in these aforementioned application areas [4], a major downside of DL is that it requires large amounts of data to achieve good performance [5]

  • We propose a novel few-shot learning (FSL) meta-learning method, Looking-Back, that utilizes lower-level information from hidden layers, unlike existing FSL methods that use only the feature embedding of the last layer during meta-training


Introduction

Deep learning (DL) is already ubiquitous in our daily lives, including image-based object detection [1], face recognition [2], medical imaging, and healthcare [3]. While DL outperforms traditional machine learning methods in these application areas [4], a major downside of DL is that it requires large amounts of data to achieve good performance [5]. This limitation motivates few-shot learning (FSL), where datasets comprise large numbers of categories (i.e., class labels) but only a few examples per class. The main objective of FSL is to design methods that achieve good generalization performance from this limited number of examples per category. The overarching concept of FSL is very general and applies to different data modalities and tasks, such as image classification [6], object detection [7], and text classification [8]. Most FSL research focuses on image classification, so we use the terms examples and images (in a supervised learning context) interchangeably.
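The FSL setting described above is usually evaluated episodically: each episode samples N classes, K labeled support examples per class, and a set of query examples to classify. The sketch below illustrates this N-way K-shot sampling under assumed conventions; the dictionary data layout, function name, and toy dataset are hypothetical, not part of the paper.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15, rng=random):
    """Sample one N-way K-shot episode from a dict mapping
    class label -> list of examples (hypothetical data layout)."""
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for cls in classes:
        examples = rng.sample(dataset[cls], k_shot + n_query)
        support += [(x, cls) for x in examples[:k_shot]]   # labeled
        query += [(x, cls) for x in examples[k_shot:]]     # to classify
    return support, query

# Toy dataset: 10 classes with 20 "images" (here just ints) each.
toy = {c: list(range(c * 100, c * 100 + 20)) for c in range(10)}
support, query = sample_episode(toy, n_way=5, k_shot=1, n_query=15)
print(len(support), len(query))  # 5 support, 75 query examples
```

Meta-training repeats this sampling many times over the training classes, so the learner is optimized to generalize within new episodes rather than to memorize any fixed label set.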

