Abstract

While humans can easily recognize characteristics of a class from one or a few examples, learning from few examples is a challenging task in machine learning. Deep learning generally requires hundreds of thousands of samples to achieve generalization, and despite recent advances, it is not easy to generalize to new classes with little supervision. Few-shot learning (FSL) aims to learn how to recognize new classes with only a few examples per class. However, learning with few examples makes it difficult for the model to generalize and leaves it susceptible to overfitting. To overcome this difficulty, data augmentation techniques have been applied to FSL. Existing data augmentation approaches, however, rely heavily on human experts with prior knowledge to manually find effective augmentation strategies. In this work, we propose an efficient data augmentation network, called EDANet, that automatically selects the most effective augmentation approaches to achieve optimal FSL performance without human intervention. Our method avoids the reliance on domain knowledge and the expensive labor of designing data augmentation rules by hand. We evaluate the proposed approach on widely used FSL benchmarks (Omniglot and mini-ImageNet). Experimental results using three popular FSL networks indicate that the proposed approach improves performance over existing baselines through an optimal combination of candidate augmentation strategies.
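The core idea of selecting among candidate augmentation strategies can be illustrated with a minimal, hypothetical sketch in PyTorch. The candidate pool, the learnable selection logits `aug_logits`, and the helper `augment_support_set` below are illustrative assumptions only and do not reproduce EDANet's actual architecture or training procedure.

```python
import torch
import torch.nn.functional as F

# Hypothetical pool of candidate augmentation strategies (illustrative only;
# the paper's actual candidate set and selection network are not shown here).
CANDIDATES = [
    lambda x: x,                                   # identity
    lambda x: torch.flip(x, dims=[-1]),            # horizontal flip
    lambda x: torch.rot90(x, k=1, dims=[-2, -1]),  # 90-degree rotation
    lambda x: x + 0.05 * torch.randn_like(x),      # additive Gaussian noise
]

# Learnable logits: one score per candidate strategy. In an automatic
# augmentation scheme these would be updated from episode/validation feedback
# rather than hand-tuned by a human expert.
aug_logits = torch.zeros(len(CANDIDATES), requires_grad=True)

def augment_support_set(x: torch.Tensor) -> torch.Tensor:
    """Sample one candidate strategy from the learned distribution and
    apply it to a batch of support images of shape [N, C, H, W]."""
    probs = F.softmax(aug_logits, dim=0)
    idx = torch.multinomial(probs, num_samples=1).item()
    return CANDIDATES[idx](x)

# Example: augment a dummy 5-way 1-shot support set of 28x28 grayscale images.
support = torch.randn(5, 1, 28, 28)
augmented = augment_support_set(support)
print(augmented.shape)  # torch.Size([5, 1, 28, 28])
```

The point of the sketch is only that the choice of strategy becomes a learnable quantity driven by feedback, rather than a rule fixed in advance by a human expert.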

Highlights

  • Over the past decade, we have witnessed remarkable performance gains with deep learning in many tasks, such as classification [2], [3], detection [4], [5], and segmentation [6], [7]

  • Few-shot learning (FSL) approaches can be divided into two major streams: 1) how to make up for insufficient data by adding supporting data [22], [23], and 2) how to represent the large parameter space covered by few training samples [25], [26]

  • Ablation study (effectiveness of automatic augmentation): we demonstrate the effectiveness of the proposed EDANet, which explores an optimal augmentation strategy from a pool of candidate strategies in metric-based FSL, and compare it with manual augmentation approaches



Introduction

Over the past decade, we have witnessed remarkable performance gains with deep learning in many tasks, such as classification [2], [3], detection [4], [5], and segmentation [6], [7]. Extensive computation with artificial neural networks, advances in computing power, and large numbers of labeled examples have allowed deep learning models to achieve these improvements. Many researchers have attempted to train deep learning algorithms with few examples, and the field of few-shot learning (FSL) [17], [18] has recently emerged. Few-shot learning recognizes patterns in data from only a few examples. FSL approaches can be further categorized into four families: metric-based [19]–[21], data augmentation-based [22]–[24], optimization-based [25], [26], and semantic-based approaches [27], [28].
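As an illustration of the metric-based family mentioned above (a prototypical-network-style episode, not the specific models of [19]–[21]), the following minimal sketch scores an N-way K-shot episode by comparing query embeddings to class prototypes; the feature encoder is omitted and all names are illustrative.

```python
import torch

def prototypical_episode(support, support_labels, query, n_way):
    """Classify query embeddings by distance to class prototypes.

    support: [n_way * k_shot, D] embeddings of the labeled support set
    support_labels: [n_way * k_shot] integer class indices in [0, n_way)
    query: [Q, D] embeddings of the query set
    Returns log-probabilities over the n_way classes for each query.
    """
    # Class prototype = mean embedding of that class's support examples.
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )  # [n_way, D]

    # Negative squared Euclidean distance as the similarity score.
    dists = torch.cdist(query, prototypes) ** 2  # [Q, n_way]
    return torch.log_softmax(-dists, dim=1)

# Toy 5-way 1-shot episode with random 64-d embeddings (encoder omitted).
emb = torch.randn(5, 64)
labels = torch.arange(5)
queries = torch.randn(10, 64)
log_probs = prototypical_episode(emb, labels, queries, n_way=5)
print(log_probs.argmax(dim=1))  # predicted class per query
```

Because only a few support embeddings define each prototype, augmenting the support set directly changes the prototypes, which is why the choice of augmentation strategy matters so much in this setting.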

