Abstract

This study aims to develop a robust metalearning system for rapid classification across a large number of tasks. Model-agnostic metalearning (MAML) combined with the CACTUs method (clustering to automatically construct tasks for unsupervised metalearning) is improved into EW-CACTUs-MAML by integrating the entropy weight (EW) method. Few-shot mechanisms are introduced into the deep network for efficient learning of a large number of tasks. The implementation process is theoretically interpreted as “gene intelligence.” Validation of EW-CACTUs-MAML on a typical dataset (Omniglot) indicates an accuracy of 97.42%, outperforming CACTUs-MAML (validation accuracy = 97.22%). At the end of this paper, the applicability of our approach to improving another metalearning system (EW-CACTUs-ProtoNets) is also preliminarily discussed based on a cross-validation on another typical dataset (Miniimagenet).
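For context, the entropy weight (EW) method referenced above is a standard information-entropy weighting scheme. The sketch below shows only that generic calculation; the input matrix, the function name, and the point at which EW-CACTUs-MAML injects these weights are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the standard entropy weight (EW) calculation.
# The matrix X and its interpretation are assumptions for illustration.
import numpy as np

def entropy_weights(X: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """X: (m, n) non-negative matrix, rows = samples, columns = criteria.
    Returns one weight per criterion; the weights sum to 1."""
    m, _ = X.shape
    # Column-wise proportion contributed by each sample.
    P = X / (X.sum(axis=0, keepdims=True) + eps)
    # Information entropy of each criterion, scaled to [0, 1].
    E = -(P * np.log(P + eps)).sum(axis=0) / np.log(m)
    # Lower entropy -> more discriminative criterion -> larger weight.
    d = 1.0 - E
    return d / d.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((100, 4))       # hypothetical 100 samples, 4 criteria
    print(entropy_weights(X))      # prints 4 weights summing to 1
```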

Highlights

  • A learning algorithm f is defined as a procedure for processing the data D to make predictions ŷ∗ from every input x∗ [1]. That is, f is a particular function that maps x∗ to ŷ∗

  • Differing from traditional machine learning, metalearning is interpreted as “learn to learn,” which can achieve (1), where the function from x∗i to y∗i can be presented as a universal metalearner [3]. The main research directions of metalearning include metalearning based on the metric space, metalearning based on parameter optimization, and model-based metalearning [1,2,3,4,5]. The datasets for metalearning are very large, and the automatic classification of learning tasks is always a great challenge [6] (a rough sketch of clustering-based task construction is given after this list)

  • The Omniglot dataset and the Miniimagenet dataset will be employed. The Miniimagenet dataset has been widely used in the fields of metalearning and few-shot learning [31,32,33,34,35,36,37]. The well-known original reference for the dataset is [37], where matching networks for one-shot learning were presented to tackle a key challenge in machine learning: learning from a few examples
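As referenced above, CACTUs constructs tasks automatically by clustering unsupervised embeddings and treating the cluster ids as pseudo-labels from which N-way K-shot tasks are sampled. The sketch below follows that general recipe; the embedding input, function name, and sampling parameters are simplified assumptions rather than the authors' exact procedure.

```python
# Rough sketch of CACTUs-style task construction: k-means over unsupervised
# embeddings yields pseudo-labels, from which N-way K-shot tasks are sampled.
# Parameters and the embedding source are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def build_tasks(embeddings, n_way=5, k_shot=1, k_query=15,
                n_tasks=10, n_clusters=100, seed=0):
    rng = np.random.default_rng(seed)
    pseudo = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(embeddings)
    # Only clusters large enough to supply support + query examples are usable.
    usable = [c for c in range(n_clusters)
              if np.sum(pseudo == c) >= k_shot + k_query]
    tasks = []
    for _ in range(n_tasks):
        classes = rng.choice(usable, size=n_way, replace=False)
        support, query = [], []
        for label, c in enumerate(classes):
            idx = rng.permutation(np.flatnonzero(pseudo == c))
            support += [(int(i), label) for i in idx[:k_shot]]
            query += [(int(i), label) for i in idx[k_shot:k_shot + k_query]]
        # Each task is (support, query), holding (embedding index, pseudo-label) pairs.
        tasks.append((support, query))
    return tasks
```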

Summary

Introduction

A learning algorithm f is defined as a procedure for processing the data D to make predictions ŷ∗ from every input x∗ [1]. That is, f is a particular function that maps x∗ to ŷ∗. Differing from traditional machine learning, metalearning is interpreted as “learn to learn,” which can achieve (1), where the function from x∗i to y∗i can be presented as a universal metalearner [3]. The datasets for metalearning are very large, and the automatic classification of learning tasks is always a great challenge [6]. Due to this challenge, few engineering applications of metalearning are reported [7, 8].
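The introduction treats a learner f as a function fitted per task and the metalearner as something learned across tasks. A minimal sketch of the generic MAML update that realizes this idea is given below, assuming PyTorch ≥ 2.0, a model, and a list of (support, query) batches per task; it illustrates standard second-order MAML, not the paper's EW-CACTUs-MAML code.

```python
# Minimal sketch of a generic MAML meta-update (not the paper's exact code).
import torch

def maml_step(model, tasks, loss_fn, meta_opt, inner_lr=0.01):
    """One meta-update over a batch of tasks: tasks = [((x_s, y_s), (x_q, y_q)), ...]."""
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop: one gradient step on the task's support set.
        s_loss = loss_fn(torch.func.functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(s_loss, list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer loop: evaluate the adapted parameters on the task's query set.
        meta_loss = meta_loss + loss_fn(
            torch.func.functional_call(model, adapted, (x_q,)), y_q)
    # Meta-update of the shared initialization.
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return float(meta_loss.detach())
```

In this view, the inner loop produces the per-task function f, while the outer loop learns the shared initialization that acts as the universal metalearner across tasks.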

Problem Formulation
Theoretical Analyses
Findings
Experiments and Discussion
Conclusion
