Abstract

The aim of meta-learning is to train a machine to learn quickly and accurately. Improving the performance of meta-learning models is important both for solving small-sample problems and for progress toward general artificial intelligence. A meta-learning method based on feature embedding that exhibits good performance on the few-shot problem was previously proposed. In this method, a pretrained deep convolutional neural network is used as the embedding model of sample features, and the output of a single layer is used as the feature representation of samples. The main limitations of that method are its inability to fuse the low-level texture features and high-level semantic features of the embedding model and its lack of joint optimization of the embedding model and the classifier. Therefore, a multilayer adaptive joint training and optimization method for the embedding model is proposed in the current study. Its main characteristics are the use of a multilayer adaptive hierarchical loss to train the embedding model and the use of a quantum genetic algorithm to jointly optimize the embedding model and the classifier. Validation was performed on multiple public datasets for meta-learning model testing. The proposed method shows higher accuracy than multiple baseline methods.
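The multilayer adaptive hierarchical loss is only described at a high level here, so the following is a minimal sketch of the idea: auxiliary classification heads are attached to several stages of a backbone so that both low-level and high-level features supervise training, and the per-layer losses are combined with learnable, softmax-normalised weights. The ResNet-18 backbone, head design, and weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class MultiLayerEmbedding(nn.Module):
    """Backbone with per-stage heads and adaptive per-layer loss weights (sketch)."""

    def __init__(self, num_classes: int):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # the paper uses a pretrained embedding
        # Keep the stem and the four residual stages so intermediate features can be tapped.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])
        dims = [64, 128, 256, 512]  # channel width of each ResNet-18 stage
        self.heads = nn.ModuleList([nn.Linear(d, num_classes) for d in dims])
        # One learnable logit per stage; softmax turns them into adaptive loss weights.
        self.loss_logits = nn.Parameter(torch.zeros(len(dims)))

    def forward(self, x):
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            feats.append(F.adaptive_avg_pool2d(x, 1).flatten(1))  # global pooling per stage
        return feats

    def hierarchical_loss(self, x, y):
        feats = self(x)
        weights = torch.softmax(self.loss_logits, dim=0)
        per_layer = torch.stack([F.cross_entropy(head(f), y) for head, f in zip(self.heads, feats)])
        return (weights * per_layer).sum()


# One supervised step on a meta-training batch (shapes are illustrative).
model = MultiLayerEmbedding(num_classes=64)
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
images, labels = torch.randn(8, 3, 84, 84), torch.randint(0, 64, (8,))
loss = model.hierarchical_loss(images, labels)
opt.zero_grad(); loss.backward(); opt.step()
```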

Academic Editor: Punit Gupta

Summary

Research Article

A Joint Optimization Framework of the Embedding Model and Classifier for Meta-Learning. A meta-learning method based on feature embedding that exhibits good performance on the few-shot problem was previously proposed. In this method, a pretrained deep convolutional neural network was used as the embedding model of sample features, and the output of one layer was used as the feature representation of samples. The new method uses supervised learning to train a deep neural network to represent the features of the samples, and it explores how small-sample features embedded at different layers of the network perform on the classifier in the second stage of training. The meta-training set is used to supervise the training of individual n by the method of Subsection 3.3.
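The "individuals" referred to above belong to the quantum genetic algorithm that jointly configures the embedding model and classifier. Below is a minimal sketch of how such a QGA loop could work, assuming a binary encoding over candidate backbone stages, a standard rotation-gate update, and a stub fitness function; in the paper the fitness would be the few-shot accuracy of the configured embedding and classifier on meta-training/meta-test episodes, and every name and the 4-bit encoding here are assumptions, not the authors' code.

```python
import numpy as np

N_BITS, POP, GENERATIONS = 4, 8, 20   # bits = candidate backbone stages to fuse
rng = np.random.default_rng(0)


def fitness(bits: np.ndarray) -> float:
    """Stub: score a layer-selection mask. In the real method this would train a
    logistic regression classifier on the selected, concatenated features and
    return the resulting episode accuracy."""
    if bits.sum() == 0:
        return 0.0
    return float(bits @ np.array([0.1, 0.2, 0.3, 0.4]))  # dummy preference for deeper layers


# Qubit angles initialised to the uniform superposition (theta = pi/4).
theta = np.full((POP, N_BITS), np.pi / 4)
best_bits, best_fit = None, -np.inf

for _ in range(GENERATIONS):
    # "Observe" each individual: probability of measuring a 1 is sin^2(theta).
    bits = (rng.random((POP, N_BITS)) < np.sin(theta) ** 2).astype(int)
    fits = np.array([fitness(b) for b in bits])
    if fits.max() > best_fit:
        best_fit, best_bits = fits.max(), bits[fits.argmax()].copy()
    # Quantum rotation gate: nudge each qubit toward the best individual's bit value.
    delta = 0.05 * np.pi
    theta = theta + delta * ((best_bits == 1) & (bits == 0))
    theta = theta - delta * ((best_bits == 0) & (bits == 1))
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)

print("selected stage mask:", best_bits, "fitness:", best_fit)
```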

[Figure/table residue: framework overview — the embedding head uses global pooling followed by FC layers; candidate individuals are tested on the meta-test set and the best individual is chosen; a logistic regression classifier is then trained for inference. Experiment labels: "Ours simple", "Ours".]
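The inference stage indicated in the figure (fit a logistic regression classifier on the embedded support set, then classify the query set) could look roughly like the sketch below. Here `embed` is a stand-in (a fixed random projection) for the trained embedding model and the episode data are synthetic; only the fit-on-support / predict-on-query pattern reflects the described pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n_way, n_shot, n_query, dim = 5, 1, 15, 3 * 84 * 84

# Fixed random projection standing in for the trained embedding model.
W = rng.standard_normal((dim, 128))


def embed(images: np.ndarray) -> np.ndarray:
    """Map raw (flattened) images to L2-normalised feature vectors."""
    return normalize(images @ W)


# A synthetic 5-way 1-shot episode with 15 query images per class.
support_x = rng.standard_normal((n_way * n_shot, dim))
support_y = np.repeat(np.arange(n_way), n_shot)
query_x = rng.standard_normal((n_way * n_query, dim))
query_y = np.repeat(np.arange(n_way), n_query)

# Fit the classifier on the support embeddings, then score the query set.
clf = LogisticRegression(max_iter=1000, C=1.0)
clf.fit(embed(support_x), support_y)
accuracy = (clf.predict(embed(query_x)) == query_y).mean()
print(f"episode accuracy: {accuracy:.3f}")
```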