Abstract

Business-related classification problems often provide only a few labeled samples per category, which makes them difficult for standard learning algorithms. In this paper, we therefore study few-shot classification in the commercial field. To accurately identify the categories of few-shot learning problems, we propose a probabilistic network (PN) method for few-shot and one-shot learning problems. After augmenting the original data, we develop the PN method around feature extraction, category comparison, and loss-function analysis. The effectiveness of the method is validated on two examples (absenteeism at work and Las Vegas Strip hotels). Experimental results demonstrate that the PN method can effectively identify the categories of commercial few-shot learning problems, so the proposed method can be applied to business-related few-shot classification problems.

Highlights

  • Since 2015, most of the research studies on few-shot learning have focused on neural networks [14,15,16,17,18]

  • As early as 2001, the memory-based neural network method was shown to be applicable to meta-learning [19], whereby the bias and output were adjusted by updating weights and by learning to quickly cache expressions into memory, respectively. The authors used long short-term memory (LSTM) and other recurrent neural networks (RNNs) to treat the model data as a sequence for training and inputted new class samples for classification during testing

  • In the Siamese network, two networks with the same parameters are used to extract features of the two samples. Then, the extracted features are inputted into the discriminator to determine whether the two samples belong to the same object class [26]. The matching network builds encoders for the supporting and query sets, and the output of the final classifier is the weighted sum of the predicted values between the supporting and query set samples [27]. The prototype network maps the sample data in each category to a given space and extracts their “mean” and Euclidean distance to represent the class prototype and distance measurement, respectively. Thus, the training data and class prototype exhibit the closest distance compared to other prototypes [28]
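The prototype-network idea described in the last bullet can be sketched in a few lines: class prototypes are the feature means of the support samples, and a query is assigned to the class of its nearest prototype under Euclidean distance. This is a minimal illustration of that general scheme, not the authors' PN implementation; the function names and the use of raw features instead of a learned embedding are assumptions for the example.

```python
import numpy as np

def prototypes(support_x, support_y):
    """Compute one prototype (the feature mean) per class from the support set."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_x, classes, protos):
    """Assign each query sample to the class of its nearest prototype
    (Euclidean distance), as in the prototype-network scheme above."""
    # distances: (n_queries, n_classes)
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

In a full few-shot model, `support_x` and `query_x` would be embeddings produced by a trained feature extractor rather than raw inputs.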


Introduction

Since 2015, most of the research studies on few-shot learning have focused on neural networks [14,15,16,17,18]. As early as 2001, the memory-based neural network method was shown to be applicable to meta-learning [19], whereby the bias and output were adjusted by updating weights and by learning to quickly cache expressions into memory, respectively. The gradient descent algorithm is used to optimize the weights in neural networks, which is typically a slow process. Model-agnostic meta-learning (MAML) is a general optimization algorithm that expands the differential calculation process through the computational graphs of the gradient descent method and learns a model over tasks rather than samples. Reptile and MAML are gradient-based meta-optimization methods and are model-independent. The optimizer performs a multistep gradient descent algorithm on each training task and updates the model with the results of the last step. The Las Vegas Strip hotels example validates that the PN method can identify the categories of commercial few-shot learning problems.
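The Reptile-style meta-update described above (multistep gradient descent on each task, then updating the model with the last-step results) can be sketched as follows. This is a simplified illustration under assumed names (`reptile_step`, per-task gradient functions), not the paper's training procedure.

```python
import numpy as np

def reptile_step(theta, task_grads, inner_lr=0.01, outer_lr=0.1, inner_steps=5):
    """One Reptile meta-update: run several SGD steps on each task, then move
    the meta-parameters toward the average of the task-adapted parameters."""
    adapted = []
    for grad_fn in task_grads:          # each task supplies its own gradient function
        phi = theta.copy()
        for _ in range(inner_steps):    # multistep inner gradient descent
            phi -= inner_lr * grad_fn(phi)
        adapted.append(phi)             # keep the last-step parameters
    # meta-update: interpolate toward the mean of the adapted parameters
    return theta + outer_lr * (np.mean(adapted, axis=0) - theta)
```

Unlike MAML, this update needs no second-order derivatives; it only reuses the first-order gradients computed during the inner loops, which is what makes the method model-independent.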
