Abstract

Existing deep learning (DL) algorithms rely on large amounts of training data and struggle to extract fault features effectively in few-shot fault diagnosis. Model-agnostic meta-learning (MAML) also faces challenges, including the limited ability of a basic convolutional neural network (CNN) with a single convolutional kernel to extract fault features comprehensively, as well as unstable model training caused by the inner and outer double-layer loops. To address these issues, this paper presents a multi-step loss meta-learning method based on multi-scale feature extraction (MFEML). Firstly, an improved multi-scale feature extraction module (IMFEM) is designed to remedy the insufficient feature extraction capability of the CNN. Secondly, a multi-step loss is used to reconstruct the meta-loss, addressing the instability of MAML training. Finally, experimental results on two datasets demonstrate the effectiveness of the MFEML.
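The multi-step loss idea mentioned above can be illustrated with a minimal sketch: instead of computing the meta-loss only after the final inner-loop step, the query loss is evaluated after every inner step and combined as a weighted sum, which smooths the outer-loop gradient signal. The model (a one-parameter linear regressor), the step weights, and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def inner_step(w, x, y, lr):
    # One gradient step on squared error for a linear model y ~ w * x
    # (stands in for the task-specific adaptation of MAML's inner loop).
    grad = 2.0 * np.mean((w * x - y) * x)
    return w - lr * grad

def multi_step_meta_loss(w0, tasks, n_steps=3, inner_lr=0.1, weights=None):
    """Meta-loss as a weighted sum of query losses after *each* inner step,
    rather than only after the last step (the multi-step loss idea).
    Each task is (x_support, y_support, x_query, y_query)."""
    if weights is None:
        # Weights that favour later steps; the schedule is an assumption.
        weights = np.arange(1, n_steps + 1, dtype=float)
        weights /= weights.sum()
    total = 0.0
    for (xs, ys, xq, yq) in tasks:
        w = w0
        for k in range(n_steps):
            w = inner_step(w, xs, ys, inner_lr)       # inner-loop adaptation
            step_loss = np.mean((w * xq - yq) ** 2)   # query loss at step k
            total += weights[k] * step_loss           # accumulate weighted loss
    return total / len(tasks)
```

In full MAML the outer loop would backpropagate this meta-loss through the inner updates to adjust the initialization `w0`; the sketch only shows how the per-step losses are aggregated.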
