To address the unstable detection performance of wind turbine gearbox sound anomaly detection when only normal data are available for training, and the degraded performance caused by poor separation of highly similar samples, this paper proposes a self-supervised sound anomaly detection algorithm for wind turbine gearboxes that fuses time-domain features with Mel spectrograms, improves the MobileFaceNet (MFN) model, and incorporates a Gaussian Mixture Model (GMM). The feature fusion compensates for anomaly information lost in Mel spectrogram features, and a style attention mechanism (SRM) introduced into MFN enhances feature expression, improving the accuracy and stability of the anomalous sound detection model. On a wind turbine gearbox sound dataset from a wind farm in Guangyuan, the proposed method, STgram-MFN-SRM, achieved an average AUC of 96.16% across sound data from five measuring points on the gearbox. Compared with the traditional anomaly detection methods LogMel-MFN, STgram-MFN, STgram-Resnet50, and STgram-MFN-SRM(CE), the average AUC across the five measuring points increased by 5.19%, 4.73%, 11.06%, and 2.88%, respectively. The proposed method therefore effectively improves the performance of wind turbine gearbox sound anomaly detection and has significant engineering value for the healthy operation and maintenance of wind turbines.
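The core feature-fusion idea described above can be illustrated with a minimal NumPy sketch: compute a log-Mel spectrogram from the waveform, derive a frame-aligned time-domain descriptor from the same raw signal, and stack the two as channels of one network input. This is not the paper's implementation; the actual method learns its time-domain branch (e.g., with a 1-D convolutional network), which is replaced here by a simple log-energy descriptor, and all function names and parameters below are illustrative assumptions.

```python
import numpy as np

def frame_signal(x, n_fft=1024, hop=512):
    """Slice a 1-D waveform into overlapping frames of length n_fft."""
    n_frames = 1 + (len(x) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]  # shape (n_frames, n_fft)

def mel_filterbank(sr, n_fft, n_mels=64):
    """Triangular Mel filterbank mapping FFT bins to n_mels Mel bands."""
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz = mel2hz(np.linspace(hz2mel(0.0), hz2mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel(x, sr, n_fft=1024, hop=512, n_mels=64):
    """Log-Mel spectrogram: windowed power spectrum through the filterbank."""
    frames = frame_signal(x, n_fft, hop) * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-8)  # shape (n_frames, n_mels)

def fuse_features(x, sr, n_fft=1024, hop=512, n_mels=64):
    """Stack log-Mel and a crude time-domain channel, frame-aligned.

    A stand-in for the learned time-domain branch: per-frame log energy,
    tiled across the Mel axis so both channels share one spatial grid.
    """
    lm = log_mel(x, sr, n_fft, hop, n_mels)
    frames = frame_signal(x, n_fft, hop)
    energy = np.log(np.mean(frames ** 2, axis=1) + 1e-8)
    tgram = np.tile(energy[:, None], (1, n_mels))
    return np.stack([lm, tgram], axis=0)  # (2, n_frames, n_mels)

# Usage: one second of a 440 Hz tone at 16 kHz becomes a 2-channel input.
sr = 16000
wave = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
features = fuse_features(wave, sr)
print(features.shape)  # → (2, 30, 64)
```

The stacked two-channel array is the kind of input an MFN-style 2-D CNN consumes; the point of the second channel is that raw time-domain energy retains transient information that Mel smoothing can suppress.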