In practical applications, algorithmic models face numerous challenges, such as noise, interference, and input variations, which can significantly degrade their performance. Many methods have been proposed to enhance model robustness, but assessing the effectiveness of these improvements generally requires comparing a model's performance before and after the same noise is applied and analyzing the resulting changes. Moreover, comparing the robustness of multiple models that all meet the basic requirements of a given task calls for quantitative analysis based on specific indicators. This is especially important in fault diagnosis, where multiple types of noise interference in the data can hinder accurate fault classification. To address this situation, this paper presents a quantitative method for evaluating the robustness of intelligent fault diagnosis algorithms based on the self-attention mechanism. The proposed method injects noise into the data, divides the dataset into sub-datasets according to signal-to-noise ratio, calculates a sub-indicator for each sub-dataset after training, dynamically assigns weights to these sub-indicators using the self-attention mechanism, and combines the weighted sub-indicators into a comprehensive evaluation value for assessing robustness. The method was validated in experiments involving three models, and the results demonstrate the reliability of this quantitative approach to robustness evaluation.
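As a rough illustration of the aggregation step described above, the sketch below combines per-SNR sub-indicators into a single robustness value using one self-attention layer. It is a minimal, hypothetical example: the indicator values, the dimensions, the randomly initialised projections standing in for learned parameters, and the function name robustness_score are assumptions, not the authors' implementation.

# Hypothetical sketch of attention-weighted aggregation of per-SNR sub-indicators.
# Names, dimensions, and random projections are assumptions for illustration only.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def robustness_score(sub_indicators, d_k=8, seed=0):
    """Combine per-SNR sub-indicators into one robustness value.

    sub_indicators: 1-D sequence, one indicator (e.g., accuracy) per SNR sub-dataset.
    A single-head self-attention layer scores the sub-indicators; the resulting
    weights form a convex combination of them.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(sub_indicators, dtype=float).reshape(-1, 1)   # (n_snr, 1)
    w_q = rng.normal(size=(1, d_k))                              # stand-in for learned query projection
    w_k = rng.normal(size=(1, d_k))                              # stand-in for learned key projection
    q, k = x @ w_q, x @ w_k                                      # (n_snr, d_k)
    scores = (q @ k.T) / np.sqrt(d_k)                            # (n_snr, n_snr)
    attn = np.apply_along_axis(softmax, 1, scores)               # row-wise softmax
    weights = attn.mean(axis=0)                                  # one weight per SNR level
    return float(weights @ x.ravel())

# Example: accuracies of one model on sub-datasets at -4, 0, 4, and 8 dB SNR.
print(robustness_score([0.62, 0.78, 0.90, 0.95]))

In a trained setting, the projection matrices would be learned rather than randomly initialised, so the attention weights would reflect how informative each SNR sub-dataset is for distinguishing robust from non-robust models.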