DOI: https://doi.org/10.1016/j.eng.2021.07.033
Journal: Engineering | Publication Date: Dec 1, 2022 | Citations: 3 | License: CC BY-NC-ND
Recently developed fault classification methods for industrial processes are mainly data-driven. Notably, models based on deep neural networks have significantly improved fault classification accuracy owing to the inclusion of a large number of data patterns. However, these data-driven models are vulnerable to adversarial attacks: small perturbations on the samples can cause the models to provide incorrect fault predictions. Several recent studies have demonstrated the vulnerability of machine learning methods and the existence of adversarial samples. This paper proposes a black-box attack method with an extreme constraint for a safety-critical industrial fault classification system: only one variable can be perturbed to craft adversarial samples. Moreover, to hide the adversarial samples in the visualization space, a Jacobian matrix is used to guide the selection of the perturbed variable, making the adversarial samples in the dimensionality-reduced space invisible to the human eye. Using the one-variable attack (OVA) method, we explore the vulnerability of industrial variables and fault types, which can help in understanding the geometric characteristics of fault classification systems. Based on the attack method, a corresponding adversarial-training defense method is also proposed, which efficiently defends against an OVA and improves the prediction accuracy of the classifiers. In experiments, the proposed method was tested on two datasets from the Tennessee–Eastman process (TEP) and Steel Plates (SP). We explore the vulnerability of and correlations among variables and faults, and we verify the effectiveness of OVAs and defenses for various classifiers and datasets. For industrial fault classification systems, the attack success rate of our method is close to (on TEP) or even higher than (on SP) that of the current most effective first-order white-box attack method, which requires perturbation of all variables.
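To make the single-variable constraint concrete, the sketch below illustrates the general idea of Jacobian-guided variable selection described in the abstract: pick the one input variable to which the classifier's loss is most sensitive and perturb only that variable. This is a simplified, white-box-style illustration, not the authors' black-box OVA procedure (which is not detailed in the abstract); the model, perturbation size `eps`, and function name are assumptions for demonstration only.

```python
# Hypothetical sketch of a Jacobian-guided one-variable perturbation.
# Assumes a differentiable PyTorch classifier; the paper's actual OVA
# method is black-box and may select and perturb variables differently.
import torch
import torch.nn.functional as F

def one_variable_perturb(model, x, y_true, eps=0.1):
    """Perturb only the single most gradient-sensitive variable of x.

    model : classifier returning logits of shape (1, n_classes)
    x     : input sample of shape (1, n_variables)
    y_true: true class index, shape (1,)
    eps   : perturbation magnitude (assumed hyperparameter)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)
    loss.backward()

    grad = x.grad.detach()
    # Jacobian-guided selection: the variable with the largest gradient
    # magnitude is treated as the most attack-sensitive one.
    idx = grad.abs().argmax(dim=1)

    x_adv = x.detach().clone()
    # Perturb only that one variable, in the loss-increasing direction.
    x_adv[0, idx] += eps * grad[0, idx].sign()
    return x_adv, idx
```

A defense along the lines described in the abstract would then augment the training set with such single-variable adversarial samples (adversarial training) so the classifier learns to resist them.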