Abstract

In modern industries, data-driven fault detection and classification (FDC) systems can efficiently maintain industrial security and stability, yet the security of the data-driven FDC system itself is rarely, if ever, considered. This security problem, known as adversarial vulnerability, is intrinsic to data-driven machine learning models, which produce incorrect predictions when the input data are maliciously perturbed. This paper addresses this new security topic for data-driven FDC systems by 1) summarizing and comparing recent, representative adversarial attack and defense methods for fault classifiers; 2) proposing novel attack and defense techniques for unsupervised fault detectors; 3) constructing a novel industrial adversarial security benchmark for FDC systems on the Tennessee-Eastman process (TEP) dataset; and 4) exploring which attacks pose the greatest potential threat to FDC systems and which defense techniques most effectively mitigate them. The results reveal unique security properties of FDC systems: 1) for fault classifiers, black-box attacks approach the strength of the white-box FGSM attack, and the universal transferable attack is not significantly stronger than random noise; 2) weak adversarial training performs excellently, yielding a large improvement in adversarial accuracy with a negligible decrease in clean accuracy; 3) fault detectors are intrinsically more robust and can be well protected by strong adversarial training. Further intriguing properties and insights are demonstrated in the paper. This pioneering work can guide researchers and practitioners in discovering and navigating the field of FDC system adversarial robustness, outlining research directions and open problems.
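
As a concrete illustration of the white-box FGSM attack referenced above, the following is a minimal PyTorch sketch, not the paper's implementation; the linear model, the 52-feature input (matching TEP's process-variable count), and the 21-class output (normal operation plus 20 fault types) are illustrative assumptions.

```python
# Minimal FGSM sketch (hypothetical model and data; not the paper's code).
# FGSM perturbs the input in the direction of the sign of the loss gradient,
# bounded by an L-infinity budget eps.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """Return an adversarial example x_adv with ||x_adv - x||_inf <= eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single gradient-sign step on the input, then detach from the graph.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy usage: an illustrative linear "fault classifier" on 52 sensor features.
model = nn.Linear(52, 21)            # 21 classes: normal + 20 fault types
x = torch.randn(8, 52)               # a batch of standardized sensor readings
y = torch.randint(0, 21, (8,))       # ground-truth fault labels
x_adv = fgsm_attack(model, x, y, eps=0.1)
```

The same single-step perturbation, applied to training batches rather than test inputs, underlies the (weak) adversarial training defense evaluated in the paper.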
