Abstract

Deep learning techniques have been widely applied to intelligent fault diagnosis (IFD) in applications such as bearing fault diagnosis, wind turbines, and drilling operations. Notwithstanding the huge success of deep learning-based models for intelligent fault diagnosis, their vulnerability to adversarial attacks has been largely neglected. An adversarial attack crafts an adversarial sample by adding an imperceptible perturbation to a clean data sample. In this paper, we investigate the performance of different deep models against four state-of-the-art adversarial attacks. A total of four deep models have been tested against untargeted white-box adversarial attacks. Moreover, the transferability of adversarial examples across the different deep models is also inspected. Experiments and results reveal that deep models for IFD are highly susceptible to adversarial examples crafted with the four state-of-the-art adversarial attacks. The proposed work presents extensive insight into adversarial samples of machinery vibration signals from the CWRU dataset. Additionally, we are releasing the first adversarial attack dataset for IFD, i.e. Adv-IFD. The code used in this work and the adversarial attack datasets are available at: https://github.com/achyutmani/ADV-IFD
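To illustrate the kind of imperceptible perturbation described above, the Fast Gradient Sign Method (FGSM) is a common white-box attack that nudges each input element by a small budget eps in the direction of the loss gradient's sign. This is only a minimal NumPy sketch on a synthetic 1-D "vibration signal"; the abstract does not specify which four attacks were used, and the signal and gradient here are hypothetical placeholders, not data from the CWRU dataset.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One FGSM step: add an eps-bounded perturbation in the
    direction of the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

# Hypothetical clean vibration sample and loss gradient (illustrative only).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)      # clean signal segment
grad = rng.standard_normal(8)   # gradient of the model's loss w.r.t. x
eps = 0.01                      # perturbation budget

x_adv = fgsm_perturb(x, grad, eps)
# The perturbation is bounded in the L-infinity norm by eps.
print(np.max(np.abs(x_adv - x)))
```

Because the perturbation magnitude never exceeds eps per element, the adversarial signal remains visually indistinguishable from the clean one while potentially flipping the diagnosis model's prediction.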
