Abstract

The robustness of machine learning (ML) models has gained much attention along with their wide application in safety-critical Industrial Internet of Things (IIoT) paradigms. Researchers have found that specific attacks added to sensor measurements can maliciously disturb IIoT monitors built on ML architectures. Traditional detection methods judge whether measurements have been attacked in order to prevent monitor failure. Unfortunately, recent works argue that commonly used detection methods can be circumvented by adaptive attacks that acquire the mechanism of the detector; such methods therefore do not truly enhance the robustness of ML models. Instead, general robust mechanisms should be applied to authentically enhance the robustness of models against any potential attack within specific restrictions. On the basis of the above argument, we design a robust condition monitor, called the robust temporal convolutional network (RTCN), for predicting the fault condition of IIoT systems using the adversarial training technique. The model is designed to be formally robust to attacks of restricted magnitude. The temporal convolutional network (TCN) is employed as the base structure of the monitor; it captures temporal information from sensors to enhance the feature extraction performance of the model. We also present a novel false data injection (FDI) attack generation method that uses the concept of adversarial perturbations to disturb well-trained monitors. Experimental results verify the feature extraction performance of our model on IIoT systems. Furthermore, adversarial training in a min-max manner effectively improves the reliability of ML-based IIoT monitors against strong FDI attacks.
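As a concrete reading of the "min-max manner" mentioned above: adversarial training is commonly formulated as the saddle-point problem

    min_θ E_(x,y) [ max_{‖δ‖∞ ≤ ε} L(f_θ(x + δ), y) ],

where the inner maximization crafts a magnitude-restricted perturbation δ (here, the FDI attack) and the outer minimization fits the monitor to those worst-case inputs. The sketch below illustrates this scheme under assumptions that are not taken from the paper: PyTorch, a toy dilated 1-D convolutional network standing in for the TCN, projected gradient descent (PGD) for the inner step, and illustrative hyperparameters (eps, alpha, steps).

import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    # Illustrative stand-in for the paper's TCN: stacked dilated 1-D convs
    # over (batch, channels, time) sensor windows. Not the authors' model.
    def __init__(self, in_ch, n_classes, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.fc(self.net(x).squeeze(-1))

def pgd_fdi(model, x, y, eps=0.1, alpha=0.02, steps=10):
    # Inner maximization: an FDI-style perturbation bounded by eps in the
    # L-infinity norm, found by projected gradient ascent on the loss.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss_fn(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss surface
            delta.clamp_(-eps, eps)             # project back into the budget
        delta.grad.zero_()
    return delta.detach()

def adversarial_train_step(model, opt, x, y, eps=0.1):
    # Outer minimization: update the monitor on worst-case perturbed inputs.
    delta = pgd_fdi(model, x, y, eps=eps)
    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(model(x + delta), y)
    loss.backward()
    opt.step()
    return loss.item()

A typical loop would call adversarial_train_step on each minibatch of labeled sensor windows; at evaluation time, pgd_fdi alone can serve as the attack used to stress-test the trained monitor.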
