Abstract

Machine learning is a popular approach to security monitoring and intrusion detection in cyber-physical systems (CPS) such as the smart grid. However, the highly dynamic, open environments in which these CPS operate can cause significant divergence in data distributions, which may require adaptation of the learned model. While transfer learning has proven effective at retaining detection performance under such divergence, there is still limited work on a more fundamental question that can be called transferability: when should one apply transfer learning? To address this challenge, this paper proposes a divergence-based transferability analysis to decide whether to apply transfer learning and to autonomically adapt learning-based intrusion detectors. The work first identifies three metrics for measuring the divergence between data distributions, and then explores the relation between the detector's accuracy drop and the divergence through extensive temporal, spatial, and spatiotemporal experiments. Two regression models are trained to approximate the divergence-accuracy relation and are then used to predict the accuracy drop, which determines whether to apply transfer learning. Finally, a state-of-the-art domain adversarial neural network (DANN) classifier is adopted as the transfer learning model. Datasets combining real normal-operation profiles and simulated attacks are used to validate the effectiveness of the proposed transferability analysis against variations in attack timing, attack location, and both. In all three scenarios, the proposed analysis accurately predicts the accuracy drop from the divergence, with an RMSE below 4.20%, and the DANN can be triggered in a timely manner to achieve an accuracy improvement of over 5.00%.
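To make the decision workflow described above concrete, the following is a minimal sketch of a divergence-based transferability check. It is illustrative only: the abstract does not specify the three divergence metrics, the two regression models, or the trigger threshold, so the Wasserstein-based metric, the linear regressor, the example training points, and the 5% threshold below are all assumptions.

```python
# Hypothetical sketch of a divergence-based transferability decision,
# NOT the authors' exact metrics or models.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.linear_model import LinearRegression

def feature_divergence(source, target):
    """One illustrative divergence metric: mean per-feature
    Wasserstein distance between source and target datasets."""
    return float(np.mean([
        wasserstein_distance(source[:, j], target[:, j])
        for j in range(source.shape[1])
    ]))

# Regression model approximating the divergence -> accuracy-drop relation,
# fitted offline on (divergence, observed accuracy drop) pairs.
# The training pairs below are made-up placeholders.
history_divergence = np.array([[0.05], [0.10], [0.20], [0.40]])
history_acc_drop = np.array([0.5, 1.8, 4.0, 9.5])  # accuracy drop in percent
drop_model = LinearRegression().fit(history_divergence, history_acc_drop)

def should_transfer(source, target, drop_threshold_pct=5.0):
    """Trigger transfer learning (e.g., DANN retraining) when the
    predicted accuracy drop exceeds a chosen threshold (assumed 5% here)."""
    divergence = feature_divergence(source, target)
    predicted_drop = drop_model.predict([[divergence]])[0]
    return predicted_drop > drop_threshold_pct

# Usage: if should_transfer(X_source, X_new), adapt the intrusion
# detector with a transfer learning model such as DANN.
```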
