Abstract

Data-driven electricity theft detectors rely on customers’ reported energy consumption readings to detect malicious behavior. One common implicit assumption in such detectors is that the training data are correctly labeled. Unfortunately, these detectors are vulnerable to data poisoning attacks that inject false labels during training. This article addresses three major questions: What is the impact of data poisoning attacks on the detector’s performance? Which detector is more robust against data poisoning attacks, a generalized or a customer-specific detector? How can the detector’s robustness against data poisoning attacks be improved? Our investigations reveal that: (a) Shallow and deep learning-based detectors suffer from data poisoning attacks, which may lead to a significant deterioration in detection rate of up to 17%. Furthermore, deep detectors offer a 12% performance improvement over shallow detectors. (b) Generalized detectors present a 4% performance improvement over customer-specific detectors, even in the presence of data poisoning attacks. To enhance the detectors’ robustness against data poisoning attacks, we propose a sequential ensemble detector based on a deep auto-encoder with attention (AEA), gated recurrent units (GRUs), and feed-forward neural networks. The proposed robust detector retains a stable detection performance that deteriorates by only 1–3% in the presence of strong data poisoning attacks.
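The label-flipping style of data poisoning described above can be illustrated with a minimal sketch. The function below flips a chosen fraction of binary training labels (here assuming a 0 = benign, 1 = theft encoding); the function name, signature, and encoding are illustrative assumptions, not the paper's exact attack model.

```python
import numpy as np

def poison_labels(labels, flip_fraction, seed=None):
    """Simulate a label-flipping data poisoning attack.

    Flips `flip_fraction` of the binary labels (0 = benign, 1 = theft)
    so that the detector is trained on mislabeled examples.
    Illustrative sketch only; the attack model in the paper may differ.
    """
    rng = np.random.default_rng(seed)
    poisoned = np.asarray(labels).copy()
    n_flip = int(round(flip_fraction * len(poisoned)))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 -> 1 and 1 -> 0
    return poisoned

# Example: poison 10% of an all-benign label vector.
y_clean = np.zeros(100, dtype=int)
y_poisoned = poison_labels(y_clean, flip_fraction=0.10, seed=0)
```

A detector trained on `y_poisoned` instead of `y_clean` sees 10% of benign customers labeled as thieves, which is the kind of corruption the robustness experiments in the abstract measure.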
