Abstract

The widespread adoption of intelligent machines and sensors has generated vast amounts of time series data, leading to the increasing use of neural networks in time series classification. Federated learning has emerged as a promising machine learning paradigm that reduces the risk of user privacy leakage. However, federated learning is vulnerable to backdoor attacks, which pose significant security threats. Moreover, existing attacks on time series rely on unrealistic white-box assumptions, resulting in poor adaptability and limited stealthiness. To overcome these limitations, this paper proposes a gradient-free black-box method called local perturbation-based backdoor Federated Learning Attack for Time Series classification (FLATS). The attack is formulated as a constrained optimization problem and solved with a differential evolution algorithm, without requiring any knowledge of the internal architecture of the target model. In addition, the proposed method treats the shapelet interval of a time series as the local perturbation range and adopts a soft target poisoning approach to minimize the difference between the attacker model and the benign model. Experimental results demonstrate that the proposed method can effectively attack federated time series classification models, revealing potential security issues, while generating imperceptible poisoned samples that evade various defence methods.
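The core idea described above, optimizing a trigger confined to a local (shapelet) interval with differential evolution while querying the target model only as a black box, can be illustrated with a minimal sketch. This is not the paper's implementation: the oracle model_predict_proba, the interval parameters start and length, and the per-step bound eps are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): craft a local backdoor trigger
# for one time series using SciPy's differential evolution. The target
# model is a black box -- we only query its class probabilities.
import numpy as np
from scipy.optimize import differential_evolution

def craft_trigger(series, model_predict_proba, target_class,
                  start, length, eps=0.1):
    """Optimize a perturbation confined to series[start:start+length]
    (e.g. a shapelet interval), bounded by +/- eps per time step.

    model_predict_proba is a hypothetical black-box oracle mapping a
    1-D series to a vector of class probabilities.
    """
    def loss(delta):
        poisoned = series.copy()
        poisoned[start:start + length] += delta
        # Maximize the target-class probability, i.e. minimize its negative.
        return -model_predict_proba(poisoned)[target_class]

    # The box constraint on each perturbed time step encodes the
    # constrained-optimization formulation; no gradients are needed.
    bounds = [(-eps, eps)] * length
    result = differential_evolution(loss, bounds, maxiter=50,
                                    popsize=15, tol=1e-6, seed=0)

    poisoned = series.copy()
    poisoned[start:start + length] += result.x
    return poisoned
```

Under the soft target poisoning approach the abstract mentions, the resulting poisoned sample would then be labelled with a softened target distribution rather than a hard label, keeping the attacker's model update close to a benign one.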
