Abstract
Federated learning (FL) has attracted significant interest given its prominent advantages and applicability in many scenarios. However, it has been demonstrated that sharing updated gradients/weights during the training process can lead to privacy concerns. In the context of the Internet of Things (IoT), this can be exacerbated by intrusion detection systems (IDSs), which are intended to detect security attacks by analyzing the devices' network traffic. Our work provides a comprehensive evaluation of differential privacy techniques applied during the training of an FL-enabled IDS for industrial IoT. Unlike previous approaches, we deal with non-independent and identically distributed (non-IID) data over the recent ToN_IoT dataset, and compare the accuracy obtained under different privacy requirements and aggregation functions, namely FedAvg and the recently proposed Fed+. According to our evaluation, the use of Fed+ in our setting provides similar results even when noise is included in the federated training process.
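To make the evaluated class of mechanisms concrete, the following is a minimal sketch of additive-noise differential privacy applied to a model update before it is shared: the update is norm-clipped to bound its sensitivity and then perturbed with Gaussian or Laplace noise. The clipping bounds, noise multiplier, and epsilon below are illustrative parameters, not the paper's settings.

```python
import numpy as np

def clip_by_norm(update, bound, ord):
    """Scale `update` so its `ord`-norm is at most `bound` (bounds sensitivity)."""
    norm = np.linalg.norm(update, ord=ord)
    return update * min(1.0, bound / (norm + 1e-12))

def gaussian_mechanism(update, l2_bound, noise_multiplier, rng):
    """Gaussian mechanism: L2-clip the update, then add N(0, (z * C)^2) noise."""
    clipped = clip_by_norm(update, l2_bound, ord=2)
    return clipped + rng.normal(0.0, noise_multiplier * l2_bound, size=update.shape)

def laplace_mechanism(update, l1_bound, epsilon, rng):
    """Laplace mechanism: L1-clip the update, then add Laplace(b = sens/eps) noise."""
    clipped = clip_by_norm(update, l1_bound, ord=1)
    return clipped + rng.laplace(0.0, l1_bound / epsilon, size=update.shape)

rng = np.random.default_rng(0)
update = np.array([0.4, -1.2, 0.7])  # a toy model update
print(gaussian_mechanism(update, l2_bound=1.0, noise_multiplier=1.1, rng=rng))
print(laplace_mechanism(update, l1_bound=1.0, epsilon=2.0, rng=rng))
```

Clipping is what makes the noise scale meaningful: without a bound on each party's contribution, no finite noise level yields a differential privacy guarantee.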
Highlights
As the Internet of Things (IoT) expands, there is a significant increase in the number and impact of security vulnerabilities and threats associated with IoT devices and systems. To cope with such concerns, Intrusion Detection Systems (IDS) represent a well-known approach for early detection of IoT attacks and cyber-threats [1]
Our work provides a comprehensive evaluation of Differential Privacy (DP) approaches through several additive noise techniques based on Gaussian and Laplacian distributions, which are applied during the training of a Federated Learning (FL)-enabled IDS for Industrial IoT (IIoT)
We compared different noise-addition techniques based on Gaussian and Laplacian distributions, and assessed the accuracy obtained using Fed+, a recently proposed alternative to the FedAvg aggregation function designed to deal with non-IID data distributions, which are prevalent in the real world (both aggregation schemes are sketched below)
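The contrast between the two aggregation functions can be sketched as follows: FedAvg replaces every party's model with a data-size-weighted average, whereas Fed+, as generally formulated, lets each party keep a personalized model and move only partway toward a central point, which may be a robust aggregate. The interpolation coefficient alpha and the coordinate-wise median used below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: weighted average of client models, proportional to local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def fedplus_step(client_weights, alpha=0.5):
    """Fed+-style update: each party moves only partway (alpha) toward a
    central point, here a coordinate-wise median (one of several robust
    aggregates Fed+ admits), and otherwise keeps its own model."""
    central = np.median(np.stack(client_weights), axis=0)
    return [(1 - alpha) * w + alpha * central for w in client_weights]

clients = [np.array([0.9, 1.1]), np.array([1.0, 0.9]), np.array([5.0, -3.0])]  # one outlier
print(fedavg(clients, client_sizes=[100, 100, 100]))  # single global model
print(fedplus_step(clients, alpha=0.5))               # per-party models
```

Because parties are never forced onto a single consensus model, this style of aggregation is better suited to the heterogeneous (non-IID) data distributions Fed+ was proposed to handle.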
Summary
As the Internet of Things (IoT) expands, there is a significant increase in the number and impact of security vulnerabilities and threats associated with IoT devices and systems. To cope with such concerns, Intrusion Detection Systems (IDS) represent a well-known approach for early detection of IoT attacks and cyber-threats [1]. AI-based IDSs are trained on monitored network traffic and behavioural data from heterogeneous IoT devices deployed in remote, possibly untrusted, and distributed domains and systems, to increase the overall accuracy of attack detection. This approach raises privacy issues, as different domains might need to share their private data [3]. As an alternative to typical centralized learning approaches, Federated Learning (FL) was proposed in 2016 [4] as a collaborative learning approach in which an AI algorithm is trained locally across multiple decentralized edge devices, called clients or parties, and their local updates are continuously aggregated into a global model through several training rounds. Differential Privacy (DP) is usually considered in the scope of FL settings due to the stringent communication requirements of other privacy-preserving approaches, such as Secure Multiparty Computation (SMC) [6].
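Putting the pieces together, the following is a minimal, self-contained sketch of one such federated training loop with DP noise, under stated assumptions: local_train is a hypothetical stand-in for each party's local training on its own traffic data, the per-party optima are synthetic placeholders for non-IID local datasets, and equal-weight FedAvg is used for aggregation.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_train(global_w, party_optimum):
    """Stand-in for local training on a party's IDS traffic data:
    one step toward a party-specific optimum (illustrative only)."""
    return global_w + 0.1 * (party_optimum - global_w)

def dp_noise(update, clip=1.0, sigma=0.5):
    """Clip and add Gaussian noise before sharing (see mechanism sketch above)."""
    update = update * min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    return update + rng.normal(0.0, sigma * clip, size=update.shape)

# Hypothetical per-party optima standing in for non-IID local datasets.
party_optima = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
global_w = np.zeros(2)

for round_ in range(5):
    # Each party trains locally and shares a noised update with the aggregator.
    updates = [dp_noise(local_train(global_w, opt) - global_w) for opt in party_optima]
    global_w = global_w + np.mean(updates, axis=0)  # FedAvg with equal weights
print(global_w)
```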