Abstract

Intrusion detection based on federated learning allows the sharing of more high-quality attack samples to improve the intrusion detection performance of local models while preserving the privacy of local data. Most research on federated learning intrusion detection requires local models to be homogeneous. In practical scenarios, however, local models often include both homogeneous and heterogeneous models due to differences in hardware capabilities and business requirements among nodes. Additionally, existing research still leaves room for improvement in the accuracy of recognizing novel attacks. To address these challenges, we propose a Group-based Federated Knowledge Distillation Intrusion Detection approach. First, a step-by-step grouping method produces groups that are internally homogeneous and mutually heterogeneous, laying the foundation for reducing the difficulty of intra-group homogeneous aggregation and inter-group heterogeneous aggregation. Second, in intra-group homogeneous aggregation, a dual-objective optimization model is employed to quantify the learning quality of local models; weight coefficients are assigned based on this learning quality to perform weighted aggregation. Lastly, in inter-group heterogeneous aggregation, the group leader model's learning quality is used to classify and aggregate local soft labels, generating global soft labels. Group leader models then use the global soft labels for knowledge distillation to acquire knowledge from heterogeneous models. Experimental results on the NSL-KDD and UNSW-NB datasets demonstrate the superiority of the proposed method over competing algorithms.
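The two aggregation steps summarized above can be illustrated with a minimal sketch. The function names, the normalization of quality scores into weights, and the temperature value are assumptions for illustration; the paper's actual dual-objective quality metric and distillation procedure are not reproduced here.

```python
import numpy as np

def quality_weighted_aggregate(param_list, qualities):
    """Intra-group step (sketch): average homogeneous local model
    parameters, weighting each model by its learning quality."""
    w = np.asarray(qualities, dtype=float)
    w = w / w.sum()  # assumed normalization of quality scores into weights
    return sum(wi * p for wi, p in zip(w, param_list))

def soften(logits, temperature=2.0):
    """Temperature-scaled softmax producing soft labels (standard
    knowledge-distillation formulation; temperature value is illustrative)."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, global_soft_labels, temperature=2.0):
    """Inter-group step (sketch): KL divergence from the global soft
    labels to the student's softened predictions."""
    p = np.asarray(global_soft_labels, dtype=float)
    q = soften(student_logits, temperature)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

For example, two local models with equal learning quality are simply averaged, and a group leader whose softened predictions already match the global soft labels incurs (near) zero distillation loss.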
