Abstract

With the continuous development of vehicular ad hoc network (VANET) security, using federated learning (FL) to deploy intrusion detection models in VANETs has attracted considerable attention. Compared to conventional centralized learning, FL keeps private training data local, thus protecting privacy. However, sensitive information about the training data can still be inferred from the model parameters shared in FL. Differential privacy (DP) is a sophisticated technique for mitigating such attacks. A key challenge in implementing DP in FL is that adding DP noise non-selectively can degrade model accuracy, while perturbing many parameters also increases privacy budget consumption and the communication cost of detection models. To address this challenge, we propose FFIDS, an FL algorithm that integrates model parameter pruning with differential privacy. It employs a parameter pruning technique based on the Fisher Information Matrix to reduce the privacy budget consumed per iteration while avoiding accuracy loss. Specifically, FFIDS evaluates parameter importance and prunes unimportant parameters to generate compact sub-models, while recording the position of each parameter in its sub-model. This not only shrinks the model to lower communication costs, but also keeps accuracy stable. DP noise is then added only to the sub-models. By not perturbing unimportant parameters, more budget can be reserved to protect important parameters over more iterations. Finally, the server can promptly recover the sub-models from the parameter position information and complete aggregation. Extensive experiments on two public datasets and two F2MD simulation datasets validate the utility and superior performance of the FFIDS algorithm.
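The client-side pipeline described above (importance scoring, pruning, position recording, selective perturbation) and the server-side recovery step can be sketched as follows. This is a minimal illustration under simplifying assumptions: it uses squared gradients as a diagonal approximation of the Fisher Information Matrix, a fixed keep ratio, and uncalibrated Gaussian noise; all function names are hypothetical and the noise calibration is not the paper's exact DP mechanism.

```python
import numpy as np

def fisher_prune_and_perturb(params, grads, keep_ratio=0.5,
                             noise_scale=0.1, rng=None):
    """FFIDS-style client step (sketch): score each parameter by a
    diagonal Fisher-information estimate (squared gradient), keep the
    top fraction, and add Gaussian noise only to the retained
    sub-model. Returns the perturbed sub-model and the recorded
    positions of its parameters."""
    rng = np.random.default_rng() if rng is None else rng
    fisher = grads ** 2                       # diagonal Fisher estimate
    k = max(1, int(keep_ratio * params.size))
    keep_idx = np.argsort(fisher)[-k:]        # positions of important params
    sub_model = params[keep_idx] + rng.normal(0.0, noise_scale, size=k)
    return sub_model, keep_idx

def server_recover(sub_model, keep_idx, model_size):
    """Server step (sketch): rebuild a full-size parameter vector from
    the sub-model using the recorded positions; pruned entries are
    filled with zeros before aggregation."""
    full = np.zeros(model_size)
    full[keep_idx] = sub_model
    return full
```

Because unimportant parameters are dropped before perturbation, the per-round noise is spent only on the retained coordinates, which is the mechanism by which FFIDS stretches the privacy budget across more iterations.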
