Abstract
Artificial Intelligence of Things (AIoT), the fusion of artificial intelligence (AI) and the Internet of Things (IoT), has become a new trend for realizing the intelligentization of Industry 4.0, and data privacy and security are key to its successful implementation. To enhance data privacy protection, federated learning has been introduced into AIoT, allowing participants to jointly train AI models without sharing their private data. However, in federated learning, malicious participants may upload malicious models by launching poisoning attacks, which jeopardize the convergence and accuracy of the global model. To solve this problem, we propose D2MIF, a malicious model detection mechanism based on the isolation forest (iForest), for federated learning-empowered AIoT. In D2MIF, an iForest is constructed to compute a malicious score for each model uploaded by the corresponding participant; models whose malicious scores exceed a threshold are then filtered out, and the threshold is dynamically adjusted using reinforcement learning (RL). Validation experiments are conducted on two public datasets, MNIST and Fashion-MNIST. The experimental results show that the proposed D2MIF can effectively detect malicious models and significantly improve the global model accuracy in federated learning-empowered AIoT.
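To make the detection step concrete, the sketch below illustrates the general pattern the abstract describes: score each participant's uploaded model with an isolation forest and drop updates whose malicious score exceeds a threshold before aggregation. This is a minimal illustration, not the paper's implementation: the feature representation (flattened update vectors), the function and parameter names, and the fixed threshold are all assumptions, whereas D2MIF adjusts the threshold dynamically with RL.

```python
# Hypothetical sketch of iForest-based malicious-model filtering,
# using scikit-learn's IsolationForest. The fixed threshold stands in
# for the RL-adjusted threshold described in the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_malicious_updates(updates, threshold=0.6):
    """Score each participant's update and keep only low-score ones.

    updates: list of 1-D numpy arrays, one flattened model update
             per participant (an assumed representation).
    """
    X = np.stack(updates)  # shape: (n_participants, n_params)
    iforest = IsolationForest(n_estimators=100, random_state=0).fit(X)
    # score_samples() returns higher values for more "normal" points,
    # so negate it to get a malicious score that grows with abnormality.
    malicious_scores = -iforest.score_samples(X)
    kept = [u for u, s in zip(updates, malicious_scores) if s <= threshold]
    return kept, malicious_scores

# Usage: nine benign updates plus one obvious outlier ("poisoned"),
# then plain FedAvg-style averaging over the retained updates.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.01, size=100) for _ in range(9)]
poisoned = [rng.normal(5.0, 0.01, size=100)]
kept, scores = filter_malicious_updates(benign + poisoned)
global_update = np.mean(kept, axis=0)
```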