Abstract

Federated learning has gained popularity because it enables collaborative training without sharing local data. Despite its advantages, federated learning requires sharing model parameters during aggregation, which poses security risks. In addition, existing secure federated learning frameworks cannot meet all the requirements of resource-constrained IoT devices under non-independent and identically distributed (non-IID) settings. This paper proposes a novel secure and robust federated learning framework (SRFL) built on trusted execution environments (TEEs). The framework provides security and robustness for federated learning on IoT devices under non-IID data by leveraging TEEs to safeguard sensitive model components from being leaked. We also introduce a shared representation training approach to improve accuracy and security under the non-IID setting. Furthermore, we propose a multi-model robust aggregation method based on membership degree: clients are categorized using membership degrees generated by soft clustering, which improves aggregation performance. We evaluate SRFL in a simulation environment and confirm that it improves accuracy by 5%–30% over FedAVG in the non-IID setting and protects the model against membership inference and Byzantine attacks. It also reduces the backdoor attack success rate by 4%–10% more than other robust aggregation algorithms.
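The abstract describes weighting clients during aggregation by membership degrees obtained from soft clustering. The paper's exact algorithm is not given here, so the sketch below is only an illustration of the general idea: fuzzy-c-means-style membership degrees are computed for each client update against two hypothetical cluster centroids (an honest majority and an outlier), and updates are averaged with weights equal to their membership in the majority cluster, which down-weights Byzantine contributions. All function names and the centroid choice are assumptions for illustration.

```python
# Illustrative sketch of membership-degree-weighted robust aggregation.
# NOT the paper's algorithm: centroids, the fuzziness parameter m, and
# the majority-cluster rule are all assumptions made for this example.

def distance(a, b):
    """Euclidean distance between two flat parameter vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def memberships(update, centroids, m=2.0):
    """Fuzzy c-means membership degrees of one update to each centroid."""
    d = [max(distance(update, c), 1e-12) for c in centroids]
    return [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                      for j in range(len(d)))
            for i in range(len(d))]

def robust_aggregate(updates, centroids):
    """Average client updates, weighted by membership in the majority cluster."""
    # The majority cluster is the centroid nearest to the most updates.
    counts = [0] * len(centroids)
    for u in updates:
        counts[min(range(len(centroids)),
                   key=lambda i: distance(u, centroids[i]))] += 1
    major = counts.index(max(counts))
    weights = [memberships(u, centroids)[major] for u in updates]
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[k] for w, u in zip(weights, updates)) / total
            for k in range(dim)]

# Three benign clients send updates near [1, 1]; one Byzantine client
# sends a large outlier, which receives a near-zero aggregation weight.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [10.0, -10.0]]
centroids = [[1.0, 1.0], [10.0, -10.0]]
agg = robust_aggregate(updates, centroids)
```

Under these toy assumptions the aggregate stays close to the benign mean of roughly `[1.0, 1.0]`, because the outlier's membership in the majority cluster is nearly zero.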
