Abstract

Federated Learning (FL) is a distributed machine learning paradigm designed to solve the isolated data island problem under privacy constraints. Recent works reveal that security problems still exist in FL, in which attackers can infer private data from gradients. In this paper, we propose a distributed FL framework built on a Trusted Execution Environment (TEE) to protect gradients from the hardware perspective. We use Intel Software Guard eXtensions (SGX) as an instance to implement the FL framework, which we call SGX-FL. First, to overcome the limited physical memory of SGX while preserving privacy, we leverage a gradient filtering mechanism that selects the "important" gradients, those carrying the most private information, and places them inside SGX. Second, to enhance the global adhesion of gradients so that the important gradients can be aggregated to the maximum extent, a grouping method assigns the most appropriate number of members to each group. Finally, to preserve the accuracy of the FL model, the secondary gradients of group members and the aggregated important gradients are uploaded to the server simultaneously, and the computation procedure is validated by SGX's integrity mechanism. Evaluation results show that the proposed SGX-FL reduces computation cost by a factor of 19 compared with existing approaches.
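As a rough illustration of the gradient filtering step described above, the sketch below selects a top-k fraction of gradients by magnitude as the "important" set. The abstract does not specify the selection criterion, so the magnitude-based rule, the `k_ratio` parameter, and the function name are assumptions for illustration only.

```python
import numpy as np

def filter_important_gradients(grads, k_ratio=0.1):
    """Mark the top-k fraction of gradients by magnitude as "important".

    Returns a boolean mask of the same shape as `grads`: True entries
    would be processed inside the SGX enclave, False entries are the
    "secondary" gradients uploaded to the server outside the enclave.
    (Hypothetical selection rule; the paper's criterion may differ.)
    """
    flat = np.abs(grads).ravel()
    k = max(1, int(k_ratio * flat.size))
    # Indices of the k largest-magnitude gradient entries.
    important_idx = np.argpartition(flat, -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[important_idx] = True
    return mask.reshape(grads.shape)

grads = np.array([[0.01, -0.9],
                  [0.50, 0.002]])
mask = filter_important_gradients(grads, k_ratio=0.5)
# mask marks the two largest-magnitude entries (-0.9 and 0.5)
```

Only the masked entries would then need to fit within SGX's constrained enclave memory, which is the motivation the abstract gives for filtering.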
