Abstract

In the field of distributed machine learning, federated learning (FL) serves as a highly effective framework for breaking down data silos and integrating data from multiple sources. However, the wider adoption of FL still faces significant challenges, particularly with respect to data security and information privacy. In recent years, numerous privacy-preserving FL schemes have been proposed, yet few of them protect the confidentiality of the training server's global model. In this paper, by combining a Trusted Execution Environment (TEE) with homomorphic encryption, we propose a TEE-assisted federated logistic regression training scheme with model confidentiality protection, named TFLR. Specifically, we first formalize a cryptography-TEE hybrid security model for the multi-party cooperative computation scenario. Subsequently, within this hybrid security model, we design a series of secure computation protocols with a semi-honest TEE to execute non-linear operations while keeping the global model parameters encrypted. Consequently, the training server's global model remains well protected. In addition, TFLR incorporates a double masking technique, further strengthening the privacy of data owners' local updates. A detailed security analysis shows that TFLR protects all sensitive information of both the data owners and the training server. Furthermore, we evaluate the performance of TFLR on real machine learning datasets, and the results demonstrate that TFLR is efficient.
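The abstract mentions a double masking technique for protecting data owners' local updates. Below is a minimal sketch of the general double-masking idea (pairwise masks that cancel when the server aggregates, plus a removable self mask), assuming a construction along the lines of standard secure aggregation; the exact protocol in TFLR, as well as all names, seed-agreement steps, and values below, are illustrative assumptions rather than the paper's actual design.

```python
# Minimal sketch of double masking for local updates (illustrative, not TFLR's exact protocol).
import numpy as np

def masked_update(client_id, update, pairwise_seeds, self_seed):
    """Add pairwise masks (which cancel across clients) and a self mask (removed later)."""
    masked = update.copy()
    for other_id, seed in pairwise_seeds.items():
        mask = np.random.default_rng(seed).standard_normal(update.shape)
        # The client with the smaller id adds the mask and the other subtracts it,
        # so the pairwise terms cancel when the server sums all masked updates.
        masked += mask if client_id < other_id else -mask
    masked += np.random.default_rng(self_seed).standard_normal(update.shape)
    return masked

# Toy run with three clients; seed agreement and self seeds are hypothetical stand-ins.
dim, clients = 4, [0, 1, 2]
updates = {c: np.ones(dim) * (c + 1) for c in clients}            # true local updates
pair_seed = lambda a, b: (min(a, b) * 31 + max(a, b)) % (2**32)   # illustrative shared seed
self_seeds = {c: 1000 + c for c in clients}

masked = {
    c: masked_update(c, updates[c],
                     {o: pair_seed(c, o) for o in clients if o != c},
                     self_seeds[c])
    for c in clients
}

# The server sums masked updates; pairwise masks cancel, and the self masks are then
# removed (in a real protocol, via seeds reconstructed through secret sharing).
agg = sum(masked.values())
for c in clients:
    agg -= np.random.default_rng(self_seeds[c]).standard_normal(dim)
print(np.allclose(agg, sum(updates.values())))  # True: only the aggregate is revealed
```

In this style of masking, an individual masked update reveals nothing on its own; only the sum over all participating clients recovers the true aggregate, which matches the abstract's goal of strengthening the privacy of local updates.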
