Abstract

Federated Learning (FL) allows multiple clients to collaboratively train machine learning models while keeping their local training data private. However, when the global model is produced during the aggregation process, a malicious FL server could derive clients’ local model weights. This threat cannot be completely eliminated even if model aggregation is performed inside the server’s Trusted Execution Environment (TEE), because memory access pattern attacks remain possible. To tackle this challenge, our paper focuses on the top-k model aggregation algorithm in FL and introduces a space-efficient, oblivious sparsified model aggregation algorithm named Sort-Then-Insert (STI), which removes the dependency of the memory access pattern on the model input and thus protects the confidentiality of clients’ models. Compared with Path ORAM, STI is over 100 times faster; compared with Olive, the state-of-the-art solution for the same problem, our theoretical analysis and experiments demonstrate that STI achieves comparable performance overhead (O(nk log²(nk)) + O(d log²(d)) for STI vs. O((nk+d) log²(nk+d)) for Olive) and reduced space overhead (max(O(nk), O(d)) for STI vs. O(nk+d) for Olive).
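To make the two-phase structure behind the name concrete, the following is a minimal, non-oblivious Python sketch of top-k sparsified aggregation performed in sort-then-insert order. The function names are illustrative only; the actual STI algorithm replaces the plain sort and the insertion scan with oblivious counterparts (hence the O(nk log²(nk)) and O(d log²(d)) terms) so that memory accesses do not depend on the model input.

```python
import heapq

def top_k_sparsify(weights, k):
    """Keep a client's k largest-magnitude coordinates as (index, value) pairs."""
    idx = heapq.nlargest(k, range(len(weights)), key=lambda i: abs(weights[i]))
    return [(i, weights[i]) for i in idx]

def sort_then_insert_aggregate(client_updates, d):
    """Aggregate n sparsified updates into a dense d-dimensional model."""
    n = len(client_updates)
    # Phase 1 ("sort"): order all n*k (index, value) pairs by coordinate index.
    # STI would use an oblivious sort here, costing O(nk log^2(nk)).
    pairs = sorted(p for update in client_updates for p in update)
    # Phase 2 ("insert"): scan the sorted pairs once, writing into a dense vector.
    # STI makes this insertion oblivious at an additional O(d log^2(d)) cost.
    agg = [0.0] * d
    for i, v in pairs:
        agg[i] += v / n  # plain federated averaging of the sparse updates
    return agg
```

Because the sorted pairs and the dense model never need to coexist in one combined buffer, the working space is bounded by the larger of the two, matching the max(O(nk), O(d)) space claim, whereas sorting pairs and model slots together requires O(nk+d).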
