Abstract

As the number of clients for federated learning (FL) has expanded to the billion level, a new research branch named secure federated submodel learning (SFSL) has emerged. In SFSL, mobile clients only download a tiny fraction of the coordinator's global model. However, SFSL provides few guarantees on convergence and accuracy performance, as the covered items may be highly biased. In this work, we formulate the problem of client selection as optimizing unbiased coverage of the item index set to enhance SFSL performance. We analyze the NP-hardness of this problem and propose a novel heuristic multi-group client selection framework that jointly optimizes index diversity and similarity. Specifically, heuristic exploration over random client groups is performed progressively to obtain an empirical approximate solution. Meanwhile, private set operations are used to preserve the privacy of participating clients. We implement the proposal by simulating a large-scale SFSL application in a lab environment and conduct evaluations on two real-world datasets. The results demonstrate the performance superiority (w.r.t. accuracy and convergence speed) of our selection algorithm over SFSL. The proposal is also shown to yield a significant computation advantage while achieving communication performance similar to SFSL.
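The multi-group heuristic described above can be illustrated with a minimal sketch: sample several random candidate groups, run a greedy max-coverage pick inside each, and keep the group whose selected clients cover the most item indices. All names (`client_items`, `select_clients`, the group/size parameters) are hypothetical, and the sketch omits the paper's similarity objective and private set operations; it only shows the coverage-driven selection idea.

```python
import random

def select_clients(client_items, k, num_groups=10, group_size=50, seed=0):
    """Illustrative heuristic: explore random client groups and, within each,
    greedily pick k clients maximizing covered item indices.

    client_items: dict mapping client id -> set of item indices it holds.
    Returns the best (chosen client ids, number of indices covered).
    """
    rng = random.Random(seed)
    ids = list(client_items)
    best, best_cov = None, -1
    for _ in range(num_groups):
        group = rng.sample(ids, min(group_size, len(ids)))
        chosen, covered = [], set()
        for _ in range(min(k, len(group))):
            # Pick the unchosen client adding the most uncovered indices;
            # already-chosen clients get key -1 so they are never re-picked.
            c = max(group, key=lambda i: -1 if i in chosen
                    else len(client_items[i] - covered))
            chosen.append(c)
            covered |= client_items[c]
        if len(covered) > best_cov:
            best, best_cov = chosen, len(covered)
    return best, best_cov
```

Greedy max-coverage gives the standard (1 - 1/e) approximation for the NP-hard coverage subproblem; restricting each greedy run to a small random group keeps per-round cost low, matching the paper's progressive exploration framing.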
