Abstract

Emerging machine learning (ML) technologies, combined with the increasing computational power of mobile devices, have led to the widespread adoption of ML-based applications. Unlike conventional model training, which collects all user data on centralized cloud servers, federated learning (FL) has recently drawn increasing research attention because it enables privacy-preserving model training. With FL, participating edge devices train their model copies locally over their siloed datasets and periodically synchronize the model parameters. However, model training is computationally intensive and easily drains the batteries of mobile devices. In addition, because the siloed datasets are unevenly distributed, the shared model may become biased. To address the efficiency and fairness concerns in a resource-constrained federated learning setting, in this paper we propose Eiffel, which judiciously selects mobile devices to participate in global model aggregation and adaptively adjusts the frequency of local and global model updates. Eiffel schedules and coordinates federated learning toward both resource efficiency and model fairness. We provide theoretical analyses of Eiffel from the perspectives of fairness and convergence. Extensive experiments with a wide variety of real-world datasets and models, both on a networked prototype system and in a larger-scale simulated environment, demonstrate that while maintaining similar accuracy, Eiffel reduces communication overhead by up to 6× and improves the fairness metric by up to 57% compared to state-of-the-art baselines.
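To make the training loop the abstract describes concrete, below is a minimal sketch of federated learning with client selection and periodic aggregation. It is a generic FedAvg-style illustration under assumed toy data and a placeholder random sampling policy; the actual device-selection and update-frequency rules of Eiffel are not specified in the abstract and are not reproduced here.

```python
# Minimal sketch of the FL loop: clients train locally on siloed data,
# then a server aggregates their parameters. The sampling rule and the
# fixed local-step count are placeholders, NOT Eiffel's actual policies.
import random
import numpy as np

def local_update(weights, data, lr=0.1, local_steps=5):
    """Run a few SGD steps on a toy least-squares objective."""
    w = weights.copy()
    for _ in range(local_steps):
        x, y = random.choice(data)
        grad = 2 * (w @ x - y) * x   # gradient of (w.x - y)^2
        w -= lr * grad
    return w

def aggregate(client_weights, sizes):
    """FedAvg-style aggregation, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, sizes))

# Toy non-IID silos: each client observes a slightly different linear task,
# mimicking the uneven data distribution the abstract mentions.
rng = np.random.default_rng(0)
clients = []
for shift in (0.5, 1.0, 1.5, 2.0):
    true_w = np.array([shift, -shift])
    clients.append([(x, true_w @ x) for x in rng.normal(size=(50, 2))])

global_w = np.zeros(2)
for rnd in range(20):
    # Placeholder: sample half the clients uniformly each round. A scheduler
    # like Eiffel would instead pick participants and adapt local_steps to
    # balance communication cost against fairness across silos.
    selected = random.sample(range(len(clients)), k=2)
    updates = [local_update(global_w, clients[i]) for i in selected]
    global_w = aggregate(updates, [len(clients[i]) for i in selected])

print("aggregated model:", global_w)
```

In this sketch, communication overhead scales with how often clients synchronize and how many participate per round, which is why adapting both knobs, as Eiffel does, directly targets the efficiency concern stated above.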
