Abstract
Many modern AI products require raw user data to train diverse machine learning models. With growing concerns about data privacy, federated learning, a decentralized learning framework, enables privacy-preserving model training by iteratively aggregating model updates from participants instead of aggregating raw data. Since all participants, i.e., mobile devices, must transfer their local model updates concurrently and iteratively over mobile edge networks, the network is easily overloaded, leading to a high risk of transmission failures. Although previous work on transmission protocols strives to avoid transmission collisions, the number of iterative concurrent transmissions must be fundamentally decreased. Since raw data are often generated unevenly among devices, devices holding a small proportion of the data can be safely excluded, as they have little effect on model convergence. To further guarantee model accuracy, we propose to select a subset of devices as participants such that a given proportion of the data remains involved, and correspondingly to minimize the risk of transmission failures during model updates. We then design a randomized algorithm (ranRFL) that chooses suitable participants using a series of carefully calculated probabilities, and prove that the result concentrates on its optimum with high probability. Extensive simulations show that, through careful participant selection, ranRFL decreases the maximal error rate of model updates by up to 38.3% compared with state-of-the-art schemes.
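The exact sampling probabilities of ranRFL are derived in the paper itself; as a minimal sketch of the underlying idea, one can draw devices at random with weights tied to their local data sizes until a target fraction of the total data is covered. The `coverage` threshold and the size-proportional weighting below are illustrative assumptions, not the authors' construction:

```python
import random

def select_participants(data_sizes, coverage=0.8, seed=None):
    """Randomly draw devices, weighted by local data size, until the
    selected devices jointly hold at least `coverage` of all data.
    The size-proportional weights are an illustrative stand-in for
    the carefully calculated probabilities used by ranRFL."""
    rng = random.Random(seed)
    total = sum(data_sizes)
    remaining = list(range(len(data_sizes)))
    selected, covered = [], 0
    while remaining and covered < coverage * total:
        weights = [data_sizes[i] for i in remaining]
        device = rng.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(device)
        selected.append(device)
        covered += data_sizes[device]
    return selected

# Ten devices with unevenly generated data: devices holding little
# data are rarely drawn, so fewer concurrent transmissions are needed.
sizes = [500, 480, 450, 400, 60, 50, 40, 30, 20, 10]
print(select_participants(sizes, coverage=0.8, seed=42))
```

Because the data are unevenly distributed, a small subset of data-rich devices typically reaches the coverage target, which is exactly what keeps the number of concurrent transmissions low.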
Highlights
Various mobile devices [1], like smartphones and portable tablets, continuously produce diverse user data during users' daily usage of applications [2], including ClickLogs [3], user trajectories from GPS, and so on
To decrease the high risk of transmission failures over shared mobile networks during model updates, we propose to select suitable devices as participants in federated learning, considering both the number of concurrent transmissions and the proportion of involved data
Extensive simulations show that the randomized algorithm for RFL (ranRFL) decreases the error rate of model updates by up to 38.3% compared with state-of-the-art schemes
Summary
Various mobile devices [1], like smartphones and portable tablets, continuously produce diverse user data during users' daily usage of applications [2], including ClickLogs [3], user trajectories from GPS, and so on. Some works have studied collision/error-based re-transmission mechanisms [14]–[16], as well as utilization-based transmission protocols [13], [17]–[20], to make best-effort data transfers over shared links or channels; nevertheless, when many devices transmit concurrently, either the shared links or the network devices become overloaded, resulting in a high risk of failures [21] during model updates. To decrease this high risk of transmission failures over shared mobile networks during model updates, we propose to select suitable devices as participants in federated learning, considering both the number of concurrent transmissions and the proportion of involved data.
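To see why reducing concurrent transmissions matters, consider a simplified slotted-ALOHA-style view (an illustrative assumption, not the network model used in the paper): a device's update survives a slot only if no other device transmits simultaneously, so the per-update success probability decays rapidly as the number of concurrent senders grows.

```python
# Simplified slotted-ALOHA-style view (illustrative assumption, not
# the paper's network model): with n devices each transmitting in a
# slot with probability p, a given device's update succeeds only when
# it transmits and nobody else does.
def success_probability(n, p=0.1):
    """Chance that one given device's update gets through a slot."""
    return p * (1 - p) ** (n - 1)

for n in (10, 50, 100):
    print(f"{n} concurrent devices -> success {success_probability(n):.4f}")
```

Even under this crude model, cutting the participant count from 100 to 10 raises the per-update success probability by roughly an order of magnitude, which motivates excluding devices that contribute little data.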