Abstract

Federated learning (FL) rests on the notion of training a global model in a decentralized manner. Under this setting, mobile devices perform computations on their local data before uploading the resulting updates to a central aggregator that improves the global model. However, a key challenge is to maintain communication efficiency (i.e., the number of communications per iteration) when participating clients implement uncoordinated computation strategies during the aggregation of model parameters. To tackle this difficulty, we formulate a utility maximization problem and propose a novel crowdsourcing framework that leverages FL across a number of participating clients holding local training data. We model the incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training the global model, where each side maximizes its own benefit. We formulate this interaction as a two-stage Stackelberg game and characterize the game's equilibria. Finally, we illustrate the efficacy of the proposed framework with simulation results, which show that the proposed mechanism outperforms a heuristic approach, with up to a 22% gain in the offered reward needed to attain a target level of accuracy.
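To make the two-stage interaction concrete, the sketch below shows how backward induction finds a Stackelberg equilibrium in a setting of this kind. It is a minimal illustration under assumed, simplified utility forms: the function names, the log-shaped computation cost, and all parameter values are our own illustrative choices, not the paper's actual formulation. In Stage II each client best-responds to the announced reward; in Stage I the platform anticipates those responses and searches for the reward that maximizes its own payoff.

```python
import numpy as np

# Assumed, simplified utilities for illustration only.
# Each client k chooses a local relative accuracy theta in (0, 1);
# a smaller theta means more local computation. The client's payoff is
# its share of the platform's offered reward R minus a computation cost
# that grows as theta -> 0 (a common log(1/theta) shape, assumed here).
def client_utility(theta, R, cost_coeff):
    return R * (1.0 - theta) - cost_coeff * np.log(1.0 / theta)

def best_response(R, cost_coeff, grid):
    # Stage II: the client plays its best response to the announced reward R.
    utils = client_utility(grid, R, cost_coeff)
    return grid[np.argmax(utils)]

def platform_utility(R, cost_coeffs, grid):
    # Stage I: the platform anticipates the clients' best responses; it
    # values higher accuracy (lower theta) net of the reward it pays out.
    thetas = np.array([best_response(R, c, grid) for c in cost_coeffs])
    return np.sum(1.0 - thetas) - R, thetas

grid = np.linspace(0.01, 0.99, 99)       # feasible local accuracy levels
cost_coeffs = [0.2, 0.5, 0.8]            # heterogeneous client cost parameters
rewards = np.linspace(0.1, 5.0, 50)      # candidate rewards for the platform

# Backward induction: pick the reward that maximizes the platform's payoff
# given each client's equilibrium (best-response) strategy.
best_R = max(rewards, key=lambda R: platform_utility(R, cost_coeffs, grid)[0])
print("equilibrium reward:", best_R)
print("client responses:", platform_utility(best_R, cost_coeffs, grid)[1])
```

With these assumed utilities the interior best response is theta* = cost_coeff / R, so a larger offered reward induces clients to target a higher local accuracy, which is the qualitative trade-off the game formulation captures.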
