Abstract

There are situations where data relevant to machine learning problems are distributed across multiple locations that cannot share the data for regulatory, competitive, or privacy reasons. Machine learning approaches that require data to be copied to a single location are hampered by the challenges of data sharing. Federated Learning (FL) is a promising approach to learn a joint model over all the available data across silos. In many cases, the sites participating in a federation have different data distributions and computational capabilities. In these heterogeneous environments, existing approaches exhibit poor performance: synchronous FL protocols are communication-efficient, but have slow learning convergence and high energy cost; conversely, asynchronous FL protocols have faster convergence and lower energy cost, but higher communication cost. In this work, we introduce a novel energy-efficient Semi-Synchronous Federated Learning protocol that mixes local models periodically, achieving minimal idle time and fast convergence. We show through extensive experiments over established benchmark datasets in the computer-vision domain, as well as in real-world biomedical settings, that our approach significantly outperforms previous work in data- and computationally heterogeneous environments.
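As a rough illustration of the semi-synchronous idea described above, the Python sketch below has every site train for the same wall-clock budget (so faster sites take more local steps instead of idling) and then mixes the resulting local models at each synchronization point. All names here (local_train, semi_sync_round) and the example-count weighting are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(model, time_budget_s, step_time_s, lr=0.01):
    # Take as many SGD steps as fit in the shared wall-clock budget,
    # so faster sites simply perform more local updates (no idle time).
    steps = max(1, int(time_budget_s / step_time_s))
    for _ in range(steps):
        grad = rng.normal(scale=0.01, size=model.shape)  # stand-in for a real gradient
        model = model - lr * grad
    return model, steps

def semi_sync_round(global_model, sites, time_budget_s):
    # One synchronization period: every site trains for the same wall-clock
    # budget, then the local models are mixed into a new global model.
    # Weighting each site by examples seen (n_i * steps_i) is an assumption
    # made for illustration, not necessarily the paper's exact mixing scheme.
    local_models, weights = [], []
    for n_examples, step_time_s in sites:
        model, steps = local_train(global_model.copy(), time_budget_s, step_time_s)
        local_models.append(model)
        weights.append(n_examples * steps)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * mi for wi, mi in zip(w, local_models))

# Example: three sites with different data sizes and per-step latencies.
global_model = np.zeros(10)
sites = [(5000, 0.02), (1000, 0.10), (20000, 0.05)]
for _ in range(3):  # three synchronization points
    global_model = semi_sync_round(global_model, sites, time_budget_s=1.0)
```

Because the synchronization point is defined by wall-clock time rather than a fixed number of local epochs, no site waits for a straggler, which is where the protocol's energy and idle-time savings come from.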
