Abstract

Federated learning has recently been proposed to allow many clients to collaboratively train a machine learning model in a privacy-preserving manner. However, it also amplifies the difficulty of designing a good neural network architecture, especially for heterogeneous mobile devices. To this end, we propose a novel neural architecture search (NAS) algorithm, FedNAS, which can automatically generate a set of optimal models under federated settings. The main idea is to decouple the two primary steps of the NAS process, i.e., model search and model training, and distribute them separately across the cloud and devices. FedNAS tackles the primary challenge of limited on-device computation and communication resources by exploiting a key opportunity: model candidates do not need to be fully re-trained during the architecture search. Building on this insight, it incorporates three key optimizations: training candidates in parallel on partial clients, dropping candidates with inferior performance early, and using dynamic round numbers. Evaluated on typical CNN architectures and large-scale datasets, FedNAS achieves model accuracy comparable to a state-of-the-art NAS algorithm that trains models with centralized data, while reducing the client cost by up to 200x compared to a straightforward design of federated NAS.
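To make the search loop described above concrete, the following is a minimal, hypothetical sketch of a cloud-side federated NAS driver that combines the three optimizations: partial-client candidate training, early dropping of inferior candidates, and a dynamic per-iteration round count. It is not the paper's actual implementation; all names (Candidate, sample_clients, federated_round, and the parameter values) are illustrative assumptions, and the client-side training and aggregation step is stubbed out.

import random
from dataclasses import dataclass

@dataclass
class Candidate:
    arch_id: int
    accuracy: float = 0.0  # proxy validation accuracy reported back by clients

def sample_clients(all_clients, fraction):
    """Pick a partial subset of clients for this round (optimization 1)."""
    k = max(1, int(len(all_clients) * fraction))
    return random.sample(all_clients, k)

def federated_round(candidate, clients):
    """Placeholder for one federated round: in a real system each client
    trains the candidate locally and the cloud aggregates the updates
    (e.g., FedAvg) before re-evaluating accuracy."""
    candidate.accuracy += random.uniform(0.0, 0.05) * len(clients)
    return candidate.accuracy

def fednas_search(num_candidates=8, all_clients=None, search_iters=4,
                  client_fraction=0.3, keep_ratio=0.5, base_rounds=1):
    all_clients = all_clients or [f"client_{i}" for i in range(100)]
    candidates = [Candidate(arch_id=i) for i in range(num_candidates)]

    for it in range(search_iters):
        # Dynamic round number: spend more rounds per candidate as the
        # surviving pool shrinks (optimization 3).
        rounds = base_rounds * (it + 1)
        for cand in candidates:
            clients = sample_clients(all_clients, client_fraction)
            for _ in range(rounds):
                federated_round(cand, clients)

        # Early-drop the inferior candidates before the next iteration
        # (optimization 2), so they stop consuming client resources.
        candidates.sort(key=lambda c: c.accuracy, reverse=True)
        candidates = candidates[:max(1, int(len(candidates) * keep_ratio))]

    return candidates[0]  # best surviving architecture

if __name__ == "__main__":
    best = fednas_search()
    print(f"Selected architecture {best.arch_id} (proxy accuracy {best.accuracy:.2f})")

In this sketch the early-drop step is what realizes the "insufficient re-training" insight: weak candidates are eliminated from partially trained proxy accuracies rather than after full convergence, which is where the client-cost savings would come from.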
