Abstract

Federated Learning (FL) has achieved great success in many intelligent applications of the Internet of Vehicles (IoV); however, the large number of vehicles and the increasing size of models pose challenges to FL-empowered connected vehicles. Federated Distillation (FD) has emerged as a novel paradigm that addresses communication bottlenecks by exchanging model outputs among devices rather than model parameters. In this paper, we investigate several key factors that affect the communication efficiency of FD, including communication frequency, soft-label quantization, and coding methods. Based on the findings of this analysis, we propose FedDQ, a communication-efficient federated distillation method. Specifically, we propose a controlled averaging algorithm based on control variates to address the drift problem arising from local updates. We then design a new quantization approach and coding method to reduce the overhead of both upstream and downstream communication. Extensive experiments on image classification tasks at different levels of data heterogeneity show that our method reduces the amount of communication required to reach a fixed performance target by around two to three orders of magnitude compared to benchmark methods, while achieving equivalent or higher classification accuracy.
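To make the soft-label quantization idea concrete, the following is a minimal, illustrative sketch of uniformly quantizing per-class soft labels to a few bits before transmission; it is not the FedDQ quantizer or coding scheme described in the paper, and the function names and bit width are hypothetical choices for illustration only.

```python
import numpy as np

def quantize_soft_labels(probs: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Uniformly quantize soft labels (probabilities in [0, 1]) to num_bits-bit
    integer levels, shrinking each value from 32 bits to num_bits for transmission."""
    levels = (1 << num_bits) - 1
    return np.rint(probs * levels).astype(np.uint8)

def dequantize_soft_labels(q: np.ndarray, num_bits: int = 4) -> np.ndarray:
    """Map quantized levels back to probabilities and renormalize each row to sum to 1."""
    levels = (1 << num_bits) - 1
    probs = q.astype(np.float32) / levels
    return probs / probs.sum(axis=1, keepdims=True)

# Example: a client's per-sample soft labels for a 10-class task (hypothetical data).
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10))
soft = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

q = quantize_soft_labels(soft, num_bits=4)           # 4 bits per class instead of 32
recovered = dequantize_soft_labels(q, num_bits=4)
print("max abs quantization error:", np.abs(soft - recovered).max())
```

Under this kind of scheme, the upstream payload per sample drops roughly in proportion to the bit width (here 4/32), and an entropy coder can further compress the resulting integer symbols; the accuracy cost depends on how sensitive distillation is to the quantization error.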
