Abstract

Mobile devices generate a tremendous amount of unique data and thus create countless opportunities for deep learning tasks. Due to data privacy concerns, it is often impractical to log all of this data to a central server for training a satisfactory model. In federated learning, participating devices collaboratively train a shared global model while keeping their data local. However, training deep neural networks (DNNs) with millions or billions of parameters on resource-constrained mobile devices in a federated manner is not trivial. We replace each fully connected (FC) layer with two low-rank projection matrices to compact the DNN model, and establish a global error function to recover the outputs of the compressed model. We then design a communication-efficient federated optimization algorithm to further reduce communication cost. Considering that heterogeneous devices may run different models at the same time, we devise three training patterns to integrate such devices. We conduct extensive experiments on both independent and identically distributed (IID) and non-IID data sets. The results demonstrate that the proposed framework significantly reduces the number of parameters and the communication cost while maintaining model performance.
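As a rough illustration of the low-rank compression idea described in the abstract (not the authors' actual method, whose factorization and error-recovery scheme are defined in the full paper), the sketch below replaces a single FC weight matrix W of size m x n with two projection matrices U (m x r) and V (r x n) via truncated SVD; all names, shapes, and the choice of SVD are assumptions made here for illustration only.

```python
import numpy as np

def factorize_fc_layer(W, rank):
    """Approximate an FC weight matrix W (m x n) with two low-rank
    projection matrices U (m x rank) and V (rank x n) so that W ~= U @ V.
    Truncated SVD is used purely as an illustrative factorization."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Hypothetical example: a 1024 x 512 FC layer compressed to rank 32.
m, n, r = 1024, 512, 32
W = np.random.randn(m, n).astype(np.float32)
U_r, V_r = factorize_fc_layer(W, r)

original_params = m * n            # 524,288 parameters
compressed_params = m * r + r * n  # 49,152 parameters (~10x fewer)
relative_error = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
print(original_params, compressed_params, relative_error)
```

Storing and transmitting U and V instead of W is what reduces both the model size and the per-round communication cost in a federated setting, at the price of an approximation error that the paper's global error function is designed to compensate for.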
