Abstract

Federated learning (FL) is a privacy-preserving collaborative learning framework for mobile computing in which multiple user devices jointly train a deep learning model without uploading their data to a centralized server. A key challenge in FL is reducing the substantial communication overhead incurred during training. Existing FL schemes mostly address this issue for single-task learning. However, each user generally has multiple related tasks on a mobile device, such as multi-content recommendation, and traditional FL schemes must train an individual model per task, which consumes substantial resources. In this work, we formulate an FL problem with multiple personalized tasks, which aims to minimize the communication cost of learning different personalized tasks on each device. To solve the formulated problem, we incorporate multi-task learning into FL, training a single model for multiple tasks concurrently, and propose an FL framework named FedMPT. FedMPT carefully adapts an efficient acceleration algorithm and a quantization-based compression strategy to achieve superior communication efficiency. We implement and evaluate FedMPT on two datasets, Multi-MNIST and CelebA, in the FL environment. Experimental results show that FedMPT significantly outperforms the traditional FL scheme in terms of communication cost and average accuracy.
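To make the setting concrete, the following is a minimal, hypothetical sketch of one client round combining multi-task learning with quantized uploads, in the spirit described above. It is not the paper's algorithm: the model structure (a hard-parameter-sharing trunk with per-task heads), the names `MultiTaskNet`, `local_update`, and `quantize_update`, and the simple uniform quantizer are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hypothetical multi-task model: shared trunk, one head per personalized task."""
    def __init__(self, in_dim=784, hidden=128, num_tasks=2, classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, classes) for _ in range(num_tasks))

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))

def local_update(model, task_loaders, epochs=1, lr=0.01):
    """Train all of a client's tasks concurrently; return the model-weight delta."""
    initial = {k: v.clone() for k, v in model.state_dict().items()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for task_id, loader in enumerate(task_loaders):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x, task_id), y).backward()
                opt.step()
    return {k: v - initial[k] for k, v in model.state_dict().items()}

def quantize_update(delta, bits=8):
    """Uniformly quantize each tensor of the update to shrink the upload."""
    quantized = {}
    for name, tensor in delta.items():
        scale = tensor.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
        q = torch.clamp(torch.round(tensor / scale), -127, 127).to(torch.int8)
        quantized[name] = (q, scale)  # server dequantizes as q * scale
    return quantized
```

In this sketch, a single multi-task model replaces the per-task models of a traditional FL scheme, and only an 8-bit quantized delta is uploaded each round, which is where the communication savings would come from.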
