Abstract

Distributed machine learning approaches such as split learning can train a model on data held by mobile devices while preserving privacy. However, training time becomes impractical when the number of devices is large, and model performance degrades greatly when the data are non-IID. To address these problems, an efficient parallel split learning algorithm is proposed. Specifically, the parallel algorithm uses a distillation loss function instead of parameter synchronization, which reduces training time without losing accuracy, and an incentive mechanism based on a Stackelberg game is designed to adapt to a training environment with non-IID mobile data. Experiments on the CIFAR-10 dataset demonstrate the superior performance of the proposed algorithm in terms of training time and model accuracy.
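As context for the distillation-based objective mentioned above, the sketch below shows a standard knowledge-distillation loss (Hinton-style) in PyTorch. The abstract does not specify the paper's exact formulation, how soft targets are exchanged in place of parameter synchronization, or the hyperparameter values, so the function name, temperature `T`, and weight `alpha` here are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of cross-entropy on the ground-truth labels and KL
    divergence between temperature-softened student/teacher outputs.
    Illustrative only; the paper's actual loss may differ."""
    # Hard-label term: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between softened distributions;
    # the T**2 factor keeps gradient magnitudes on a comparable scale.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)
    return alpha * hard + (1.0 - alpha) * soft

# Toy check with random logits for a batch of 8 over 10 classes.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```

In a parallel split-learning setting of the kind the abstract describes, a loss of this form would let each client branch learn from another model's soft predictions rather than from synchronized weights, which is what allows synchronization to be dropped without a loss in accuracy.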
