Abstract

Recent progress in deep learning has enabled collaborative edge training, which typically deploys identical neural network models on multiple devices and aggregates their parameter updates over distributed data. However, as more heterogeneous edge devices participate in practical training, identical model deployment across collaborating devices can no longer be guaranteed. On one hand, weak edge devices with limited computation resources may not keep up with the training progress of stronger ones, so appropriate customization of local model training is necessary to balance the collaboration. On the other hand, a particular edge device may only need a specific subset of learning tasks, in which case a globally identical model exceeds the practical local demand and incurs unnecessary computation cost. In this work, we therefore explore collaborative learning with heterogeneous convolutional neural networks (CNNs) to address these practical problems. Specifically, we propose a novel decentralized collaborative training method that decouples the target CNN model into independently trainable sub-models, each corresponding to a subset of learning tasks assigned to an edge device. After the sub-models are trained on the edge nodes, the parameters for the individual learning tasks are harvested from the local models on every edge device and ensembled back into a single global model. Experiments demonstrate that, for AlexNet and VGG on the CIFAR-10, CIFAR-100, and KWS datasets, our decentralized training method reduces the computation load by up to 11.8x while achieving the test accuracy of centralized server training.
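
To make the decouple-then-ensemble idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes each edge device trains a sub-model whose classifier head covers only its assigned subset of classes, and that after local training the rows of each local head are copied back into one global classifier. The shared-backbone simplification and all names here (SubModel, local_train, merge_heads, class_subset) are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

NUM_CLASSES = 10
FEAT_DIM = 64

class SubModel(nn.Module):
    """A small CNN whose head predicts only the classes in `class_subset`."""
    def __init__(self, class_subset):
        super().__init__()
        self.class_subset = class_subset
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, FEAT_DIM), nn.ReLU(),
        )
        self.head = nn.Linear(FEAT_DIM, len(class_subset))

    def forward(self, x):
        return self.head(self.features(x))

def local_train(model, x, y, steps=5):
    """Train one device's sub-model on its local data (hypothetical loop)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    # Remap global class labels to this device's local head indices.
    remap = {c: i for i, c in enumerate(model.class_subset)}
    y_local = torch.tensor([remap[int(c)] for c in y])
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y_local)
        loss.backward()
        opt.step()

def merge_heads(sub_models):
    """Harvest per-task head parameters and ensemble one global classifier."""
    global_head = nn.Linear(FEAT_DIM, NUM_CLASSES)
    with torch.no_grad():
        for m in sub_models:
            for local_idx, global_cls in enumerate(m.class_subset):
                global_head.weight[global_cls] = m.head.weight[local_idx]
                global_head.bias[global_cls] = m.head.bias[local_idx]
    return global_head

if __name__ == "__main__":
    # Two hypothetical edge devices, each responsible for half of the classes.
    devices = [SubModel([0, 1, 2, 3, 4]), SubModel([5, 6, 7, 8, 9])]
    for m in devices:
        x = torch.randn(32, 3, 32, 32)                    # stand-in local data
        y = torch.tensor(m.class_subset).repeat(32)[:32]  # labels from its subset
        local_train(m, x, y)
    head = merge_heads(devices)
    print(head.weight.shape)  # torch.Size([10, 64])

Note the label remapping in local_train: because each device's head has fewer outputs than the global model, its local loss is computed over sub-model indices, and merge_heads undoes the remapping when assembling the global classifier.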
