Abstract

As a promising method for training a central model on decentralized device data without compromising user privacy, federated learning (FL) is becoming increasingly popular in Internet-of-Things (IoT) design. However, because the limited computing and memory resources of devices restrict the capabilities of the deep learning models they can host, existing FL approaches for Artificial Intelligence of Things (AIoT) applications suffer from inaccurate prediction results. To address this problem, this article presents a collaborative Big.Little branch architecture that enables efficient FL for AIoT applications. Inspired by the architecture of BranchyNet, which has multiple prediction branches, our approach deploys deep neural network (DNN) models across both the cloud and AIoT devices. Our Big.Little branch model has two types of branches: the big branch is deployed on the cloud for strengthened prediction accuracy, while the little branches are sized to fit AIoT devices. When an AIoT device cannot make a high-confidence prediction using its local little branch, it resorts to the big branch for further inference. To increase both the prediction accuracy and the early-exit rate of the Big.Little branch model, we propose a two-stage training and co-inference scheme that takes the local characteristics of AIoT scenarios into account. Comprehensive experimental results obtained from a real AIoT environment demonstrate the efficiency and effectiveness of our approach in terms of prediction accuracy and average inference time.
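
To make the early-exit co-inference idea concrete, below is a minimal sketch of the device-side decision, assuming a softmax-confidence threshold and batch-size-one inference. The model definitions, layer sizes, and the 0.8 threshold are illustrative placeholders and not the paper's actual Big.Little configuration or two-stage training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LittleBranch(nn.Module):
    """Lightweight branch hosted on the AIoT device (illustrative layers only)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class BigBranch(nn.Module):
    """Deeper branch hosted on the cloud (illustrative layers only)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def co_infer(x, little, big, threshold=0.8):
    """Early exit on the device when the little branch is confident enough;
    otherwise defer the sample to the cloud-side big branch."""
    with torch.no_grad():
        probs = F.softmax(little(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:          # early exit on the device
            return pred.item(), "little"
        probs = F.softmax(big(x), dim=1)      # offload to the big branch
        return probs.argmax(dim=1).item(), "big"

if __name__ == "__main__":
    little, big = LittleBranch().eval(), BigBranch().eval()
    x = torch.randn(1, 3, 32, 32)             # one dummy input sample
    label, source = co_infer(x, little, big)
    print(f"prediction {label} from the {source} branch")
```

In a deployed system the big-branch call would be a network request to the cloud rather than a local forward pass; the threshold trades average inference time against how often the device can exit early.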
