Abstract

A mobile edge device commonly conducts local inference using a compact machine learning model, which achieves lower latency and better preserves data privacy compared with cloud-based inference. To work in a new environment, the compact model must be adapted to the target data from that environment to maintain high inference accuracy. However, applying domain adaptation directly to the compact model yields low inference accuracy. Hence, this paper develops a scheme called memory-efficient collaborative domain adaptation (MEC-DA) to boost the compact model's inference accuracy on the target data while preserving data privacy. MEC-DA first deploys a large model to the mobile edge devices, where domain adaptation adapts the large model to the target data. Since this process requires training the large model, it incurs high memory consumption; a new method called lite residual hypothesis transfer (LRHT) is therefore designed to make the domain adaptation memory efficient. The knowledge of the large model is then transferred to the compact model via knowledge distillation. To prevent the compact model from forgetting the knowledge of the source data, a collaborative knowledge distillation (Co-KD) method is developed that unifies the source data on the server and the target data on an edge device to update the compact model. MEC-DA protects data privacy via secure aggregation and handles user mobility via user selection. Extensive experiments on several object recognition tasks show that MEC-DA improves inference accuracy by up to 12.5% compared with state-of-the-art schemes.
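
As background, knowledge distillation trains the compact (student) model to reproduce the softened output distribution of the large (teacher) model. The following is a minimal, generic sketch of a standard distillation loss in the style of Hinton et al., not the paper's Co-KD method; the temperature T, the mixing weight alpha, and the PyTorch setting are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Illustrative generic distillation loss; not the paper's Co-KD method.
        # Soft targets: KL divergence between temperature-softened teacher and
        # student distributions, scaled by T^2 to keep gradient magnitudes
        # comparable across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: standard cross-entropy against ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

In Co-KD as described above, such a distillation objective would be evaluated on both the server's source data and an edge device's target data, and the resulting updates combined so that the compact model adapts to the target domain without forgetting the source knowledge.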
