Abstract
Deep neural networks (DNNs) have driven significant advances in computer vision and sensor-based smart sensing. DNNs achieve strong results on standard datasets and powerful servers; however, in real applications with domain-shifted data and resource-constrained environments, such as Internet-of-Things (IoT) devices at the edge, their accuracy and efficiency are likely to degrade. To this end, we develop the MobileDA framework, which learns transferable features while keeping the structure of the deep model simple. Our method allows a teacher network trained on the server to distill knowledge to a student network running on the edge device, which is achieved by a novel cross-domain distillation. Leveraging unlabeled data in the new environment, the student model adapts its feature learning to be domain invariant, and then serves as our target model running on the edge device. Our approach is evaluated on a challenging IoT-based WiFi gesture recognition scenario and on three classic visual adaptation benchmarks. The empirical studies corroborate the effectiveness of distillation for domain transfer, and the overall results show that our model achieves state-of-the-art performance using merely a simple network.
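The abstract does not spell out the cross-domain distillation objective; as a point of reference, the distillation component typically follows standard knowledge distillation, where the student matches the teacher's temperature-softened output distribution. A minimal NumPy sketch of such a soft-label loss (the function names and the temperature value are illustrative assumptions, not taken from the paper) might look like:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence KL(teacher || student) on softened outputs,
    scaled by T^2 as is conventional in knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return (T ** 2) * kl.mean()
```

In the cross-domain setting described above, this term would be computed on unlabeled target-domain samples, so the server-trained teacher supervises the edge-deployed student without requiring target labels.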