Abstract

Unsupervised domain adaptation (UDA), which transfers knowledge from a labeled source domain to an unlabeled target domain, has attracted tremendous attention in many machine learning applications. Recently, there have been attempts to apply domain adaptation to sensor time series data, for tasks such as human activity recognition and gesture recognition. However, existing methods suffer from drawbacks that hinder further performance improvement. They often require access to source data or source models during training, which is unavailable in some fields due to privacy concerns and storage limits. Typically, the source domains may only provide an application programming interface (API) for the target domain to call. On the other hand, current UDA methods have not considered the temporal consistency and low signal-to-noise ratio (SNR) of sensor time series. To address these challenges, this article presents a black-box domain adaptation framework for sensor time series data (B2TSDA). First, we propose a single-source/multisource teacher-student learning framework to distill knowledge from the source domains into a customized target model. Then we design a new temporal consistency loss that combines an adaptive mask method and a dynamic threshold method to maintain consistent temporal information and balance the learning difficulties of different classes. For multisource black-box domain adaptation, we further propose a Shapley-enhanced method to determine the contribution of each source domain. Experimental results on both single-source and multisource domain adaptation show that our framework outperforms other black-box UDA methods.
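To make the black-box setting concrete, the following is a minimal NumPy sketch of the two ingredients the abstract names: a teacher-student distillation loss that uses only the source API's output probabilities (no weights or features), and a per-class dynamic confidence threshold for pseudo-label selection. The exact thresholding rule and all function names here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_probs):
    """KL(teacher || student): the standard black-box distillation objective,
    computable from the source API's predicted probabilities alone."""
    p = np.clip(teacher_probs, 1e-12, 1.0)
    q = np.clip(softmax(student_logits), 1e-12, 1.0)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def dynamic_class_thresholds(teacher_probs, base=0.9):
    """Per-class thresholds scaled by how often each class is predicted,
    so rarer (harder) classes get a lower bar -- an assumed, simplified
    stand-in for the paper's dynamic threshold method."""
    preds = teacher_probs.argmax(axis=-1)
    counts = np.bincount(preds, minlength=teacher_probs.shape[-1]).astype(float)
    freq = counts / max(counts.max(), 1.0)
    return base * freq

def select_pseudo_labels(teacher_probs, base=0.9):
    """Keep only samples whose teacher confidence clears its class threshold."""
    thr = dynamic_class_thresholds(teacher_probs, base)
    conf = teacher_probs.max(axis=-1)
    preds = teacher_probs.argmax(axis=-1)
    return preds, conf >= thr[preds]
```

Because rare classes receive lower thresholds, the selected pseudo-label set stays class-balanced rather than being dominated by easy, frequent classes, which is the balancing effect the abstract attributes to the dynamic threshold.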
