Abstract

Unsupervised Time Series Domain Adaptation (UTSDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Most existing UTSDA approaches learn a domain-invariant feature space by reducing the gap between domains. However, such single-task representation learning methods have limited expressive capability and overlook the distinct season-related and trend-related domain-invariant mechanisms shared across domains. To address this, we depart from existing methods and analyze UTSDA theoretically from the perspective of causal inference. This analysis provides a principled foundation for identifying and modeling these consistent domain-invariant mechanisms. Building on it, we propose MDLR, a multi-task disentangled learning framework for UTSDA. MDLR employs a dual-tower architecture in which a trend feature extractor (TFE) and a season feature extractor (SFE) capture trend-related and season-related information, respectively, so that domain-invariant features at different scales are better represented. In addition, MDLR is trained on two tasks, a label classifier and a domain classifier, enabling iterative training of the whole model. Experiments on three datasets, UCIHAR, WISDM, and HHAR_SA, together with visualization results, demonstrate the effectiveness of the proposed approach. The source code for MDLR is publicly available at https://github.com/MoranCoder95/MDLR/.
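To make the dual-tower, multi-task design concrete, the following is a minimal PyTorch-style sketch of such an architecture. It is an illustrative assumption rather than the authors' released implementation: the module names (TrendFeatureExtractor, SeasonFeatureExtractor, MDLRSketch), layer choices, and sizes are hypothetical, and only the overall structure (two feature towers whose outputs feed a label head and a domain head) follows the abstract.

```python
# Hypothetical sketch of a dual-tower, multi-task model in the spirit of MDLR.
# All module names, kernel sizes, and dimensions are illustrative assumptions,
# not the implementation from https://github.com/MoranCoder95/MDLR/.
import torch
import torch.nn as nn


class TrendFeatureExtractor(nn.Module):
    """Assumed trend tower: wide-kernel 1D convolutions for slow-varying structure."""
    def __init__(self, in_channels, feat_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)


class SeasonFeatureExtractor(nn.Module):
    """Assumed season tower: narrow-kernel convolutions for periodic, fast-varying structure."""
    def __init__(self, in_channels, feat_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)


class MDLRSketch(nn.Module):
    """Dual-tower disentangled encoder with a label head and a domain head."""
    def __init__(self, in_channels, feat_dim, num_classes):
        super().__init__()
        self.tfe = TrendFeatureExtractor(in_channels, feat_dim)
        self.sfe = SeasonFeatureExtractor(in_channels, feat_dim)
        self.label_classifier = nn.Linear(2 * feat_dim, num_classes)
        self.domain_classifier = nn.Linear(2 * feat_dim, 2)  # source vs. target

    def forward(self, x):
        # Concatenate trend and season features before the two task heads.
        z = torch.cat([self.tfe(x), self.sfe(x)], dim=1)
        return self.label_classifier(z), self.domain_classifier(z)


# Usage: a labeled source batch drives the label loss, while batches from both
# domains drive the domain loss; the two objectives are optimized iteratively.
model = MDLRSketch(in_channels=9, feat_dim=128, num_classes=6)  # UCIHAR-like shapes (assumed)
x = torch.randn(8, 9, 128)
class_logits, domain_logits = model(x)
print(class_logits.shape, domain_logits.shape)  # torch.Size([8, 6]) torch.Size([8, 2])
```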
