Unsupervised domain adaptation (UDA) addresses the challenge of transferring knowledge from a labeled source domain to an unlabeled target domain. This task is particularly critical for time series data, which is characterized by unique temporal dynamics. However, existing methods often fail to capture these temporal dependencies, leading to residual domain discrepancies and loss of semantic information. In this study, we propose a novel framework for the unsupervised domain adaptation of time series (UDATS) that integrates Multimodal Contrastive Adaptation (MCA) and Prototypical Domain Alignment (PDA). MCA leverages image encoding techniques and prompt learning to capture complex temporal patterns while preserving semantic information. PDA constructs multimodal prototypes, combining visual and textual features to align target domain samples accurately. Our framework demonstrates superior performance across diverse application domains, including human activity recognition, mortality prediction, and fault detection. Experiments show that our method effectively addresses domain discrepancies while preserving essential semantic content, outperforming state-of-the-art models.
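The prototypical-alignment idea described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function names, the use of simple feature concatenation to fuse visual and textual embeddings, and nearest-prototype pseudo-labeling via cosine similarity are all assumptions made for clarity.

```python
import numpy as np

def build_prototypes(visual_feats, text_feats, labels, num_classes):
    """Fuse visual and textual features (here: concatenation, an
    illustrative choice) and average per class to form one
    multimodal prototype per labeled source-domain class."""
    feats = np.concatenate([visual_feats, text_feats], axis=1)
    protos = np.stack([feats[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    # L2-normalize so that alignment reduces to cosine similarity.
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def align_targets(target_visual, target_text, prototypes):
    """Assign each unlabeled target sample the class of its most
    similar multimodal prototype (cosine similarity)."""
    feats = np.concatenate([target_visual, target_text], axis=1)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return (feats @ prototypes.T).argmax(axis=1)
```

In practice, such pseudo-labels would feed back into training to pull target samples toward their class prototypes; the sketch only shows the assignment step.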