Unsupervised domain adaptation methods aim to facilitate learning tasks in unlabeled target domains by leveraging labeled information from related source domains. Recently, prompt-tuning has emerged as a powerful instrument that uses templates to reformulate input examples into equivalent cloze-style phrases. However, two major challenges remain for domain adaptation: (1) existing prompt-tuning methods rely only on the general knowledge distributed in upstream pre-trained language models to alleviate the domain discrepancy, and how to incorporate domain-specific features of the source and target domains into the prompt-tuning model remains under-explored; (2) in prompt-tuning, hand-crafted templates are time-consuming and labor-intensive, while automatic prompt generation methods fail to achieve satisfactory performance. To address these issues, we propose an innovative Soft Prompt-tuning method for Unsupervised Domain Adaptation via Self-Supervision, which combines two novel ideas. First, instead of only stimulating the knowledge distributed in the pre-trained model, we further employ a hierarchically clustered optimization strategy in a self-supervised manner to retrieve knowledge for verbalizer construction in prompt-tuning. Second, we construct prompts with the specially designed verbalizer to facilitate the transfer of learned representations across domains, accounting for both automatic template generation and cross-domain classification performance. Extensive experimental results demonstrate that our method outperforms state-of-the-art baselines that utilize external open knowledge, while requiring much less computation time.
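To make the cloze-style setup concrete, the sketch below shows soft prompt-tuning with a frozen masked language model and a verbalizer that maps the [MASK] prediction to class scores. The backbone name, label words, template, and prompt length are illustrative assumptions; the clustering-based, self-supervised verbalizer construction described in the abstract is not reproduced here, and fixed label words stand in for it.

```python
# Minimal sketch (not the authors' released code) of cloze-style soft prompt-tuning
# with a verbalizer. Backbone, label words, and prompt length are assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-uncased"                 # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL)
for p in mlm.parameters():                  # freeze the pre-trained LM; only the
    p.requires_grad = False                 # soft prompt would be tuned

hidden = mlm.config.hidden_size
n_prompt = 20                               # number of soft prompt tokens (assumed)
soft_prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)

# Verbalizer: class -> label words. In the paper these would be retrieved by
# hierarchically clustered, self-supervised optimization over source/target
# features; fixed words are used here purely for illustration.
verbalizer = {0: ["bad", "terrible"], 1: ["good", "great"]}
label_word_ids = [
    [tokenizer.convert_tokens_to_ids(w) for w in words]
    for words in verbalizer.values()
]

def class_logits(text: str) -> torch.Tensor:
    """Score classes via the MLM's prediction at the [MASK] slot of a cloze template."""
    template = f"{text} It was {tokenizer.mask_token}."
    enc = tokenizer(template, return_tensors="pt")
    tok_emb = mlm.get_input_embeddings()(enc["input_ids"])           # (1, L, H)
    # Prepend the learnable soft prompt to the token embeddings.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_emb], dim=1)
    attn = torch.cat(
        [torch.ones(1, n_prompt, dtype=enc["attention_mask"].dtype),
         enc["attention_mask"]], dim=1)
    logits = mlm(inputs_embeds=inputs_embeds, attention_mask=attn).logits
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0] + n_prompt
    mask_logits = logits[0, mask_pos]                                # (vocab,)
    # Aggregate the label-word logits of each class (mean over its word set).
    return torch.stack([mask_logits[ids].mean() for ids in label_word_ids])

print(class_logits("The movie was a waste of time."))
```

In this sketch only `soft_prompt` carries gradients, so training on source-domain labels (and any self-supervised target-domain objective) would update the prompt while leaving the pre-trained model untouched.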