Abstract

Adapting pre-trained language models (PrLMs) (e.g., BERT) to new domains has gained much attention recently. Instead of fine-tuning PrLMs as done in most previous work, we investigate how to adapt the features of PrLMs to new domains without fine-tuning. Specifically, we explore unsupervised domain adaptation (UDA): with the features from PrLMs, we adapt models trained with labeled data from the source domain to the unlabeled target domain. Self-training is widely used for UDA; it predicts pseudo labels on the target domain data for training. However, the predicted pseudo labels inevitably contain noise, which negatively affects training a robust model. To improve the robustness of self-training, we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and features from the same class are clustered more tightly. We further extend CFd to a cross-language setting, in which language discrepancy is studied. Experiments on two monolingual and multilingual Amazon review datasets show that CFd consistently improves the performance of self-training in both cross-domain and cross-language settings.
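
Self-training as used here can be summarized as a short loop: a classifier trained on labeled source data assigns pseudo labels to unlabeled target examples, and only confident predictions are added to the training set for the next round. The sketch below is a minimal PyTorch illustration of that loop, not the authors' implementation; the names (`model`, `select_pseudo_labels`) and the plain confidence-threshold filter are assumptions for exposition.

```python
# Minimal self-training sketch (assumed setup): a classifier trained on labeled
# source data pseudo-labels unlabeled target data; only confident predictions
# are kept for the next training round. All names here are hypothetical
# placeholders, not the paper's code.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pseudo_labels(model, target_features, threshold=0.9):
    """Keep target examples whose predicted class probability exceeds a threshold."""
    model.eval()
    probs = F.softmax(model(target_features), dim=-1)   # (N, num_classes)
    conf, labels = probs.max(dim=-1)
    keep = conf >= threshold
    return target_features[keep], labels[keep]

def self_training_round(model, optimizer, source_x, source_y, target_x, threshold=0.9):
    """One round: train on source labels plus confident target pseudo labels."""
    pseudo_x, pseudo_y = select_pseudo_labels(model, target_x, threshold)
    x = torch.cat([source_x, pseudo_x], dim=0)
    y = torch.cat([source_y, pseudo_y], dim=0)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item(), pseudo_x.size(0)
```

Because the pseudo-labeled set is refreshed from the model's own predictions each round, early mistakes can propagate, which is exactly the noise problem the abstract describes.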

Highlights

  • Pre-trained language models (PrLMs) such as BERT (Devlin et al., 2019) and its variants (Liu et al., 2019c; Yang et al., 2019) have shown significant success on various downstream NLP tasks

  • We study unsupervised domain adaptation (UDA) of pre-trained language models (PrLMs), in which we adapt models trained with labeled source data to the unlabeled target domain based on the features from PrLMs

  • The features from PrLMs have been proven to be highly discriminative for downstream tasks, so we propose to distill these features into a feature adaptation module (FAM) to make FAM capable of extracting discriminative features (§4.2.1); an illustrative sketch follows this list
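
One illustrative reading of the distillation idea in the last highlight (and of the class-aware clustering mentioned in the abstract) is sketched below. It assumes frozen PrLM features as the distillation target, a small trainable feature adaptation module (FAM), and a simple centroid-based clustering term with a hypothetical weight `lambda_c`; the paper's exact losses may differ.

```python
# Illustrative sketch of class-aware feature self-distillation (not the paper's
# exact objective): a feature adaptation module (FAM) is trained to (i) mimic
# frozen PrLM features and (ii) pull features of the same class together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdaptationModule(nn.Module):
    """Small trainable module that maps PrLM features to adapted features."""
    def __init__(self, dim=768, hidden=768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

def distillation_loss(fam_out, prlm_features):
    # Self-distillation: FAM output should stay close to the (frozen) PrLM features.
    return F.mse_loss(fam_out, prlm_features.detach())

def class_clustering_loss(fam_out, labels):
    # Pull each feature toward the centroid of its own class, so that features
    # from the same class become more tightly clustered.
    loss = 0.0
    for c in labels.unique():
        feats = fam_out[labels == c]
        centroid = feats.mean(dim=0, keepdim=True)
        loss = loss + ((feats - centroid) ** 2).sum(dim=-1).mean()
    return loss / labels.unique().numel()

# Combined objective (lambda_c is a hypothetical trade-off hyperparameter):
# total = distillation_loss(fam(h_prlm), h_prlm) + lambda_c * class_clustering_loss(fam(h_prlm), y)
```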


Summary

Introduction

Pre-trained language models (PrLMs) such as BERT (Devlin et al., 2019) and its variants (Liu et al., 2019c; Yang et al., 2019) have shown significant success on various downstream NLP tasks. However, these deep neural networks are sensitive to shifts across domain distributions (Quiñonero-Candela et al., 2009), and their effectiveness is much weakened in such scenarios. Self-training, which predicts pseudo labels on the target domain data for training, is a common remedy, but the predicted pseudo labels inevitably contain noise. Methods like ensemble learning (Zou et al., 2019; Ge et al., 2020; Saito et al., 2017), which adopt multiple models to jointly make decisions on pseudo-label selection, have been introduced to reduce this noise. Though these methods can substantially reduce wrong predictions on the target domain, noisy labels remain in the pseudo-label set and negatively affect training a robust model, since deep neural networks with their high capacity can fit corrupted labels (Arpit et al., 2017).
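
Ensemble-based pseudo-label selection of the kind cited above typically keeps only target examples on which several independently trained models agree and are jointly confident. The following is a minimal sketch under an assumed interface (each model maps features to class logits); it is not the implementation of any of the cited works.

```python
# Sketch of agreement-based pseudo-label selection with an ensemble (assumed
# interface: each model maps features to class logits). Only examples on which
# all ensemble members agree, with high mean confidence, are pseudo-labeled.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_pseudo_labels(models, target_x, threshold=0.9):
    probs = torch.stack([F.softmax(m(target_x), dim=-1) for m in models])  # (M, N, C)
    preds = probs.argmax(dim=-1)                                           # (M, N)
    agree = (preds == preds[0]).all(dim=0)                                 # all members agree
    confident = probs.mean(dim=0).max(dim=-1).values >= threshold          # mean confidence
    keep = agree & confident
    return target_x[keep], preds[0][keep]
```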
