Abstract

Most text classification methods achieve strong performance by relying on large-scale annotated data and pre-trained language models. In practice, however, labeled data is often scarce, and pre-trained language models are difficult to deploy because of their high computational cost and slow inference. In this paper, we propose cross-domain knowledge distillation, in which the teacher and student tasks belong to different domains. It not only acquires knowledge from multiple teachers but also accelerates inference and reduces model size. Specifically, we train pre-trained language models on factual knowledge obtained by aligning Wikipedia text with Wikidata triplets and fine-tune them as teacher models. We then apply heterogeneous multi-teacher knowledge distillation to transfer knowledge from the multiple teacher models to the student model; a multi-teacher knowledge vote distills the knowledge most relevant to the target domain. Moreover, we introduce a teacher assistant to help distill large pre-trained language models. Finally, we reduce the discrepancy between the source and target domains through multi-source domain adaptation to address the domain shift problem. Experiments on multiple public datasets demonstrate that our method achieves competitive performance with fewer parameters and less inference time.
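The abstract does not specify the exact distillation objective, but the multi-teacher "vote" can be illustrated with a minimal PyTorch sketch in which the student matches a weighted average of the teachers' softened predictions. The weighting scheme, temperature, and mixing coefficient alpha below are illustrative assumptions, not the paper's actual settings.

import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, teacher_weights,
                          labels, temperature=4.0, alpha=0.5):
    """Hypothetical multi-teacher distillation loss.

    The teacher "vote" is modelled as a weighted average of the teachers'
    softened output distributions; `teacher_weights` should sum to 1.
    """
    # Softened student distribution (log-probabilities for the KL term).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # Weighted vote over the teachers' softened distributions.
    vote = torch.zeros_like(student_log_probs)
    for weight, teacher_logits in zip(teacher_weights, teacher_logits_list):
        vote = vote + weight * F.softmax(teacher_logits / temperature, dim=-1)

    # KL divergence between the student and the aggregated teacher vote,
    # scaled by T^2 as in standard knowledge distillation.
    kd_loss = F.kl_div(student_log_probs, vote, reduction="batchmean") * temperature ** 2

    # Ordinary cross-entropy on the (scarce) labelled target-domain data.
    ce_loss = F.cross_entropy(student_logits, labels)

    return alpha * kd_loss + (1.0 - alpha) * ce_loss

# Example with two hypothetical teachers on a 4-class task:
student_logits = torch.randn(8, 4)
teacher_logits = [torch.randn(8, 4), torch.randn(8, 4)]
labels = torch.randint(0, 4, (8,))
loss = multi_teacher_kd_loss(student_logits, teacher_logits, [0.6, 0.4], labels)

The T^2 factor keeps the gradient magnitude of the soft-target term comparable to that of the hard-label term, following the standard knowledge distillation formulation of Hinton et al.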
