Abstract

Pre-trained language models have achieved huge success on a wide range of NLP tasks. However, contextual representations from pre-trained models contain entangled semantic and syntactic information, and therefore cannot be directly used to derive useful semantic sentence embeddings for some tasks. Paraphrase pairs offer an effective way of learning the distinction between semantics and syntax, as they naturally share semantics and often vary in syntax. In this work, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre-trained language models. ParaBART is trained to perform syntax-guided paraphrasing, based on a source sentence that shares semantics with the target paraphrase, and a parse tree that specifies the target syntax. In this way, ParaBART learns disentangled semantic and syntactic representations from their respective inputs with separate encoders. Experiments in English show that ParaBART outperforms state-of-the-art sentence embedding models on unsupervised semantic similarity tasks. Additionally, we show that our approach can effectively remove syntactic information from semantic sentence embeddings, leading to better robustness against syntactic variation on downstream semantic tasks.
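The sketch below illustrates, in simplified PyTorch, the dual-encoder design described above: a semantic encoder reads the source sentence, a separate syntax encoder reads the linearized target parse, and a decoder attends to both to generate the paraphrase. All module names, layer sizes, and the mean-pooling scheme are assumptions made for illustration; the actual model builds on pre-trained BART and is not reproduced here.

```python
import torch
import torch.nn as nn


class DualEncoderParaphraser(nn.Module):
    """Illustrative ParaBART-style model: separate semantic and syntax encoders
    feeding a decoder that generates the target paraphrase (a sketch, not the
    paper's implementation)."""

    def __init__(self, vocab_size, syntax_vocab_size, d_model=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)          # source / target tokens
        self.syn_emb = nn.Embedding(syntax_vocab_size, d_model)   # parse-tree symbols
        self.semantic_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=3)
        self.syntax_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=1)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=3)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, parse_tokens, tgt_tokens):
        # Encode semantics (source sentence) and syntax (linearized target parse)
        # separately, then let the decoder attend to both memories.
        # Positional encodings and attention masks are omitted for brevity.
        sem_memory = self.semantic_encoder(self.tok_emb(src_tokens))
        syn_memory = self.syntax_encoder(self.syn_emb(parse_tokens))
        memory = torch.cat([sem_memory, syn_memory], dim=1)
        hidden = self.decoder(self.tok_emb(tgt_tokens), memory)
        return self.lm_head(hidden)  # logits over the vocabulary

    def sentence_embedding(self, src_tokens):
        # Mean-pool the semantic encoder outputs as the sentence embedding;
        # the pooling choice is an assumption made for this sketch.
        return self.semantic_encoder(self.tok_emb(src_tokens)).mean(dim=1)
```

Keeping the two encoders separate is what allows the semantic encoder's pooled output to serve as a sentence embedding that carries little of the target syntax, since syntactic information needed for generation flows through the syntax encoder instead.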

Highlights

  • Recent years have seen pre-trained language models achieve huge success across a wide range of NLP tasks (Devlin et al., 2019; Lewis et al., 2020)

  • The results suggest that the semantic sentence embeddings learned by ParaBART contain less syntactic information (see the probing sketch after this list)

  • We present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings from pre-trained language models
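The following is a minimal sketch of a syntactic probing setup of the kind that supports the claim above (the classifier, labels, and data here are assumptions, not the paper's exact protocol): a simple classifier is trained to predict a syntactic property from frozen sentence embeddings, and lower probe accuracy indicates that less syntactic information is recoverable from them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def probe_syntax(train_embeddings, train_labels, test_embeddings, test_labels):
    """Fit a linear probe on frozen embeddings and report its test accuracy."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(train_embeddings, train_labels)
    predictions = probe.predict(test_embeddings)
    return accuracy_score(test_labels, predictions)


# Hypothetical usage with random stand-in data; a real experiment would use
# embeddings from the model and syntactic labels such as parse-tree depth.
rng = np.random.default_rng(0)
emb_train, emb_test = rng.normal(size=(800, 768)), rng.normal(size=(200, 768))
y_train, y_test = rng.integers(0, 5, 800), rng.integers(0, 5, 200)
print(f"probe accuracy: {probe_syntax(emb_train, y_train, emb_test, y_test):.3f}")
```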

Summary

Related Work

Various sentence embedding models have been proposed in recent years. Most of these models utilize supervision from parallel data (Wieting and Gimpel, 2018; Artetxe and Schwenk, 2019b; Wieting et al., 2019, 2020), natural language inference data (Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019), or a combination of both (Subramanian et al., 2018). Many efforts towards controlled text generation have focused on learning disentangled sentence representations (Hu et al., 2017; Fu et al., 2018; John et al., 2019).

Proposed Model – ParaBART
Experiments
Syntactic Probing
Robustness Against Syntactic Variation
Conclusion
Appendix A: Implementation Details
