Abstract

Sentence representation is one of the most fundamental research topics in natural language processing (NLP), as its quality directly affects performance on various downstream tasks. Recent studies on sentence representation have established state-of-the-art (SOTA) performance on semantic representation tasks. However, the embeddings produced by those approaches exhibit unsatisfactory transferability when applied to specific applications, and few studies have examined the transferability of semantic sentence embeddings. In this paper, we first explore the transferability characteristics of sentence embeddings and then present BlendCSE, a new sentence embedding model targeting both rich semantics and transferability. BlendCSE blends three recent NLP learning methodologies: continued training on masked language modeling (MLM), contrastive learning (CL) with data augmentations (DA), and semantic supervised learning. These objectives aim to capture token/word-level information, diversified linguistic properties, and sentence semantics, respectively. Empirical studies demonstrate that BlendCSE captures semantics comparably well on STS tasks, yet surpasses existing methods on various transfer tasks, yielding even stronger transferability on document-level applications. Ablation studies verify that the three learning objectives synergize effectively to capture semantics and transferability.
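To make the blended objective concrete, the sketch below shows one plausible way the three losses could be combined into a single training loss. This is a minimal illustration, not the paper's implementation: the weighting hyperparameters (lambda_mlm, lambda_cl, lambda_sup), the InfoNCE formulation for both contrastive terms, and the use of entailment pairs as supervised positives are all assumptions for exposition.

```python
# Hypothetical sketch of a blended training objective combining MLM,
# contrastive learning on augmented views, and semantic supervised
# learning. Loss weights and pooling choices are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.05):
    """InfoNCE loss: row i of z1 and row i of z2 form a positive pair;
    all other rows in the batch serve as in-batch negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature                      # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)

def blended_loss(mlm_loss, emb_aug1, emb_aug2, emb_premise, emb_entail,
                 lambda_mlm=0.1, lambda_cl=1.0, lambda_sup=1.0):
    """Combine the three objectives into one scalar training loss.

    mlm_loss     : masked-language-modeling loss (token/word-level signal)
    emb_aug1/2   : sentence embeddings of two augmented views (CL with DA)
    emb_premise/ : embeddings of supervised semantic pairs, e.g. NLI
    emb_entail     entailment pairs treated as positives
    """
    cl_loss = info_nce(emb_aug1, emb_aug2)             # augmented views of the same sentence
    sup_loss = info_nce(emb_premise, emb_entail)       # supervised semantic positives
    return lambda_mlm * mlm_loss + lambda_cl * cl_loss + lambda_sup * sup_loss
```

Under this reading, the MLM term preserves token-level information, the contrastive term over augmented views encourages robustness to diverse linguistic surface forms, and the supervised term anchors the embedding space to sentence-level semantics.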
