Abstract

Sentence embedding is an influential research topic in natural language processing (NLP). Generating sentence vectors that reflect the intrinsic meaning of sentences is crucial for improving performance in various NLP tasks. Therefore, numerous supervised and unsupervised sentence-representation approaches have been proposed since the advent of the distributed representation of words. These approaches have been evaluated on semantic textual similarity (STS) tasks designed to measure the degree of semantic information preservation; neural network-based supervised embedding models typically deliver state-of-the-art performance. However, these models are limited in that they have numerous learnable parameters and thus require large amounts of specific types of labeled training data. Pretrained language model-based approaches, which have become a predominant trend in the NLP field, alleviate this issue to some extent; however, collecting sufficient labeled data for the fine-tuning process is still necessary. Herein, we propose an efficient approach that learns a transition matrix that tunes a sentence embedding vector to capture the latent semantic meaning. Our proposed method has two practical advantages: (1) it can be applied to any sentence embedding method, and (2) it can deliver robust performance in STS tasks with only a few training examples.
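
The following is a minimal sketch of the idea described above, assuming the transition matrix is a linear map learned over pre-computed sentence embeddings from an arbitrary encoder, trained so that cosine similarity of the transformed vectors matches gold STS scores. The function name, optimizer, and loss are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: learn a transition matrix W that maps fixed sentence
# embeddings into a space where cosine similarity better matches STS labels.
import torch
import torch.nn as nn

def train_transition_matrix(emb_a, emb_b, gold_scores, epochs=100, lr=1e-3):
    """emb_a, emb_b: (N, d) tensors of embeddings for paired sentences.
    gold_scores: (N,) tensor of similarity labels rescaled to [0, 1]."""
    d = emb_a.shape[1]
    W = nn.Linear(d, d, bias=False)           # the transition matrix
    opt = torch.optim.Adam(W.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        sim = torch.cosine_similarity(W(emb_a), W(emb_b), dim=1)
        loss = loss_fn(sim, gold_scores)      # fit similarity to the labels
        loss.backward()
        opt.step()
    return W.weight.detach()                  # d x d transition matrix

# Usage (illustrative): W = train_transition_matrix(emb_a, emb_b, scores)
# Tuned vectors for any new sentence embedding e: e @ W.T
```

Because only the d x d matrix is trained while the underlying encoder stays frozen, a handful of labeled pairs can suffice, which is consistent with the few-example setting the abstract describes.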
