Abstract

Despite their success in a variety of NLP tasks, pre-trained language models, due to their heavy reliance on compositionality, fail to effectively capture the meanings of multiword expressions (MWEs), especially idioms. Datasets and methods to improve the representation of MWEs are therefore urgently needed. Existing datasets are limited to providing the degree of idiomaticity of expressions, along with the literal and, where applicable, (a single) non-literal interpretation of MWEs. This work presents a novel dataset of naturally occurring sentences containing MWEs manually classified into a fine-grained set of meanings, spanning both English and Portuguese. We use this dataset in two tasks designed to test i) a language model's ability to detect idiom usage, and ii) the effectiveness of a language model in generating representations of sentences containing idioms. Our experiments demonstrate that, on the task of detecting idiomatic usage, these models perform reasonably well in the one-shot and few-shot scenarios, but that there is significant scope for improvement in the zero-shot scenario. On the task of representing idiomaticity, we find that pre-training is not always effective, while fine-tuning could provide a sample-efficient method of learning representations of sentences containing MWEs.

Highlights

  • Phrase representations are explicitly designed to be compositional, both in non-contextual (Mitchell and Lapata, 2010; Mikolov et al., 2013b) and in contextual embedding models.

  • Our aim was to investigate the performance of state-of-the-art transformer-based pre-trained language models on these tasks, and how their performance varied with different input features (e.g. the MWE itself, context sentences), problem setups, and training regimes.

  • In the idiomaticity-representation task, E→c denotes the example E with its multiword expression (MWE) replaced by the paraphrase of the correct meaning, and E→i the example with the MWE replaced by the paraphrase of an incorrect meaning. The expected similarities are sim(E, E→c) = 1 and sim(E, E→i) = sim(E→c, E→i). In the detection task, the expected label is 1 when the target sentence is followed by the correct paraphrase and 0 otherwise.
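The expected-similarity criterion can be sketched with cosine similarity over sentence embeddings. The vectors below are illustrative stand-ins, not output of the paper's models; in practice E, E→c, and E→i would be encoded by a pre-trained sentence encoder (e.g. mean-pooled transformer outputs):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for sentence embeddings of:
#   E   — the original sentence containing the MWE,
#   E_c — E with the MWE replaced by the paraphrase of its correct meaning,
#   E_i — E with the MWE replaced by the paraphrase of an incorrect meaning.
# These vectors are purely illustrative (hypothetical values).
E   = np.array([0.9, 0.1, 0.2])
E_c = np.array([0.9, 0.1, 0.2])   # semantically identical to E in the ideal case
E_i = np.array([0.1, 0.8, 0.3])

# Ideal behaviour: sim(E, E_c) = 1, while the incorrect paraphrase is
# equally dissimilar from both, i.e. sim(E, E_i) = sim(E_c, E_i).
print(round(cosine(E, E_c), 3))                        # 1.0
print(np.isclose(cosine(E, E_i), cosine(E_c, E_i)))    # True
```

A model that represents the idiom well should therefore place E close to its correct paraphrase and treat the incorrect paraphrase symmetrically with respect to both.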


Summary

Introduction

Phrase representations are explicitly designed to be compositional, both in non-contextual (Mitchell and Lapata, 2010; Mikolov et al., 2013b) and in contextual embedding models. Pre-trained language models in particular exploit compositionality at both the word and sub-word levels (Devlin et al., 2019) to reduce the size of their vocabulary, which makes representing idiomatic phrases challenging. The effective representation of idiomatic MWEs is critical for them to be correctly interpreted in downstream tasks. Such an improvement will benefit both classification-based problems (e.g. sentiment analysis) and sequence-to-sequence tasks (e.g. machine translation). To this end, we present a dataset consisting of naturally occurring sentences containing potentially idiomatic MWEs, and two tasks aimed at evaluating language models' ability to effectively detect and represent idiomaticity. The primary contributions of this work are: 1. A novel dataset consisting of:
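The sub-word compositionality described above can be illustrated with a toy greedy longest-match-first tokenizer in the WordPiece style (Devlin et al., 2019). The vocabulary and function below are hypothetical illustrations, not the paper's or any library's actual implementation; the point is that an idiom has no single vocabulary entry, so its representation must be composed from pieces:

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first sub-word tokenization, in the style of
    WordPiece. Non-initial pieces carry the '##' continuation prefix."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        cur = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:           # no piece matched: the word is unknown
            return ["[UNK]"]
        tokens.append(cur)
        start = end
    return tokens

# Tiny illustrative vocabulary: the idiom "wet blanket" has no entry of
# its own, so each word is broken into independently learned pieces.
vocab = {"wet", "blank", "##et"}
print(wordpiece_tokenize("blanket", vocab))  # ['blank', '##et']
```

Because the model only ever sees the pieces, any non-compositional (idiomatic) meaning of the whole phrase has to be recovered from context rather than read off the vocabulary, which is exactly the difficulty the tasks in this work probe.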

