Abstract

Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion. Moreover, when combined with regular learning from examples, this idea yields impressive few-shot results for a wide range of text classification tasks. It is also a promising direction to improve data efficiency in generative settings, but there are several challenges to using a combination of task descriptions and example-based learning for text generation. In particular, it is crucial to find task descriptions that are easy to understand for the pretrained model and to ensure that it actually makes good use of them; furthermore, effective measures against overfitting have to be implemented. In this paper, we show how these challenges can be tackled: We introduce GenPET, a method for text generation that is based on pattern-exploiting training, a recent approach for combining textual instructions with supervised learning that only works for classification tasks. On several summarization and headline generation datasets, GenPET gives consistent improvements over strong baselines in few-shot settings.

Highlights

  • Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion

  • We show how these challenges can be tackled: we introduce GenPET, a method for text generation that is based on pattern-exploiting training, a recent approach for combining textual instructions with supervised learning that only works for classification tasks

  • We evaluate our approach on a diverse set of six English headline generation and text summarization tasks in both zero-shot and few-shot settings and show that PEGASUS trained with GenPET clearly outperforms regular finetuning


Summary

Pattern-Exploiting Training

Pattern-Exploiting Training (PET; Schick and Schütze, 2021a) is a finetuning method for text classification tasks. Let M be a masked language model, V its vocabulary of tokens, and __ ∈ V the mask token; we denote the set of all token sequences as V*. Given an input sequence z ∈ V* that contains exactly one mask token, let pM(t | z) denote the probability that M assigns to t ∈ V at the masked position in z.

We introduce GenPET, our method for finetuning language models with instructions for text generation. Notation: let P be a pattern, x ∈ X and y ∈ Y input and output text sequences, and z = P(x) the result of applying P to x, i.e., a text sequence containing a single mask token. The central quantity is the probability that M assigns to the remaining sequence yk:n when the prefix y1:k−1 has already been processed by the decoder:

pM(yk:n | z, y1:k−1) = ∏i=k…n pM(yi | z, y1:i−1)    (3)
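
As a concrete illustration of the quantity in Eq. (3), the sketch below shows how the probability of an output sequence y given a pattern-applied input z = P(x) can be scored with a pretrained encoder-decoder model via HuggingFace Transformers and PEGASUS (the model mentioned in the highlights). This is a minimal example under stated assumptions, not the authors' implementation: the pattern string, model name, and example texts are purely illustrative.

import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-large"  # assumption: any pretrained seq2seq LM would do
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
model.eval()

def apply_pattern(x: str) -> str:
    # Hypothetical pattern P(x); the actual instructions are defined in the paper.
    return x + " Summary:"

def sequence_log_prob(x: str, y: str) -> float:
    # log pM(y | P(x)) = sum over i of log pM(y_i | P(x), y_1:i-1)
    enc = tokenizer(apply_pattern(x), return_tensors="pt", truncation=True)
    labels = tokenizer(y, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        # With labels supplied, the decoder is teacher-forced on y_1:i-1
        # when predicting y_i, so logits[0, i] scores the i-th output token.
        logits = model(**enc, labels=labels).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()

print(sequence_log_prob("The company reported record quarterly revenue on Tuesday.",
                        "Record revenue reported."))

Summing only token_log_probs[0, k−1:] instead of the full sequence yields the prefix-conditioned probability pM(yk:n | z, y1:k−1) from Eq. (3).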
