Abstract

Transformer architectures rely on explicit position encodings to preserve a notion of word order. In this paper, we argue that existing work does not fully utilize position information. For example, the initial proposal of a sinusoid embedding is fixed and not learnable. We first review absolute position embeddings and existing methods for relative position embeddings. We then propose new techniques that encourage increased interaction between query, key and relative position embeddings in the self-attention mechanism. Our most promising approach is a generalization of the absolute position embedding, improving results on SQuAD1.1 compared to previous position embedding approaches. In addition, we address the inductive property of whether a position embedding can be robust enough to handle long sequences. We demonstrate empirically that our relative position embedding method is reasonably generalized and robust from the inductive perspective. Finally, we show that our proposed method can be adopted as a near drop-in replacement for improving the accuracy of large models with a small computational budget.
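
The methods proposed in the paper build on the relative position term of Shaw et al. (2018), in which a learnable embedding indexed by the clipped distance j − i enters the query–key interaction of self-attention. The sketch below illustrates that baseline mechanism only, not the authors' exact methods 1–4; the module and hyperparameter names (d_model, max_rel_dist) are illustrative.

```python
# A minimal sketch of Shaw-style relative position embeddings in self-attention
# (single head, no masking). This illustrates the baseline the paper reviews,
# not the authors' proposed methods; names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPosSelfAttention(nn.Module):
    def __init__(self, d_model=64, max_rel_dist=16):
        super().__init__()
        self.d = d_model
        self.k = max_rel_dist
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One learnable vector per clipped relative distance in [-k, k].
        self.rel_emb = nn.Embedding(2 * max_rel_dist + 1, d_model)

    def forward(self, x):                              # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        pos = torch.arange(x.size(1), device=x.device)
        # Relative distance j - i, clipped to [-k, k] and shifted to [0, 2k].
        rel = (pos[None, :] - pos[:, None]).clamp(-self.k, self.k) + self.k
        a = self.rel_emb(rel)                          # (seq_len, seq_len, d_model)
        # Content-content term plus content-position term.
        scores = torch.matmul(q, k.transpose(-1, -2))
        scores = scores + torch.einsum('bid,ijd->bij', q, a)
        scores = scores / self.d ** 0.5
        return torch.matmul(F.softmax(scores, dim=-1), v)

out = RelPosSelfAttention()(torch.randn(2, 10, 64))    # -> shape (2, 10, 64)
```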

Highlights

  • The introduction of BERT (Devlin et al., 2018) has led to new state-of-the-art results on various downstream tasks such as question answering and passage ranking

  • BERT is non-recurrent and based on self-attention; in order to model the dependency between elements at different positions in the sequence, BERT relies on position embeddings

  • We review the absolute position embedding from Devlin et al. (2018), sketched below, and the relative position embeddings in Shaw et al. (2018) and Dai et al. (2019)
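
As context for the review mentioned in the last highlight, BERT's absolute position embedding simply adds a learned vector per absolute position to the token embedding before the first self-attention layer. Below is a minimal sketch of that input layer, assuming BERT-base-like sizes and omitting segment embeddings, layer normalization, and dropout.

```python
# A minimal sketch of a BERT-style input layer with learned absolute position
# embeddings (segment embeddings, layer norm, and dropout omitted for brevity).
import torch
import torch.nn as nn

class AbsolutePositionEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, max_len=512, d_model=768):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)  # one learned vector per position

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return self.tok(token_ids) + self.pos(positions)[None, :, :]

emb = AbsolutePositionEmbedding()
x = emb(torch.randint(0, 30522, (2, 16)))          # -> shape (2, 16, 768)
```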


Summary

Introduction

The introduction of BERT (Devlin et al., 2018) has led to new state-of-the-art results on various downstream tasks such as question answering and passage ranking. Recent work has suggested removing the next sentence prediction (NSP) loss and training solely on individual chunks of text (Liu et al., 2019a). In this setup, the notion of absolute position can be arbitrary, depending on where a chunk starts. What really matters is the relative position, or distance, between two tokens ti and tj, which is j − i. This observation motivated relative position representations, proposed in Shaw et al. (2018) and Huang et al. (2018) in the context of encoder-decoder machine translation and music generation, respectively. We review the absolute position embedding from Devlin et al. (2018) and the relative position embeddings in Shaw et al. (2018) and Dai et al. (2019).
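
To make the chunking argument concrete, the small snippet below (illustrative values only, not taken from the paper) shows that shifting every absolute position by a chunk's start offset leaves the pairwise distances j − i unchanged, which is why a relative encoding is unaffected by where a chunk begins.

```python
# Illustration only: absolute positions shift with the chunk start,
# but the relative distance j - i between two tokens does not.
def relative_distance_matrix(positions):
    """Matrix whose (i, j) entry is the distance j - i."""
    return [[j - i for j in positions] for i in positions]

chunk_a = [0, 1, 2, 3]          # chunk beginning at absolute position 0
chunk_b = [100, 101, 102, 103]  # same tokens, chunk beginning at position 100

assert relative_distance_matrix(chunk_a) == relative_distance_matrix(chunk_b)
```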

Related Work
Position Embeddings
Self-Attention review
Absolute position embedding in BERT
Shaw’s relative position embedding
XLNet’s relative position embedding
Proposed position embeddings
Relative position embedding method 1
Relative position embedding method 2
Relative position embedding method 3
Relative position embedding method 4
Complexity Analysis
Experiments
Models evaluation on SQuAD dataset
Model evaluation on GLUE datasets
Models with various k
Relative position embeddings for large BERT models
Relative Position Visualization
Conclusion
