Abstract

Smooth Embedding and Word Sampling Research Based on Transformer Pointer Generation Network

Highlights

  • Textual reasoning and automatic summarisation aim to produce a short sentence that captures the overall meaning of a long text

  • Since 2017, the Transformer model has been applied to a variety of Natural Language Processing (NLP) tasks

  • A review of the whole Word Sampling process shows that sensitivity to noise increases as high-frequency words are suppressed (illustrated in the sketch after these highlights)

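Suppressing high-frequency words is commonly done by sampling tokens with a frequency-dependent keep probability. The sketch below uses the word2vec-style subsampling formula as an illustrative assumption; the threshold value and toy corpus are made up for the demonstration and are not taken from this work, whose sampling scheme may differ.

    # Minimal sketch of frequency-based word sampling (word2vec-style subsampling).
    # Assumption: threshold and corpus are illustrative, not the paper's settings.
    import math
    import random
    from collections import Counter

    def keep_probability(word, counts, total, threshold=0.05):
        """Keep probability: rare words are kept, frequent words are down-sampled,
        so high-frequency tokens such as 'the' are suppressed."""
        freq = counts[word] / total
        return min(1.0, math.sqrt(threshold / freq) + threshold / freq)

    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    counts = Counter(corpus)
    total = len(corpus)

    sampled = [w for w in corpus
               if random.random() < keep_probability(w, counts, total)]
    print(sampled)  # 'the' is dropped more often than the rarer words

The threshold is set high here only because the toy corpus is tiny; with a realistic corpus a much smaller threshold (e.g. 1e-3 to 1e-5) is typical.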

Summary

Introduction

Textual reasoning and automatic summarisation aim to produce a short sentence that captures the overall meaning of a long text. Extractive and abstractive approaches are the two main methods for textual reasoning and summarisation tasks. The extractive method cannot understand the text and generate ideas the way humans do [1]; the abstractive method is better suited to understanding and reasoning tasks. Since 2017, the Transformer model has been applied to a variety of Natural Language Processing (NLP) tasks, and it achieves better performance than Seq2Seq models in some generation tasks [2]. Nallapati et al. combined an encoder-decoder structure with an attention mechanism to address summarisation tasks and obtained good results in 2015 [7]. The graph-based attention mechanism proposed by Tan and Wan improves the generalisation of the model [8]. Nallapati et al. also used a copying (replication) mechanism [7] to address out-of-vocabulary (OOV) problems [10].
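As a rough illustration of how a copying mechanism lets a summariser emit OOV source words, the sketch below mixes a vocabulary (generation) distribution with an attention-based copy distribution via a generation probability p_gen, in the spirit of pointer-generator networks. The function name, tensor shapes, and toy values are assumptions for illustration, not the implementation used in this work.

    # Sketch of a pointer-generator style copy mechanism. Assumes a precomputed
    # vocabulary distribution, attention weights over the source, and p_gen;
    # all names, shapes, and values below are illustrative only.
    import torch

    def final_distribution(vocab_dist, attn_weights, src_ids, p_gen,
                           extended_vocab_size):
        """Mix generation and copy distributions over an extended vocabulary.

        vocab_dist:   (batch, vocab_size)  softmax over the fixed vocabulary
        attn_weights: (batch, src_len)     attention over source positions
        src_ids:      (batch, src_len)     source ids in the extended vocabulary
                                           (OOV words get ids >= vocab_size)
        p_gen:        (batch, 1)           probability of generating vs. copying
        """
        batch, vocab_size = vocab_dist.shape
        # Generation part, padded out to the extended vocabulary.
        dist = torch.zeros(batch, extended_vocab_size)
        dist[:, :vocab_size] = p_gen * vocab_dist
        # Copy part: scatter attention mass onto source token ids, so OOV source
        # words receive probability even though they are outside the vocabulary.
        dist.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_weights)
        return dist

    # Toy usage: vocabulary of 5 words, one OOV source word with temporary id 5.
    vocab_dist = torch.softmax(torch.randn(1, 5), dim=-1)
    attn = torch.softmax(torch.randn(1, 3), dim=-1)
    src_ids = torch.tensor([[2, 5, 0]])
    p_gen = torch.tensor([[0.6]])
    print(final_distribution(vocab_dist, attn, src_ids, p_gen,
                             extended_vocab_size=6))

Because the copy distribution is indexed by source positions rather than vocabulary entries, any word that appears in the input can be produced in the summary, which is what resolves the OOV problem.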

