Abstract
Question Answering demands a deep understanding of the semantic relations among question, answer, and context. Multi-Task Learning (MTL) and Meta-Learning with deep neural networks have recently shown impressive performance on many Natural Language Processing (NLP) tasks, particularly when training data are scarce. However, little work has been done on a general NLP architecture that spans many NLP tasks. In this paper, we present a model that generalizes to ten different NLP tasks. We demonstrate that a multi-pointer-generator decoder and a pre-trained language model are key to its success, and we surpass all previous state-of-the-art baselines by 74 decaScore, more than a 12% absolute improvement across all of the datasets.
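As a rough illustration of the multi-pointer-generator idea credited in the abstract (this is a minimal sketch, not the authors' implementation, and all tensor names and shapes here are assumptions), the decoder's output distribution can be formed as a gated mixture of a vocabulary softmax and copy distributions over the context and question tokens:

```python
# Minimal sketch of a multi-pointer-generator output distribution (illustrative only).
import torch
import torch.nn.functional as F


def multi_pointer_generator(vocab_logits, ctx_attn, q_attn, ctx_ids, q_ids, gates):
    """Mix generation and copying into one distribution over the vocabulary.

    vocab_logits: (batch, vocab)    decoder scores over the output vocabulary
    ctx_attn:     (batch, ctx_len)  attention weights over context tokens (sum to 1)
    q_attn:       (batch, q_len)    attention weights over question tokens (sum to 1)
    ctx_ids:      (batch, ctx_len)  vocabulary ids of the context tokens
    q_ids:        (batch, q_len)    vocabulary ids of the question tokens
    gates:        (batch, 3)        scores for [generate, copy-context, copy-question]
    """
    g = F.softmax(gates, dim=-1)               # normalized mixture weights
    p_vocab = F.softmax(vocab_logits, dim=-1)  # generation distribution

    # Scatter the copy attention weights back onto vocabulary positions.
    p_copy_ctx = torch.zeros_like(p_vocab).scatter_add_(1, ctx_ids, ctx_attn)
    p_copy_q = torch.zeros_like(p_vocab).scatter_add_(1, q_ids, q_attn)

    return (g[:, 0:1] * p_vocab
            + g[:, 1:2] * p_copy_ctx
            + g[:, 2:3] * p_copy_q)


# Tiny usage example with random tensors.
batch, vocab, ctx_len, q_len = 2, 50, 7, 5
dist = multi_pointer_generator(
    torch.randn(batch, vocab),
    F.softmax(torch.randn(batch, ctx_len), dim=-1),
    F.softmax(torch.randn(batch, q_len), dim=-1),
    torch.randint(0, vocab, (batch, ctx_len)),
    torch.randint(0, vocab, (batch, q_len)),
    torch.randn(batch, 3),
)
assert torch.allclose(dist.sum(-1), torch.ones(batch))  # a valid probability distribution
```

Because each component distribution sums to one and the gates are softmax-normalized, the mixture is itself a valid distribution; how the gates and attention weights are computed is model-specific and is not shown here.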