Abstract

The machine reading comprehension (MRC) task requires a model to answer questions based on a piece of context. Over the past few years, increasingly powerful models have been proposed based on various deep learning techniques. MRC models based on deep learning are powerful and effective; however, most of them focus on changing the neural network structure. Apart from improvements to deep learning architectures, word embeddings are also essential elements of question answering systems and should not be neglected. Even a small improvement in word representation can lead to substantial performance differences in question answering tasks. The proposed approach comprises two modules that specialize the semantic representation of words and then pipe it into MRC models. Fundamentally, pre-trained vectors are retrofitted beforehand using semantic lexicons (PPDB, WordNet, and FrameNet). Our experiments on the Stanford Question Answering Dataset (SQuAD) reveal that integrating either a single lexicon or combined lexicon knowledge yields improvements over using pre-trained embeddings alone.
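The abstract does not spell out the retrofitting procedure, but the standard formulation for refining pre-trained vectors with semantic lexicons (retrofitting, in the style of Faruqui et al., 2015) is a simple iterative averaging scheme: each word vector is repeatedly pulled toward its lexicon neighbors while staying anchored to its original pre-trained value. The sketch below is a minimal illustration under that assumption; the function name `retrofit` and the uniform weights `alpha` and `beta` are illustrative choices, not the paper's exact method.

```python
import numpy as np

def retrofit(embeddings, lexicon, iterations=10, alpha=1.0, beta=1.0):
    """Nudge pre-trained vectors toward their semantic-lexicon neighbors.

    embeddings: dict mapping word -> np.ndarray (pre-trained vectors)
    lexicon:    dict mapping word -> list of related words, e.g. edges
                drawn from PPDB, WordNet, or FrameNet
    """
    new_vecs = {w: v.copy() for w, v in embeddings.items()}
    vocab = set(embeddings)
    for _ in range(iterations):
        for word in vocab:
            neighbors = [n for n in lexicon.get(word, []) if n in vocab]
            if not neighbors:
                continue  # words absent from the lexicon keep their vectors
            # Closed-form coordinate update: weighted average of the original
            # vector (weight alpha) and the current neighbor vectors (beta each).
            total = alpha * embeddings[word] + beta * sum(new_vecs[n] for n in neighbors)
            new_vecs[word] = total / (alpha + beta * len(neighbors))
    return new_vecs
```

In typical use, the pre-trained vectors (e.g. GloVe) are loaded into a dict, the lexicon is built from synonym or paraphrase pairs, and around ten iterations suffice for the update to converge; the retrofitted vectors then replace the originals in the MRC model's embedding layer.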
