Abstract

Word embeddings, real-valued word representations that capture lexical semantics, play a crucial role in machine reading comprehension, because the first step of such tasks is embedding the question and the passage. Word2Vec is one of the most frequently used models, but several popular competitors have been proposed in recent years, including GloVe and fastText. However, the question of which word embedding model actually performs best and is most suitable across different reading comprehension tasks remains unanswered to date. In this paper we perform the first extrinsic empirical evaluation of three word embeddings across four types of tasks: Multiple Choice, Cloze, Answer Extraction, and Conversation. The experiments show that GloVe and fastText have their own strengths in different types of tasks: accuracy on the Multiple Choice task improves significantly when leveraging GloVe, fastText is slightly more suitable for the Answer Extraction task, and GloVe performs similarly to fastText on the Cloze and Conversation tasks. Finally, we find that Word2Vec is outperformed by both GloVe and fastText on all tasks.
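As context for the embedding step the abstract refers to, a reading comprehension model typically begins by mapping each token of the question and the passage to a pretrained vector. The sketch below is a minimal, illustrative version of that lookup; the toy vectors and the `embed` helper are assumptions for demonstration, not the paper's actual models, which would use full pretrained GloVe, fastText, or Word2Vec tables.

```python
import numpy as np

# Toy pretrained embedding table (illustrative only; real GloVe/fastText
# models supply 100-300 dimensional vectors for hundreds of thousands of words).
EMBEDDINGS = {
    "who": np.array([0.1, 0.3, 0.2]),
    "wrote": np.array([0.4, 0.1, 0.5]),
    "the": np.array([0.0, 0.2, 0.1]),
    "book": np.array([0.3, 0.6, 0.2]),
}
UNK = np.zeros(3)  # fallback vector for out-of-vocabulary tokens

def embed(text):
    """Map a question or passage to a (num_tokens, dim) matrix of word vectors."""
    return np.stack([EMBEDDINGS.get(tok, UNK) for tok in text.lower().split()])

question_matrix = embed("Who wrote the book")
print(question_matrix.shape)  # one row per token, one column per dimension
```

Swapping the embedding table while keeping the downstream model fixed is exactly the kind of extrinsic comparison the abstract describes.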
