Abstract

Machine reading comprehension (MRC) is a question-answering task in which computers understand given passages and answer related questions. Several previous models have tried to combine linguistic features with word embedding features to improve MRC performance; however, they could not obtain successful results because of feature interference caused by simple concatenation of the two. To resolve this problem, an MRC model called the gated feature network (GF-Net) is proposed, in which linguistic features are used selectively according to their roles in the answer-selection process. In the GF-Net, the weights of the linguistic features are controlled automatically through gate mechanisms called feature gates. In experiments on the Stanford Question Answering Dataset (SQuAD), MRC models with feature gates achieved a 0.67%p higher average exact match (EM) score and a 0.64%p higher average F1-score than models without feature gates. In addition, the GF-Net outperformed previous MRC models to which feature gates were added. Based on these experimental results, it is concluded that the gate mechanism can improve the performance of MRC models and that the GF-Net architecture is well suited to the MRC task.
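The abstract does not give the exact gate formulation, but a feature gate of the kind described is commonly realized as an element-wise sigmoid gate conditioned on both the word embedding and the linguistic feature vector. The sketch below is a minimal, hypothetical illustration of that idea in plain Python: the function name `feature_gate` and the parameter shapes are assumptions, not the paper's actual interface.

```python
import math

def sigmoid(x):
    # Standard logistic function, squashing scores into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def feature_gate(word_emb, ling_feat, W, b):
    """Gate linguistic features before concatenation (illustrative sketch).

    word_emb  : word embedding vector (list of floats)
    ling_feat : linguistic feature vector, e.g. POS/NER indicators
    W, b      : gate parameters; W has one row per linguistic feature
                dimension, each row spanning [word_emb; ling_feat]

    Computes g_i = sigmoid(W_i . [word_emb; ling_feat] + b_i) and returns
    the gated representation [word_emb; g * ling_feat], so each linguistic
    feature is scaled by a learned weight in (0, 1) instead of being
    concatenated as-is, mitigating feature interference.
    """
    concat = word_emb + ling_feat  # [word_emb; ling_feat]
    gated = []
    for i in range(len(ling_feat)):
        z = sum(w * x for w, x in zip(W[i], concat)) + b[i]
        gated.append(sigmoid(z) * ling_feat[i])
    return word_emb + gated
```

In a trained model, `W` and `b` would be learned jointly with the rest of the network, so the gate can suppress a linguistic feature when it would interfere with the embedding signal and pass it through when it helps answer selection.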
