Machine reading and comprehension using differentiable reasoning models has recently been studied extensively, and memory networks have demonstrated promising performance on reasoning tasks such as factual reasoning and basic deduction. However, as a natural language understanding model, memory networks still face challenges in the numeric representation of sentences, particularly in how the text is represented and in the effectiveness of the learned vector representations. In this paper, inspired by the convolution mechanism in the computer vision domain, we propose a raw-text representation architecture for question answering named convolutional end-to-end memory networks (CMemN2N). The convolutional architecture allows the model to abstract the local information useful for reasoning into significant numeric sentence representations, which are then passed to the follow-up sub-tasks. Our experiments show that CMemN2N achieves better results on most of the 20 bAbI tasks, improving the average result over the state-of-the-art.
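The abstract does not specify the exact filter configuration, so the following is only a minimal sketch of the kind of convolutional sentence encoder it describes: a 1-D convolution over word embeddings followed by max-over-time pooling, producing a fixed-size memory vector. The function name, filter width, and dimensions are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

def conv_sentence_encoder(embeddings, filters, width=3):
    """Illustrative sketch: encode a sentence as a fixed-size vector via a
    1-D convolution over word embeddings plus max-over-time pooling.

    embeddings: (seq_len, emb_dim) word vectors for one sentence
    filters:    (num_filters, width * emb_dim) convolution filters
    """
    seq_len, emb_dim = embeddings.shape
    # Zero-pad so every word position yields a full window.
    pad = width // 2
    padded = np.vstack([np.zeros((pad, emb_dim)),
                        embeddings,
                        np.zeros((pad, emb_dim))])
    # Slide a window of `width` words, flatten each, apply all filters.
    windows = np.stack([padded[i:i + width].ravel()
                        for i in range(seq_len)])    # (seq_len, width*emb_dim)
    feature_maps = np.tanh(windows @ filters.T)      # (seq_len, num_filters)
    # Max-over-time pooling keeps the strongest local feature per filter,
    # giving a fixed-size representation regardless of sentence length.
    return feature_maps.max(axis=0)                  # (num_filters,)

rng = np.random.default_rng(0)
sentence = rng.normal(size=(7, 10))      # 7 words, 10-dim embeddings (toy sizes)
filters = rng.normal(size=(20, 3 * 10))  # 20 filters over 3-word windows
memory_vector = conv_sentence_encoder(sentence, filters)
print(memory_vector.shape)  # (20,)
```

In a memory-network setting, a vector like `memory_vector` would replace the bag-of-words or position-encoded sentence representation when filling the memory slots.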