Abstract

Currently, research on question answering (QA) with deep learning methods is a hotspot in natural language processing, and most of it has focused on English or Chinese, for which large-scale open corpora such as WikiQA and DoubanQA are available. However, applying deep learning methods to QA for low-resource languages such as Tibetan remains a challenge. In this paper, we propose a hybrid network model for Tibetan QA that combines a convolutional neural network (CNN) with a long short-term memory network (LSTM) to extract effective features from small-scale corpora. Because Tibetan has strong grammatical rules, we use a language model to decode the output of the LSTM layer, which makes the answers more accurate and fluent. In addition, we add batch normalization to accelerate deep network training and prevent overfitting. Experiments show that the ACC@1 value of the proposed model on Tibetan QA is 126.2% higher than that of the baseline model.
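The pipeline the abstract describes (convolutional feature extraction, batch normalization, then an LSTM over the normalized features) can be sketched as follows. This is a minimal NumPy illustration under assumed, illustrative layer sizes and random weights; it is not the paper's actual architecture or hyperparameters, and it omits the language-model decoding step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(x, w):
    """1-D convolution over token embeddings.
    x: (seq_len, emb_dim); w: (kernel, emb_dim, filters)
    returns (seq_len - kernel + 1, filters) after ReLU."""
    k = w.shape[0]
    out = np.stack([np.einsum('ke,kef->f', x[t:t + k], w)
                    for t in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)

def batch_norm(x, eps=1e-5):
    """Normalize each feature column (a simplified stand-in for batch norm)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def lstm(xs, hidden):
    """Run a single-layer LSTM over the feature sequence xs: (T, d),
    returning the final hidden state as a fixed-size summary."""
    d = xs.shape[1]
    Wx = rng.normal(0, 0.1, (4 * hidden, d))   # input-to-gate weights
    Wh = rng.normal(0, 0.1, (4 * hidden, hidden))  # recurrent weights
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = Wx @ x + Wh @ h
        i, f, o, g = np.split(z, 4)            # input, forget, output, cell gates
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Toy forward pass: 10 tokens, 8-dim embeddings, 16 conv filters, kernel 3.
emb = rng.normal(size=(10, 8))
filters = rng.normal(0, 0.1, (3, 8, 16))
features = batch_norm(conv1d(emb, filters))    # (8, 16) normalized features
answer_repr = lstm(features, hidden=12)        # (12,) question representation
print(answer_repr.shape)
```

In the full model, a representation like `answer_repr` would feed the decoding stage, where the language model re-scores candidate outputs to exploit Tibetan's grammatical regularity.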
