Abstract

This paper presents a deep neural solver to automatically solve math word problems. In contrast to previous statistical learning approaches, we directly translate math word problems to equation templates using a recurrent neural network (RNN) model, without sophisticated feature engineering. We further design a hybrid model that combines the RNN model and a similarity-based retrieval model to achieve additional performance improvement. Experiments conducted on a large dataset show that the RNN model and the hybrid model significantly outperform state-of-the-art statistical learning methods for math word problem solving.
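The similarity-based retrieval component described above can be illustrated with a minimal sketch: score each solved problem in a bank by lexical similarity to the query problem and reuse the best match's equation template when the similarity is high enough (in the hybrid model, low-similarity cases would fall back to the seq2seq model). This is pure Python with a hypothetical problem bank and Jaccard similarity; the paper's actual similarity measure and data are not reproduced here.

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two problem texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def retrieve_template(problem, bank, threshold=0.5):
    """Return the equation template of the most similar solved problem,
    or None if nothing in the bank is similar enough (the case the
    seq2seq model would handle in the hybrid system)."""
    best_sim, best_template = 0.0, None
    for text, template in bank:
        sim = jaccard(problem, text)
        if sim > best_sim:
            best_sim, best_template = sim, template
    return best_template if best_sim >= threshold else None

# Hypothetical solved-problem bank: (problem text, equation template)
bank = [
    ("Tom has n1 apples and buys n2 more how many apples does he have",
     "x = n1 + n2"),
    ("A train travels n1 km in n2 hours what is its speed",
     "x = n1 / n2"),
]
print(retrieve_template(
    "Sara has n1 apples and buys n2 more how many apples does she have",
    bank))  # → x = n1 + n2
```

The threshold is the key design knob: set too low, the retrieval model overrides the seq2seq model on dissimilar problems; set too high, it rarely fires.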

Highlights

  • Developing computer models to automatically solve math word problems has been an interest of NLP researchers since 1963 (Feigenbaum et al., 1963; Bobrow, 1964; Briars and Larkin, 1984; Fletcher, 1985).

  • Each approach is evaluated on each dataset via 5-fold cross-validation: in each run, 4 folds are used for training and 1 fold for testing.

  • The performance of the hybrid model, the seq2seq model, and the retrieval model is examined on the two datasets.


Introduction

Developing computer models to automatically solve math word problems has been an interest of NLP researchers since 1963 (Feigenbaum et al., 1963; Bobrow, 1964; Briars and Larkin, 1984; Fletcher, 1985). Although progress has been made on this task, the performance of state-of-the-art techniques is still quite low on large datasets with diverse problem types (Huang et al., 2016). In a typical problem, the reader is asked to infer how many pens Dan and Jessica have, based on the constraints provided. Given the success of deep neural networks (DNNs) on many NLP tasks
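The notion of an equation template can be made concrete: significant numbers in the problem text are mapped to slot variables (here written n1, n2, ...), the model predicts a template over those slots, and the template is instantiated with the original numbers to produce an answer. Below is a minimal sketch of that mapping and instantiation step, using hypothetical numbers for the Dan-and-Jessica pens example mentioned above; the slot-naming convention is illustrative, not the paper's exact scheme.

```python
import re

def number_map(problem):
    """Replace each number in the text with a slot variable n1, n2, ...
    and return the slotted text plus the slot-to-value mapping."""
    values = {}
    def repl(match):
        slot = f"n{len(values) + 1}"
        values[slot] = float(match.group())
        return slot
    slotted = re.sub(r"\d+(?:\.\d+)?", repl, problem)
    return slotted, values

def instantiate(template, values):
    """Substitute slot values into a predicted template and evaluate
    its right-hand side, e.g. 'x = n1 + n2' with {n1: 5, n2: 3} -> 8.0."""
    rhs = template.split("=", 1)[1]
    for slot, val in values.items():
        rhs = rhs.replace(slot, str(val))
    return eval(rhs)  # acceptable in a toy sketch; a real solver would parse

# Hypothetical instance of the pens problem:
text = "Dan has 5 pens and Jessica has 3 pens. How many do they have together?"
slotted, values = number_map(text)
# slotted: "Dan has n1 pens and Jessica has n2 pens. ..."
print(instantiate("x = n1 + n2", values))  # → 8.0
```

Predicting templates over slots rather than raw numbers is what lets one training problem generalize to any problem sharing its equation structure.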

