Abstract

Given several topic words and a math expression as input, math word problem generation aims to produce a problem that is related to the topic words and can be answered by the given expression. Sequence-to-sequence neural network models have achieved considerable progress on many natural language generation tasks, but they do not effectively account for the characteristics of math word problem generation: they may produce problems that are unrelated to the topic words and expressions, or problems that cannot be solved. In this paper, we propose a new model, MWPGen, for automatically generating math word problems. MWPGen uses a topic-expression co-attention mechanism to extract relevant information between the topic words and the expression. Further, we fine-tune MWPGen via reinforcement learning, using the solving result of the generated problem as the reward. MWPGen improves performance on popular automatic evaluation metrics and increases the solvability of the generated problems.
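The abstract does not specify the exact form of the topic-expression co-attention, but the general pattern can be sketched: compute an affinity matrix between topic-word embeddings and expression-token embeddings, then normalize it along each axis to produce cross-attended summaries of one side conditioned on the other. The function names, shapes, and use of NumPy below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(topic, expr):
    """Generic co-attention sketch (hypothetical, not MWPGen's exact layer).

    topic: (m, d) topic-word embeddings
    expr:  (n, d) expression-token embeddings
    """
    affinity = topic @ expr.T             # (m, n) pairwise relevance scores
    a_topic = softmax(affinity, axis=1)   # each topic word attends over expression tokens
    a_expr = softmax(affinity, axis=0)    # each expression token attends over topic words
    topic_ctx = a_topic @ expr            # (m, d) expression-aware topic representations
    expr_ctx = a_expr.T @ topic           # (n, d) topic-aware expression representations
    return topic_ctx, expr_ctx

rng = np.random.default_rng(0)
t_ctx, e_ctx = co_attention(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
```

In such a design, the two attended representations would then feed the decoder so that generated tokens stay grounded in both the topic words and the expression.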
