Abstract

Hybrid tabular-textual question answering (QA) is a crucial task in natural language processing that involves reasoning over and locating answers from heterogeneous information sources, primarily through numerical reasoning and span extraction. Current techniques for numerical reasoning often rely on autoregressive models to decode program sequences. However, these methods suffer from exposure bias and error propagation, which can significantly reduce the accuracy of program generation as decoding unfolds. To address these challenges, this paper proposes a novel multitasking hybrid tabular-textual question answering (MHTTQA) framework. Instead of generating operators and operands step by step, the framework generates entire program tuples independently and in parallel. This approach eliminates error propagation and greatly accelerates program generation. The effectiveness of the method is demonstrated through experiments on the ConvFinQA and MultiHiertt datasets. Our proposed model outperforms the strong FinQANet baselines by 7% execution accuracy and 7.2% program accuracy, and the MT2Net baselines by 20.9% EM and 9.4% F1. In addition, the program generation speed of our method far exceeds that of the baselines, and our non-autoregressive program generation is more resilient to an increasing number of numerical reasoning steps, further highlighting the advantages of the proposed framework for hybrid tabular-textual QA.
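The contrast between step-by-step decoding and parallel tuple generation can be made concrete with a toy sketch. The snippet below is purely illustrative: the operator/operand vocabularies, the random-embedding scorer, and the function names (decode_autoregressive, decode_parallel) are assumptions for exposition and do not reflect the actual MHTTQA architecture. It only shows why the autoregressive decoder lets an early mistake contaminate later steps, whereas the parallel decoder predicts every (operator, operand, operand) tuple directly from the encoder state without conditioning on earlier outputs.

```python
# Minimal sketch contrasting autoregressive program decoding with
# parallel (non-autoregressive) tuple generation. All names, shapes,
# and the toy scoring function are illustrative assumptions, not the
# paper's actual MHTTQA model.
import numpy as np

rng = np.random.default_rng(0)

OPERATORS = ["add", "subtract", "multiply", "divide"]
OPERANDS = ["table_cell_0", "table_cell_1", "text_num_0", "prev_result"]

def score(query: np.ndarray, candidates: np.ndarray) -> int:
    """Toy argmax scorer standing in for a learned classification head."""
    return int(np.argmax(candidates @ query))

def decode_autoregressive(enc: np.ndarray, num_steps: int, dim: int = 16):
    """Emit (operator, operand, operand) tuples one token at a time,
    conditioning each step on the previously generated token, so an
    early error propagates to every later prediction."""
    op_emb = rng.normal(size=(len(OPERATORS), dim))
    arg_emb = rng.normal(size=(len(OPERANDS), dim))
    program, prev = [], np.zeros(dim)
    for _ in range(num_steps):
        state = enc + prev                       # depends on previous output
        op = score(state, op_emb)
        a1 = score(state + op_emb[op], arg_emb)  # conditioned on the operator
        a2 = score(state + arg_emb[a1], arg_emb) # conditioned on operand 1
        program.append((OPERATORS[op], OPERANDS[a1], OPERANDS[a2]))
        prev = op_emb[op] + arg_emb[a1] + arg_emb[a2]
    return program

def decode_parallel(enc: np.ndarray, num_steps: int, dim: int = 16):
    """Predict every program tuple independently from the encoder state,
    so no prediction conditions on earlier ones and all tuples can be
    produced in a single parallel pass."""
    op_emb = rng.normal(size=(len(OPERATORS), dim))
    arg1_head = rng.normal(size=(len(OPERANDS), dim))
    arg2_head = rng.normal(size=(len(OPERANDS), dim))
    step_emb = rng.normal(size=(num_steps, dim))  # one query per tuple slot
    return [
        (OPERATORS[score(enc + s, op_emb)],
         OPERANDS[score(enc + s, arg1_head)],
         OPERANDS[score(enc + s, arg2_head)])
        for s in step_emb
    ]

if __name__ == "__main__":
    enc = rng.normal(size=16)  # stand-in for an encoded question/context
    print(decode_autoregressive(enc, num_steps=3))
    print(decode_parallel(enc, num_steps=3))
```

Because each tuple in the parallel decoder is scored only against the encoder state and its own slot query, no prediction is exposed to earlier model outputs at inference time, which is the intuition behind the reported resilience to longer reasoning chains.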
