Abstract
Question answering (QA) is a sub-field of Natural Language Processing (NLP) that focuses on developing systems capable of answering natural language queries. Within this domain, multi-hop question answering is an advanced QA task that requires gathering and reasoning over multiple pieces of information from diverse sources or passages. Question decomposition has proven to be a valuable approach for handling the complexity of multi-hop questions: it breaks a complex question down into simpler sub-questions, reducing the complexity of the problem. However, existing question decomposition methods often rely on training data, which may not be readily available for low-resource languages or specialized domains. To address this issue, we propose a novel approach that uses pre-trained masked language models to score decomposition candidates in a zero-shot manner. The method generates decomposition candidates, scores them with a pseudo-log-likelihood estimate, and ranks them by score. To evaluate the efficacy of the decomposition process, we conducted experiments on two decomposition-annotated datasets in two different languages, Arabic and English. We then integrated our approach into a complete QA system and evaluated reading comprehension performance on the HotpotQA dataset. The results show that although the system exhibited a small drop in performance, it still maintained a significant advantage over the baseline model. The proposed approach highlights the efficiency of the language model scoring technique in complex reasoning tasks such as multi-hop question decomposition.
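To make the scoring step concrete, the sketch below shows one standard way to compute a pseudo-log-likelihood under a masked language model and use it to rank decomposition candidates: each token is masked in turn, and the log-probability the model assigns to the true token is summed. The model choice (bert-base-multilingual-cased, chosen since the paper covers Arabic and English), the helper name, and the example candidates are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal pseudo-log-likelihood (PLL) scoring sketch with a masked LM.
# Assumption: any multilingual masked LM works here; the paper's exact
# model and candidate-generation step are not reproduced.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Sum the log-probability of each token when that token is masked."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the [CLS] (first) and [SEP] (last) special tokens.
    for i in range(1, ids.size(0) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Rank decomposition candidates: a higher PLL marks a more fluent,
# more plausible decomposition (hypothetical example candidates).
candidates = [
    "Who directed the film? In what year was that director born?",
    "Who directed the film? In what year born that director was?",
]
best = max(candidates, key=pseudo_log_likelihood)
```

Because the score comes from a pre-trained model's probabilities alone, no decomposition-specific training data is needed, which is what makes the approach zero-shot.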