This work tackles the challenge of ranking-based machine reading comprehension (MRC), where a question answering (QA) system generates a ranked list of relevant answers for each question instead of extracting a single answer. We highlight the limitations of traditional learning methods in this setting, particularly under limited training data. To address these issues, we propose a novel ranking-inspired learning method that focuses on ranking multiple answer spans rather than extracting a single answer, leveraging lexical overlap as weak supervision to guide the ranking process. We evaluate our approach on the Qur'an Reading Comprehension Dataset (QRCD), a low-resource Arabic dataset built over the Holy Qur'an. To mitigate the low-resource challenge, we employ transfer learning with external resources to fine-tune various transformer-based models. Experimental results demonstrate that our proposed method significantly outperforms standard mechanisms across different models. Furthermore, we show its better alignment with the ranking-based MRC task and the effectiveness of external resources for this low-resource dataset. Our best-performing model achieves a state-of-the-art partial Reciprocal Rank (pRR) score of 63.82%, surpassing the previous best-known score of 58.60%. To foster further research, we release our code (GitHub repository: github.com/mohammed-elkomy/weakly-supervised-mrc), trained models, and detailed experiments to the community.
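The abstract does not spell out how lexical overlap is turned into a ranking signal; a minimal sketch of one plausible formulation is token-level F1 overlap between each candidate span and the gold answer, used to order candidates. The function names (`token_f1`, `rank_spans`) and the whitespace tokenization are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def token_f1(candidate: str, reference: str) -> float:
    """Token-level F1 overlap between two answer spans (whitespace tokens).

    Illustrative weak-supervision signal; the paper's exact overlap
    measure and tokenizer may differ.
    """
    cand, ref = candidate.split(), reference.split()
    common = Counter(cand) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def rank_spans(candidates: list[str], reference: str) -> list[str]:
    """Order candidate spans by lexical overlap with the gold answer,
    highest first, yielding weak ranking labels for training."""
    return sorted(candidates, key=lambda c: token_f1(c, reference), reverse=True)

# Hypothetical candidates for one question:
spans = ["the holy book", "a book", "unrelated text"]
ranked = rank_spans(spans, "the holy book")
print(ranked)  # exact match first, partial overlap next, no overlap last
```

Under this formulation, spans that partially overlap the gold answer still receive graded credit, which matches the partial-matching spirit of the pRR metric used for evaluation.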