Abstract

Text summarization is a widely researched problem in natural language processing. Multiple techniques have been proposed to tackle it, yet many of these methods still exhibit limitations, such as the requirement for large training datasets, which may not always be available, and, more importantly, the lack of interpretability or transparency of the model. In this paper, we propose using a meta-learning algorithm to train a deep learning model for extractive text summarization, and then applying various explanatory techniques, such as SHAP (Shapley, 1953), linear regression (Lederer, 2022), decision trees (Fürnkranz, 2010), and input modification, to gain insight into the model's decision-making process. To evaluate the effectiveness of our approach, we will compare it to other popular natural language processing models, such as BERT (Miller, 2019) and XLNet (Yang et al., 2020), using the ROUGE metrics (Lin, 2004). Overall, our proposed approach offers a promising solution to the limitations of existing methods and a framework for improving the explainability of deep learning models in other natural language processing tasks.
