Abstract
Transformer-based language models have advanced Natural Language Processing (NLP), achieving state-of-the-art results across various tasks. However, their complex architectures often obscure their decision-making processes, making transparency a critical challenge, especially in sensitive applications. The emerging field of eXplainable Artificial Intelligence (XAI) seeks to address this by enhancing model transparency. Nonetheless, the primary focus of XAI has largely been on high-resource languages, neglecting low-resource ones such as Arabic. In this paper, we first show the importance of studying XAI in the Arabic language. We then detail our methodology, which involves adapting AraBERT and AraGPT models to specific tasks, namely Arabic sentiment analysis and semantic question similarity. Next, we conduct an empirical study to evaluate various XAI methods, specifically gradient-based and perturbation-based approaches. These methods are assessed using two key metrics: faithfulness and plausibility. Our findings suggest that while gradient-based methods are more faithful, perturbation-based methods align better with human judgment.
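To make the evaluated setting concrete, the following is a minimal sketch of a gradient-based attribution over an AraBERT-style sentiment classifier, using Integrated Gradients from the Captum library. The checkpoint name, label layout, and example sentence are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch (illustrative, not the authors' implementation): token-level
# attributions for an AraBERT sentiment classifier via Layer Integrated Gradients.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

model_name = "aubmindlab/bert-base-arabertv02"  # assumed checkpoint; fine-tuned weights not shown
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

def forward_fn(input_ids, attention_mask):
    # Return the logit of the assumed "positive" class so gradients are taken w.r.t. it.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 1]

text = "الخدمة كانت ممتازة"  # "The service was excellent" (illustrative example)
enc = tokenizer(text, return_tensors="pt")
baseline_ids = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)  # pad-token baseline

lig = LayerIntegratedGradients(forward_fn, model.bert.embeddings)
attributions, delta = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline_ids,
    additional_forward_args=(enc["attention_mask"],),
    return_convergence_delta=True,
)

# Sum over the embedding dimension to obtain one importance score per token.
scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
for tok, score in zip(tokens, scores.tolist()):
    print(f"{tok}\t{score:.4f}")
```

Faithfulness can then be probed, for example, by masking the highest-scoring tokens and measuring the drop in the predicted probability, while plausibility compares the token scores against human rationale annotations.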