Explainable AI (XAI) is pivotal for understanding complex ‘black-box’ models, particularly in text analysis, where transparency is essential yet challenging to achieve. This paper introduces SIDU-TXT, an adaptation of the ‘Similarity Difference and Uniqueness’ (SIDU) method, originally developed for image classification, to textual data. SIDU-TXT generates word-level heatmaps from feature activation maps, highlighting the textual elements that are contextually important to a model's predictions. Given the absence of a unified standard for assessing XAI methods, we evaluate SIDU-TXT with a comprehensive three-tiered framework – Functionally-Grounded, Human-Grounded, and Application-Grounded – across varied experimental setups. Our findings show that SIDU-TXT is effective for sentiment analysis, outperforming benchmarks such as Grad-CAM and LIME in both Functionally-Grounded and Human-Grounded assessments. In a legal-domain application involving complex asylum decision-making, SIDU-TXT achieves competitive but not conclusive results, underscoring the nuanced expectations of domain experts. This work advances the field by offering a methodical, holistic approach to XAI evaluation in NLP, and it calls for further research to close the remaining gap with expert expectations and to refine interpretability methods for intricate applications. The study underscores the critical role of extensive evaluation in fostering AI technologies that are not only technically faithful to the model but also comprehensible and trustworthy for end-users.
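For intuition, the sketch below illustrates the general shape of SIDU-style word-importance scoring as the abstract describes it: masked versions of the input are scored by a Similarity Difference (SD) term and a Uniqueness (U) term, and the word-level heatmap is the mask-weighted combination of the two. This is a minimal, illustrative sketch, not the paper's implementation; `predict_proba`, the Gaussian-kernel form of SD, and the binary word masks (assumed here to be derived from feature activation maps) are all assumptions.

```python
import numpy as np

def sidu_txt_heatmap(predict_proba, tokens, masks, sigma=0.25):
    """Illustrative SIDU-style word-importance scoring (not the paper's exact code).

    predict_proba: callable mapping a token list to a class-probability vector.
    tokens: list of words in the input text.
    masks: (n_masks, n_tokens) binary array, assumed derived from feature
           activation maps as in SIDU-TXT.
    """
    p_orig = predict_proba(tokens)

    # Prediction for each masked version of the text (masked-out words removed).
    preds = np.array([
        predict_proba([t for t, keep in zip(tokens, m) if keep])
        for m in masks
    ])

    # Similarity Difference (SD): how close each masked prediction stays
    # to the original prediction (Gaussian kernel on the L2 distance).
    dists = np.linalg.norm(preds - p_orig, axis=1)
    sd = np.exp(-(dists ** 2) / (2 * sigma ** 2))

    # Uniqueness (U): how different each masked prediction is from all others.
    pairwise = np.linalg.norm(preds[:, None, :] - preds[None, :, :], axis=-1)
    u = pairwise.sum(axis=1)

    # Word-level heatmap: masks weighted by the combined SD * U importance.
    weights = sd * u
    return (weights[:, None] * masks).sum(axis=0) / len(masks)
```

Deriving the masks from feature activation maps, rather than from random perturbations, is what distinguishes the SIDU family from purely perturbation-based explainers such as LIME.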