Building transparent and trustworthy AI-powered systems for disease diagnosis has become more important than ever, given the opacity of black-box models. A lack of transparency and explainability in AI-driven models can propagate biases and erode the trust of patients and medical practitioners. To address this challenge, Explainable AI (XAI) is rapidly emerging as a practical approach to tackling ethical concerns in the health sector. The overarching purpose of this paper is to highlight advances in XAI for tuberculosis (TB) diagnosis and to identify the benefits and challenges associated with improved trust in AI-powered TB diagnosis. We explore the potential of XAI to improve TB diagnosis and propose a comprehensive roadmap for promoting its adoption. We examine the significant problems associated with using XAI in TB diagnosis and argue that XAI is critical for reliable diagnosis because it improves the interpretability of AI decision-making processes and helps recognise possible biases and mistakes. We evaluate techniques and methods for XAI in TB diagnosis and examine their ethical and societal ramifications. By leveraging explainable AI, we can create a more reliable and trustworthy TB diagnostic framework, ultimately improving patient outcomes and global health. Finally, we provide thorough recommendations for developing and implementing XAI in TB diagnosis using X-ray imaging.