Abstract

Building transparent and trustworthy AI-powered systems for disease diagnosis has become paramount, because the inner workings of black-box models are poorly understood. A lack of transparency and explainability in AI-driven models can propagate biases and erode the trust of patients and medical practitioners. To address this challenge, Explainable AI (XAI) is rapidly emerging as a practical approach to tackling ethical concerns in the health sector. The overarching purpose of this paper is to highlight advances in XAI for tuberculosis (TB) diagnosis and to identify the benefits and challenges associated with improving trust in AI-powered TB diagnosis. We explore the potential of XAI to improve TB diagnosis, outline a plan for promoting XAI, and examine the significant problems associated with its use in this setting. We argue that XAI is critical for reliable TB diagnosis because it improves the interpretability of AI decision-making processes and helps to recognise possible biases and mistakes. We evaluate techniques and methods for XAI in TB diagnosis and examine their ethical and societal ramifications. By leveraging explainable AI, we can create a more reliable and trustworthy TB diagnostic framework, ultimately improving patient outcomes and global health. Finally, we provide thorough recommendations for developing and implementing XAI in TB diagnosis using X-ray imaging.
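To make the idea of interpreting an image classifier's decision concrete, the sketch below shows occlusion sensitivity, one generic XAI technique applicable to X-ray classifiers: hide one image region at a time and measure how much the model's score drops, so large drops mark regions the model relies on. This is a minimal illustration, not the paper's method; `occlusion_map` and the stand-in `toy_model` are hypothetical names, and a real TB system would wrap a trained CNN rather than an intensity heuristic.

```python
import numpy as np

def occlusion_map(model_fn, image, patch=8, baseline=0.0):
    """Slide a patch over the image, replace it with a baseline value,
    and record the drop in the model's score for each position.
    Returns a coarse heatmap with one cell per occluded patch."""
    h, w = image.shape
    base_score = model_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # hide this region
            heat[i // patch, j // patch] = base_score - model_fn(occluded)
    return heat

# Hypothetical stand-in for a trained classifier: "TB score" is simply the
# mean intensity of the upper-left quadrant, so only that region matters.
def toy_model(img):
    return float(img[:16, :16].mean())

img = np.ones((32, 32))
heat = occlusion_map(toy_model, img, patch=8)
# Only the four patches inside the upper-left quadrant produce a score drop.
```

In a diagnostic setting, overlaying such a heatmap on the chest X-ray lets a clinician check whether the model attends to lung fields rather than artefacts, which is one way XAI can expose the biases and mistakes discussed above.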
