Background and Objective: Artificial Intelligence (AI) has advanced significantly across several industries, including healthcare, driven by better fusion methodologies, improved data accessibility, and greater processing power. Although deep learning algorithms excel in challenging scenarios, their lack of transparency hinders widespread adoption. Explainable AI (XAI) addresses this limitation by endowing AI-driven systems with transparency, interpretability, and reliability. The need for transparency in critical sectors such as healthcare has prompted increased academic focus on investigating and understanding these frameworks. This paper thoroughly examines the latest research and advancements in XAI, focusing on its integration with the Internet of Medical Things (IoMT) in healthcare settings. The final section addresses open issues concerning XAI and IoMT and outlines directions for future research.

Methods: A thorough search was carried out across scholarly databases and digital libraries, including IEEE Xplore, ACM Digital Library, Wiley Interscience, Taylor and Francis, ScienceDirect, Scopus, Springer, Google Scholar, Citeseer Library, Semantic Scholar, and other relevant sources. Articles published from March 2004 to February 2024 were analyzed, focusing on AI models that explain their decisions for various healthcare problems. The search query combined "Explainable AI" with terms such as "Open Black Box", "Healthcare", "Transparent Models", "Interpretable Deep Learning", "Machine Learning", "Medical Information System", "Accountability", "Smart Healthcare", and "Responsible AI", reflecting the breadth of the field. The authors also examined the techniques and datasets used to address healthcare problems, including those incorporating IoMT.
Results: The evaluation covered more than 105 published models that use clinical data to diagnose diverse disorders, with particular attention to models incorporating IoMT components. In addition to diagnosing illness, these models explain the decisions they make. The models were classified according to input data, methodology, and IoMT integration. The categorization spans machine learning and deep learning methodologies, with emphasis on explainability models connected to IoMT; the explanation methods were classified as model-agnostic or model-specific, global or local, and ante-hoc or post-hoc.

Conclusion: This extensive study examines machine learning and deep learning models developed from clinical data and associated with IoMT for illness detection. These models improve efficiency and accuracy and provide essential assistance to medical personnel. Explaining each decision a model makes accelerates disease detection, reducing medical expenses for patients and healthcare systems. The integration of XAI and IoMT marks a fundamental shift in healthcare toward more transparent and connected systems, enabling significant advances in medical decision-making.
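To make the taxonomy above concrete, the following minimal sketch (not drawn from any of the reviewed models; the toy risk model and its clinical feature names are hypothetical) illustrates one common combination of those categories: a model-agnostic, post-hoc, global explanation computed by permutation importance, where each feature column is shuffled and the resulting disturbance to the model's predictions is measured.

```python
# Sketch of a model-agnostic, post-hoc, global explanation (permutation
# importance). The "black box" below is a hypothetical clinical risk score;
# a real study would substitute its own trained model and patient data.
import random

def risk_model(features):
    # Hypothetical black-box model: weighted sum of three clinical features.
    age, bp, glucose = features
    return 0.5 * age + 0.3 * bp + 0.2 * glucose

def permutation_importance(model, X, n_repeats=10, seed=0):
    """For each feature, shuffle its column and record the mean absolute
    change in predictions; larger disturbance implies a more influential
    feature. Needs only prediction access, so it is model-agnostic."""
    rng = random.Random(seed)
    baseline = [model(x) for x in X]
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the feature's link to the outputs
            perturbed = [list(x) for x in X]
            for i, v in enumerate(col):
                perturbed[i][j] = v
            preds = [model(x) for x in perturbed]
            deltas.append(sum(abs(p - b) for p, b in zip(preds, baseline)) / len(X))
        importances.append(sum(deltas) / n_repeats)
    return importances

# Toy cohort: [age, blood pressure, glucose] per patient (illustrative values).
X = [[60, 120, 90], [45, 140, 110], [70, 130, 100], [50, 110, 85]]
scores = permutation_importance(risk_model, X)
print(scores)  # one importance score per feature; higher = more influential
```

Model-specific methods would instead inspect the model's internals (for example, coefficients or attention weights), and a local explanation would score a single patient's prediction rather than aggregate over the cohort.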