Abstract
Artificial intelligence (AI) models are gaining widespread application in areas such as healthcare, especially robotic surgery. The output of these models must be easily explainable to surgeons and other stakeholders. Such explanations help stakeholders and end-users establish trust in, and understand, the model's output. However, there are recognized limitations to fully deploying these AI models, particularly in critical areas such as robotic surgery, mainly because of the complexity of their results, patient-safety requirements, and growing security concerns. Explainable AI (XAI) therefore aims to bridge this gap in understanding the results of AI models. Toward this end, this chapter provides an overview of the current applications, importance, and limitations of XAI in robotic-assisted surgery within medical decision support systems (MDSS). The chapter discusses the patient privacy and security concerns that arise when XAI techniques are used in robotic surgery. It also explores current trends and open issues for the future deployment of XAI-supported robotic-assisted surgery in medical decision-making. Finally, the chapter addresses the limitations of the machine learning (ML) tools used for robotic surgery.