Abstract

Explainable artificial intelligence (AI) focuses on developing models and algorithms that provide transparent and interpretable insights into decision-making processes. By elucidating the reasoning behind AI-driven diagnoses and treatment recommendations, explainability can help earn the trust of healthcare experts and assist them in difficult diagnostic tasks. Sepsis is a serious condition that arises when the body's immune response to an infection becomes dysregulated, causing tissue and organ damage and potentially leading to death. Physicians face challenges in diagnosing and treating sepsis due to its complex pathogenesis. This work provides an overview of recent studies that propose explainable AI models for predicting sepsis onset and sepsis mortality using intensive care data. The general findings show that explainable AI can identify the most significant features guiding the model's decision-making process. Future research will investigate explainability through argumentation theory using intensive care data focused on sepsis patients.
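To illustrate the kind of feature-level explanation the reviewed studies report, the sketch below trains a classifier on synthetic ICU-style features and ranks them with permutation importance, a simple model-agnostic stand-in for the feature-attribution methods (e.g., SHAP values) those studies typically use. The feature names, data, and model choice are illustrative assumptions, not taken from any of the reviewed papers.

```python
# Minimal sketch (assumptions): synthetic vital-sign/lab features, a
# gradient-boosted classifier as the sepsis predictor, and permutation
# importance as the explanation method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical ICU features; values are synthetic, not clinical data.
feature_names = ["heart_rate", "resp_rate", "temperature", "wbc_count", "lactate"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic label loosely driven by "lactate" and "heart_rate" so the
# explainer has a signal to recover.
logits = 1.5 * X[:, 4] + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:12s} {result.importances_mean[idx]:.3f}")
```

Running this prints the features ranked by how much the model relies on them, which is the kind of output clinicians can inspect to judge whether a sepsis prediction rests on clinically plausible signals.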
