Abstract

Accurate mortality prediction allows Intensive Care Units (ICUs) to adequately benchmark clinical practice and identify patients with unexpected outcomes. Traditionally, simple statistical models have been used to assess patient death risk, often with sub-optimal performance. Deep Learning, on the other hand, holds promise to positively impact clinical practice by leveraging medical data to assist diagnosis and prediction, including mortality prediction. However, it remains an open question whether powerful Deep Learning models attend to correlations backed by sound medical knowledge when generating predictions, so additional interpretability tools are needed to foster trust and encourage the use of AI by clinicians. In this work we present an interpretable Deep Learning model trained on MIMIC-III to predict mortality inside the ICU using raw nursing notes, together with visual explanations of word importance based on the Shapley value. Our model reaches an area under the ROC curve of 0.8629 (±0.0058), outperforming the traditional SAPS-II score and an LSTM recurrent neural network baseline, while providing enhanced interpretability compared with similar Deep Learning approaches. Supporting code can be found at https://github.com/williamcaicedo/ISeeU2.
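To illustrate the kind of Shapley-value attribution the abstract describes, below is a minimal, self-contained sketch of exact Shapley values over the words of a note. The scoring function, token names, and risk weights are hypothetical stand-ins for the paper's trained model; real implementations (e.g. the SHAP library) use approximations, since exact computation is exponential in the number of words.

```python
from itertools import combinations
from math import factorial

def score(words):
    # Hypothetical stand-in for a trained mortality-risk model:
    # assigns an additive risk score to tokens in a nursing note.
    risk = {"sepsis": 0.30, "intubated": 0.20, "stable": -0.15}
    return sum(risk.get(w, 0.0) for w in words)

def shapley_values(words, f):
    """Exact Shapley value of each word with respect to model output f.

    phi(w) = sum over subsets S of the other words of
             |S|!(n-|S|-1)!/n! * (f(S + {w}) - f(S))
    """
    n = len(words)
    phi = {}
    for w in words:
        others = [x for x in words if x != w]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(subset + (w,)) - f(subset))
        phi[w] = total
    return phi

note = ("sepsis", "intubated", "stable")
print(shapley_values(note, score))
```

Because the toy scoring function is additive, each word's Shapley value equals its own risk weight, and the values sum to the model output (the efficiency property) — the same property that lets per-word attributions be rendered as a visual explanation over the note.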
