Abstract

Explainability is crucial in domains where system decisions have significant implications, since human trust in black-box models is limited: a lack of understanding of how these decisions are made hinders the adoption of so-called clinical decision support systems. While neural networks and deep learning methods exhibit impressive performance, they remain less explainable than white-box approaches. Artificial Hydrocarbon Networks (AHN) is an effective black-box model that can be used to support critical clinical decisions if accompanied by explainability mechanisms that instill confidence among clinicians. In this paper, we present a use case involving global and local explanations for AHN models, generated by an automatic procedure called eXplainable Artificial Hydrocarbon Networks (XAHN). We apply XAHN to preeclampsia prognosis, enabling interpretability within an accurate black-box model. Our approach involves training a suitable AHN model using cross-validation with ten repetitions, followed by a comparative analysis against four well-known machine learning techniques. Notably, the AHN model outperformed the others, achieving an F1-score of 74.91%. Additionally, we assess the efficacy of our XAHN explainer through a survey of clinicians, evaluating the goodness and satisfaction of the provided explanations. To the best of our knowledge, this work represents one of the earliest attempts to address the explainability challenge in preeclampsia prediction.
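The evaluation protocol mentioned above, repeated cross-validation scored with the F1 measure, can be sketched in plain Python. The toy data and the threshold "classifier" below are illustrative stand-ins, not the paper's AHN model or its preeclampsia dataset; only the cross-validation and F1 mechanics follow the described setup.

```python
import random
from statistics import mean

def f1_score(y_true, y_pred):
    # F1 = 2*TP / (2*TP + FP + FN), for binary labels in {0, 1}.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def repeated_cv_f1(X, y, fit, predict, k=5, repetitions=10, seed=0):
    # Average F1 over `repetitions` independently shuffled rounds of k-fold CV.
    rng = random.Random(seed)
    idx = list(range(len(y)))
    scores = []
    for _ in range(repetitions):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]        # k disjoint test folds
        for fold in folds:
            train = [i for i in idx if i not in fold]
            model = fit([X[i] for i in train], [y[i] for i in train])
            preds = [predict(model, X[i]) for i in fold]
            scores.append(f1_score([y[i] for i in fold], preds))
    return mean(scores)

# Hypothetical single-feature data: the stand-in "model" is just the training
# feature mean, predicting 1 when a sample's feature exceeds it.
X = [[v] for v in range(20)]
y = [0] * 10 + [1] * 10
fit = lambda Xs, ys: mean(x[0] for x in Xs)
predict = lambda m, x: 1 if x[0] > m else 0
print(round(repeated_cv_f1(X, y, fit, predict), 3))
```

In a real comparison such as the paper's, `fit`/`predict` would wrap each of the competing learners (AHN and the four baselines) so that all models are scored on identical fold splits.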
