Abstract

In the rapidly evolving realm of artificial intelligence (AI), black-box algorithms have exhibited outstanding performance. However, their opaque nature poses challenges in fields like medicine, where clarity in the decision-making process is crucial for ensuring trust. Addressing this need, the study aimed to augment these algorithms with explainable AI (XAI) features to enhance transparency. A novel approach was employed, contrasting the decision-making patterns of black-box and white-box models. Where discrepancies were noted, the training data were refined to align the white-box model's decisions more closely with those of its black-box counterpart. Testing this methodology on three distinct medical datasets revealed consistent agreement between the adapted white-box models and their black-box analogs. Notably, integrating this strategy with established methods such as local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) further enhanced transparency, underscoring the potential value of decision trees as a favored white-box algorithm in medicine due to their inherent explanatory capabilities. The findings highlight a promising path for reconciling the performance of black-box algorithms with the need for transparency in critical decision-making domains.
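The alignment idea at the core of this approach can be sketched in a few lines: train a white-box model on the black box's own predictions, measure agreement on held-out data, and flag the disagreeing samples as the targets of a data-refinement step. The snippet below is a minimal sketch under assumed choices, not the study's actual pipeline; the scikit-learn breast-cancer dataset, the random forest standing in for the black box, and the depth-limited decision tree are all illustrative.

```python
# Minimal sketch: a decision-tree surrogate aligned to a black-box model.
# Dataset and model choices are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: strong performance, opaque internals.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# White-box surrogate: a decision tree trained on the black box's predictions,
# so its splits approximate the black box's decision pattern.
surrogate_labels = black_box.predict(X_train)
white_box = DecisionTreeClassifier(max_depth=4, random_state=0)
white_box.fit(X_train, surrogate_labels)

# Fidelity: how often the surrogate agrees with the black box on unseen data.
bb_pred = black_box.predict(X_test)
wb_pred = white_box.predict(X_test)
agreement = np.mean(wb_pred == bb_pred)
print(f"surrogate/black-box agreement: {agreement:.3f}")

# Disagreements mark the regions where the training data would be refined
# (relabeled or reweighted) to pull the surrogate closer to the black box.
n_disagree = int(np.sum(wb_pred != bb_pred))
print(f"disagreeing test samples: {n_disagree}")
```

Capping the tree's depth trades a little fidelity for a model shallow enough to be read as a set of decision rules, which is the property that makes decision trees attractive as the white-box counterpart in clinical settings.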
