In recent years, we have witnessed both artificial intelligence achieving remarkable results in clinical decision support systems (CDSSs) and explainable artificial intelligence (XAI) improving the interpretability of these models. In turn, this fosters adoption by medical personnel and improves the trustworthiness of CDSSs. Counterfactual explanations are one XAI technique particularly well suited to the healthcare domain due to their ease of interpretation, even for less technically proficient staff. However, the generation of high-quality counterfactuals relies on generative models for guidance. Unfortunately, training such models requires amounts of data beyond the means of individual hospitals. In this paper, we therefore propose to use federated learning to allow multiple hospitals to jointly train such generative models while maintaining full data privacy. We demonstrate that counterfactuals generated by our approach are superior to locally generated ones. Moreover, we show that generative models for counterfactual generation trained via federated learning in a suitable environment perform only marginally worse than centrally trained ones while offering the benefit of data privacy preservation. Finally, we integrate our method into a prototypical CDSS for treatment recommendation for sepsis patients, providing a proof of concept for real-world application as well as insights and sanity checks from clinical use.
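
To make the training setup concrete, below is a minimal sketch of how multiple hospitals might jointly train a generative model via federated averaging (FedAvg), under the assumption of an autoencoder as the generative model. The abstract does not specify the protocol or architecture, so all names here (`Autoencoder`, `local_update`, `fed_avg`) and the architecture itself are illustrative assumptions, not the authors' implementation.

```python
# Illustrative FedAvg sketch: hospitals train locally, a server averages weights.
# Raw patient data never leaves a hospital; only model parameters are shared.
import copy
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Stand-in generative model whose reconstructions could guide counterfactual search."""
    def __init__(self, n_features: int = 32, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def local_update(model, data, epochs=5, lr=1e-3):
    """One hospital's local training round on its own cohort."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), data).backward()  # reconstruction loss
        opt.step()
    return model.state_dict()

def fed_avg(global_model, client_datasets, rounds=10):
    """Server loop: broadcast global weights, collect local updates, average them.
    Equal-weight averaging assumes similarly sized cohorts; FedAvg proper weights
    each client by its number of samples."""
    for _ in range(rounds):
        states = [local_update(copy.deepcopy(global_model), data)
                  for data in client_datasets]
        averaged = {key: torch.stack([s[key] for s in states]).mean(dim=0)
                    for key in states[0]}
        global_model.load_state_dict(averaged)
    return global_model

# Usage: three hospitals, each with a synthetic local cohort (placeholder data).
hospital_data = [torch.randn(64, 32) for _ in range(3)]
shared_model = fed_avg(Autoencoder(), hospital_data)
```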