Abstract

The use of ensembles in machine learning (ML) has considerably increased the accuracy and stability of predictors. This increase in accuracy has come at the cost of comprehensibility since, by definition, an ensemble model is considerably more complex than its component models. This matters for decision support systems in medicine, where there is a reluctance to use models that are essentially black boxes. Work on making ensembles comprehensible has so far focused on global models that mirror the behaviour of the ensemble as closely as possible. With such global models there is a clear trade-off between comprehensibility and fidelity. In this paper we pursue another tack, looking at local comprehensibility, where the output of the ensemble is explained on a case-by-case basis. We argue that this meets the requirements of medical decision support systems. The approach presented here identifies the ensemble members that best fit the case in question and presents the behaviour of these members as the explanation.
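Since the full text is not available here, the sketch below illustrates only the idea in the abstract's final sentence: score each ensemble member by how well it fits the query case, keep the best few, and show their behaviour as a local explanation. The choice of a random forest, the confidence-based fitness score, the top-k selection, and the use of scikit-learn are all illustrative assumptions, not the paper's actual method.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import export_text

    X, y = load_breast_cancer(return_X_y=True)
    ensemble = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    case = X[:1]                                  # the case to be explained
    pred = ensemble.predict(case)[0]              # the ensemble's prediction
    cls = list(ensemble.classes_).index(pred)     # column of that class in predict_proba

    # Score each member by its confidence in the ensemble's prediction for
    # this case, and keep the k members that best "fit" it. (This fitness
    # criterion is an assumption made for illustration.)
    k = 3
    scores = [tree.predict_proba(case)[0][cls] for tree in ensemble.estimators_]
    best = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]

    # Present the behaviour of the selected members: an individual decision
    # tree's rules are directly readable, unlike the forest as a whole.
    for i in best:
        print(f"--- member {i}, confidence {scores[i]:.2f} ---")
        print(export_text(ensemble.estimators_[i], max_depth=2))

The key point this sketch captures is that each explanation is local: a different case may select different members, so the explanation tracks the ensemble's behaviour for that case rather than approximating the whole ensemble with a single global model.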
