Abstract

Identification of disease subtypes and corresponding biomarkers can substantially improve clinical diagnosis and treatment selection. Discovering these subtypes in noisy, high-dimensional biomedical data is often impossible for humans and challenging for machines. We introduce a new approach to facilitate the discovery of disease subtypes: Instead of analyzing the original data, we train a diagnostic classifier (healthy vs. diseased) and extract instance-wise explanations for the classifier’s decisions. The distribution of instances in the explanation space of our diagnostic classifier amplifies the different reasons for belonging to the same class, resulting in a representation that is uniquely useful for discovering latent subtypes. We compare our ability to recover subtypes via cluster analysis on model explanations to classical cluster analysis on the original data. In multiple datasets with known ground-truth subclasses, particularly on UK Biobank brain imaging data and transcriptome data from the Cancer Genome Atlas, we show that cluster analysis on model explanations substantially outperforms the classical approach. While we believe clustering in explanation space to be particularly valuable for inferring disease subtypes, the method is more general and applicable to any kind of subtype identification.

Highlights

  • Identification of disease subtypes and corresponding biomarkers can substantially improve clinical diagnosis and treatment selection

  • We propose a novel space that we believe to be useful for identifying latent subtypes: the space of explanations corresponding to a diagnostic classifier

  • We argue that the explanation space of a diagnostic classifier is an appropriate embedding space for subsequent cluster analyses aimed at the discovery of latent disease subtypes

Introduction

Identification of disease subtypes and corresponding biomarkers can substantially improve clinical diagnosis and treatment selection. Discovering these subtypes in noisy, high-dimensional biomedical data is often impossible for humans and challenging for machines. Recent interest in explaining the output of complex machine learning models has been characterized by a wide range of approaches[8, 9], most of them focused on providing an instance-wise explanation of a model’s output as either a subset of input features[10, 11] or a weighting of input features[12, 13]. The latter, where each input feature is weighted according to its contribution to the underlying model’s output for an instance, can be thought of as specifying a transformation from feature space to an explanation space. In the case of a diagnostic classifier (healthy vs. diseased), the explanation space relates to the investigated disease.
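
To make this transformation concrete, the sketch below builds a toy cohort containing two hidden diseased subtypes, trains a diagnostic classifier, maps each instance into explanation space with a simple coefficient-times-input attribution, and compares subtype recovery by clustering in explanation space versus the original features. The model choice, attribution rule, synthetic data, and all variable names are illustrative assumptions for this sketch, not the exact pipeline used in the paper.

    # Minimal sketch: clustering in explanation space vs. original feature space.
    # Assumptions: logistic regression as the diagnostic classifier and
    # coefficient * input as the instance-wise attribution; data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)

    # Toy cohort: healthy controls plus two diseased subtypes, each driven by
    # a different block of features.
    n, p = 600, 50
    X = rng.normal(size=(n, p))
    subtype = rng.integers(0, 3, size=n)          # 0 = healthy, 1/2 = diseased subtypes
    X[subtype == 1, :10] += 1.5                   # subtype 1 shifts features 0-9
    X[subtype == 2, 10:20] += 1.5                 # subtype 2 shifts features 10-19
    y = (subtype > 0).astype(int)                 # only the diagnostic label is observed

    # 1) Train the diagnostic classifier (healthy vs. diseased).
    X_std = StandardScaler().fit_transform(X)
    clf = LogisticRegression(max_iter=1000).fit(X_std, y)

    # 2) Instance-wise explanations: each feature weighted by its contribution.
    #    For a linear model, coefficient * input is one simple attribution.
    explanations = X_std * clf.coef_              # shape (n, p): the explanation space

    # 3) Cluster the diseased instances in explanation space and in the
    #    original feature space, then compare against the hidden subtypes.
    diseased = y == 1
    labels_expl = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(explanations[diseased])
    labels_orig = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std[diseased])

    print("ARI, explanation space:", adjusted_rand_score(subtype[diseased], labels_expl))
    print("ARI, original features:", adjusted_rand_score(subtype[diseased], labels_orig))

In this toy setting the attribution simply rescales each feature by the classifier’s learned weight; for non-linear models, attribution methods that weight input features by their contribution to the prediction (e.g., SHAP or layer-wise relevance propagation) would play the analogous role.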

