Abstract

Accurate computational models for clinical decision support systems require clean and reliable data but, in clinical practice, data are often incomplete. Hence, missing data can arise not only in training datasets but also in test datasets, which may consist of a single undiagnosed case, an individual. This work addresses the problem of extreme missingness in both training and test data by evaluating multiple imputation and classification workflows in terms of both diagnostic classification accuracy and computational cost. Extreme missingness is defined as having ∼50% of the total data missing in more than half of the data features. In particular, we focus on dementia diagnosis due to long time delays, high variability, high attrition rates and the lack of practical data imputation strategies in its diagnostic pathway. We identified and replicated the extreme missingness structure of data from a real-world memory clinic on a larger open dataset, with the original complete data acting as ground truth. Overall, we found that computational cost, but not accuracy, varies widely across imputation and classification approaches. In particular, we found that iterative imputation on the training dataset combined with a reduced-feature classification model provides the best approach in terms of speed and accuracy. Taken together, this work has elucidated important factors to consider when developing a predictive model for a dementia diagnostic support system.

Highlights

  • The issue of missing data is one of the most ubiquitous concerns in data science [1]

  • Most of the selected Cognitive and Functional Assessments (CFAs) were completed by study partners, who accompanied the patients to the study site throughout the Alzheimer’s Disease Neuroimaging Initiative (ADNI) study, rather than by the patients themselves (Table I, column 2)

  • We investigated the missingness in our memory clinic data, in order to reproduce the same missingness patterns in the ADNIMERGE data


Introduction

The issue of missing data is one of the most ubiquitous concerns in data science [1]. This is certainly the case for clinical and medical data, which frequently contain many missing values [2]–[4] (see Fig. 1a for a real-world, routine (i.e. not clinical trial) Alzheimer’s disease (AD) dataset). Computational models must therefore incorporate a strategy (a method or combination of methods) for handling missing data as part of their analytical workflow.
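As a concrete illustration of one such strategy, the round-robin iterative imputation evaluated in this work (in the spirit of MICE) can be sketched in a few lines. The function below is a hypothetical, NumPy-only illustration, not the authors' implementation: missing entries are initialised with column means, then each incomplete column is repeatedly re-predicted by linear regression on the remaining columns.

```python
import numpy as np

def iterative_impute(X, n_iters=10):
    """Minimal round-robin iterative imputation (MICE-style sketch).

    NaN entries are first filled with column means, then each column
    containing missing values is repeatedly re-estimated by ordinary
    least squares on all other columns.
    """
    X = np.asarray(X, dtype=float).copy()
    mask = np.isnan(X)                                  # remember missing positions
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.nonzero(mask)[1])   # mean initialisation

    for _ in range(n_iters):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.hstack([others, np.ones((X.shape[0], 1))])  # add intercept term
            # Fit on rows where column j was actually observed
            coef, *_ = np.linalg.lstsq(A[~miss], X[~miss, j], rcond=None)
            X[miss, j] = A[miss] @ coef                 # refresh the missing entries
    return X
```

For example, given a column that is an exact linear function of another, the imputed value recovers the underlying relationship rather than the column mean; real iterative imputers (e.g. scikit-learn's `IterativeImputer`) follow the same outline with regularised regressors and convergence checks.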
