Abstract

Instantial variation (IV) refers to variation that is due not to population differences or errors, but rather to within-subject variation, that is, the intrinsic and characteristic patterns of variation pertaining to a given instance or to the measurement process. Although accounting for IV is critical for the proper analysis of results, this source of uncertainty and its impact on robustness have so far been neglected in Machine Learning (ML). To fill this gap, we examine how IV affects ML performance and generalization, and how its impact can be mitigated. Specifically, we provide a methodological contribution that formalizes the problem of IV in the statistical learning framework. To demonstrate the relevance of this contribution, we focus on one of the most critical application domains, healthcare, and take individual (analytical and biological) variation as a specific kind of IV; in this domain, we use one of the largest real-world laboratory medicine datasets for the task of COVID-19 detection to show that: (1) common state-of-the-art ML models are severely impacted by the presence of IV in data; and (2) advanced learning strategies, based on data augmentation and soft computing methods (data imprecisiation), together with proper study designs, can be effective at improving robustness to IV. Our findings demonstrate the critical importance of correctly accounting for IV to enable the safe deployment of ML in real-world settings.
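
The abstract does not specify the augmentation mechanism; as a minimal sketch, one common way to simulate within-subject (analytical plus biological) variation in laboratory data is to jitter each feature with multiplicative Gaussian noise scaled by an assumed within-subject coefficient of variation (CV). The function name `augment_with_iv` and all CV values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def augment_with_iv(X, cv_within, n_copies=5, rng=None):
    """Sketch of IV-aware data augmentation (illustrative, not the paper's code).

    Each measurement x is replaced by x * (1 + eps) with eps ~ N(0, CV),
    a standard model of proportional within-subject variability.

    X         : (n_samples, n_features) array of measurements
    cv_within : (n_features,) assumed within-subject CVs
                (analytical + biological), expressed as fractions
    n_copies  : number of perturbed copies generated per sample
    """
    rng = np.random.default_rng(rng)
    copies = []
    for _ in range(n_copies):
        # scale broadcasts per feature across the whole sample matrix
        noise = rng.normal(loc=0.0, scale=cv_within, size=X.shape)
        copies.append(X * (1.0 + noise))
    return np.concatenate(copies, axis=0)

# Illustrative use with hypothetical analytes and assumed CVs.
X = np.array([[4.5, 140.0, 13.2],
              [6.1, 138.0, 11.8]])      # e.g. WBC, sodium, hemoglobin
cv = np.array([0.10, 0.01, 0.03])       # assumed within-subject CVs
X_aug = augment_with_iv(X, cv, n_copies=3, rng=0)
```

Training on such perturbed copies exposes the model to the range of values a single subject could plausibly produce, which is one way the robustness-to-IV strategies described above could be realized; the imprecisiation approach mentioned in the abstract instead replaces point values with intervals or fuzzy sets, and is not shown here.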
