Abstract

This article expands on recent studies of machine learning and artificial intelligence (AI) algorithms that crucially depend on benchmark datasets, often called 'ground truths.' These ground-truth datasets pair input data with output targets, thereby establishing what can be retrieved computationally and evaluated statistically. I explore the case of the Tumor nEoantigen SeLection Alliance (TESLA), a consortium-based ground-truthing project in personalized cancer immunotherapy, where the 'truth' of the targets to be retrieved by the would-be AI algorithms (immunogenic neoantigens) depended on a broad technoscientific network whose establishment required substantial organizational and material infrastructures. The study shows that, rather than grounding an indisputable 'truth,' the TESLA endeavor ended up establishing a contestable reference: the biology of neoantigens, and how to measure their immunogenicity, evolved over the course of this four-year project. Yet even though this controversy narrowed the scope of the TESLA ground truth, it did not discredit the undertaking as a whole. The magnitude of the technoscientific efforts that the TESLA project set in motion, and the needs it ultimately succeeded in filling for the scientific and industrial community, counterbalanced its metrological uncertainties, effectively instituting its contestable representation of 'true' neoantigens within the field of personalized cancer immunotherapy (at least temporarily). More generally, this case study indicates that the enforcement of ground truths, and what it leaves out, is a necessary condition for enabling AI technologies in personalized medicine.
