Abstract

This article examines how platform-based AI competitions structure power relations in medical imaging research. It focuses on two leading platforms, Kaggle and Grand Challenge, which provide organisational as well as infrastructural support for running AI competitions. In dialogue with critical AI and platform studies research, we investigate how such competitions are organised – under which infrastructural conditions and by whom – and how this shapes processes of model production and evaluation. To address these concerns, we collected data from 118 medical image AI competitions on Kaggle and Grand Challenge, organised between January 2017 and May 2022. In addition, we gathered a variety of platform boundary resources – platform documentation, competition descriptions, dataset descriptions, and competition leaderboards. The analysis of these materials shows, first, that platforms direct the AI development process by requiring substantial financial resources and by defining which institutions can host a competition and under which conditions. Second, competition organisers define dataset diversity and the generalisability of models. As most datasets are constructed with data from hospitals in North America, Western Europe and China, the application of models to other geographical contexts is potentially limited. Finally, competition participants influence model development through the institutional, demographic, and disciplinary contexts in which they operate. Overall, the examination demonstrates the importance of critically interrogating the entire medical AI research pipeline, including the definition of research problems, the construction of datasets, and model production and evaluation.
