In this editorial we highlight the need for empirical studies of the growing use of artificial intelligence (AI) technology in healthcare and social work settings, especially studies theoretically informed by critical social science research on risk and uncertainty. In setting out the importance of interpretative and critical traditions for research into such AI-oriented forms of risk work, we propose three conceptual lines of inquiry which empirical studies might follow. First, we sketch ways in which the enactment of AI in healthcare work may be changing how risk is handled in professional decision-making and creating new categories of patient/service-user: patients may be evaluated as being at lower or higher risk depending, respectively, on their engagement or non-engagement with AI technologies. These questions of (non-)engagement lead us to consider, second, the dynamics of trust and distrust around AI technologies, exploring the inequalities that can emerge as a result of (non-)engagement. Third, we consider the drivers of this technological embrace in terms of hope and magical thinking within technological imaginaries, connecting these cultural tendencies to broader structures of ideology and political-economic interests. We conclude with a plea to social scientists to avoid both techno-optimistic narratives and alarmist warnings regarding the implications of AI. Instead, we argue, our focus should be a theoretically informed and detailed examination of how expectations pertaining to risk, trust, and hope materialise in practice, particularly in the daily experiences of those who develop and enact AI technologies in care settings.