Abstract

Different people hold different perceptions of artificial intelligence (AI). Bringing together the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—is essential to a proper understanding of AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed from such a sociotechnical perspective. First, as intelligent machines reveal their nature as ‘magnifying glasses’ that automate existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. Though not panaceas, these all contribute to ensuring human control through novel practices that include requirements, design, and development methodologies for a fairer AI. Second, we elaborate on the mounting attention to technological narratives, as technology comes to be recognized as a social practice within a specific institutional context. Narratives not only reflect organizing visions for society but are also a tangible sign of the traditional lines of social, economic, and political inequality. We conclude with a call for a diverse approach within the AI community and a richer knowledge of narratives, as both help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and will benefit from a sociotechnical perspective.

Highlights

  • Artificial intelligence (AI) is not a new field; it has just reached a new ‘spring’ after one of its many ‘winters’ (Boden, 2016; Floridi, 2020)

  • We identify and review three core challenges: (i) the opaque nature of machines; (ii) guaranteeing respect for human agency and control over our autonomous artefacts; and (iii) the link to inequalities, both as input to and output of artificial intelligence (AI) systems

  • Looking at two fundamental structural sources of inequality, gender and race, we can see evidence of how AI systems are far from neutral—let alone fair (Buolamwini & Gebru, 2018; Benjamin, 2019; Edelman et al., 2017; Hu, 2017; Kleinberg et al., 2019; Noble, 2018; Zhang et al., 2021)



Introduction

Artificial intelligence (AI) is not a new field; it has just reached a new ‘spring’ after one of its many ‘winters’ (Boden, 2016; Floridi, 2020). As a matter of fact, we might be on the brink of a new winter, since different actors (firms, individuals, media, and institutions) have concretely started questioning the over-inflated expectations. The multiple ongoing narratives may play a part here, including those of moving from the traditional ‘black-box approach’ to the use of transparent and explainable methods (Guidotti, 2019a, 2019b), and the ‘scary’—but improbable—prospects of creating a […]. Calls for responsible AI are mounting and are finally shedding light on many overlooked cross-cutting social issues. It is crucial to stop and think differently about our autonomous systems by considering them AI socio-technical systems, i.e. the combination of a technical component (the code and—if used—the data) and social elements.

Conclusions and future work