Abstract

This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses on artificial intelligence (AI) and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users from whom data can be harvested at scale, and that it leads to the discursive construction of trust as an essential good on a par with data as raw material. This discursive commodification is marked by a decreasing emphasis on trust understood as the expected reliability of a trusted agent, and an increasing emphasis on instrumental and extractive framings of trust as a resource. This tendency, we argue, does an ultimate disservice to developers, users, and systems alike, insofar as it obscures the subtle mechanisms through which trust in AI systems might be built, making it less likely that such trust will in fact be built.

Highlights

  • Trust is all the rage in thinking about artificial intelligence (AI) and society

  • We find an overwhelming emphasis on the conditions for developing AI systems and unlocking markets, which conflates the interests and agendas of AI users, experts, and regulators with those of technology vendors, venture capitalists, and financial backers, even though the latter's interests in advancing AI might be diametrically opposed to the former's

  • This emphasis has the effect of obfuscating the conditions under which trust between individuals and AI systems is developed and earned



Introduction

Trust is all the rage in thinking about artificial intelligence (AI) and society. It dominates international principles and guidelines developed by the private sector (IBM 2018) and by multilateral organizations (OECD 2019). Just as social media platforms wither away when they are not fed with the interactions of a critical mass of people, AI will not develop sufficient intelligence, remaining dumb and unreliable, when left unused. Such "bad AI" can feed a vicious circle in which already weak public trust is damaged further by the insufficient number of interactions a system receives (Reisman et al. 2018). The coupling of strong incentives to advance AI systems with the inherent uncertainty surrounding AI's societal impact leads to a polarized dynamic in which a skeptical public (Anderson et al. 2018) is matched by "AI cheerleaders" working to normalize socio-technical systems on behalf of private interests (Bourne 2019). The risk that this polarization slides into paternalistic models of technological governance is very high (Cardullo and Kitchin 2019; Oravec 2019) and should be taken seriously.

The fifth section concludes with an assessment of what trust has come to mean in the contemporary discourse and offers critical suggestions about how to counter the biases that this discourse produces, in order to safeguard the common good and public well-being.

Theoretical background and method
What we talk about when we talk about trust
Problems with trust
Trust is a resource
Trust is an instrument
Conclusion
