While tech workers are essential stakeholders in ethical artificial intelligence (AI) development and deployment, they are rarely consulted about their understanding of what ethical AI development entails. In light of this, we present the findings of our 2020–2021 empirical research study, in which we collected data from tech workers in a major AI company to better understand what they consider to be the most pressing ethical issues when developing AI-powered products. While there is a nascent body of literature examining how AI ethics principles are operationalised on the ground, this study differs in that we explicitly draw on feminist insights to inform our analysis and place particular emphasis on allowing the voices and narratives of tech workers to lead the work forward. Our study generated three main findings: first, the term ‘bias’ creates real confusion among tech workers, meaning it is unable to do the ethical work intended of it; second, tech workers do not necessarily see a relationship between diversity, equality and inclusion (DEI) agendas and AI development, undermining AI ethics initiatives; and third, tech workers are particularly concerned about the monitoring and maintenance of unwieldy ‘legacy systems’, which generate serious challenges to creating and deploying new and more ethical AI products. This study thus creates a ‘thicker’ and more nuanced picture of tech workers’ perspectives on the ethical issues that arise when developing and maintaining AI systems, while simultaneously demonstrating the utility of feminist approaches in the field of AI ethics.