Abstract

How do we assess the positive and negative impacts of research about, or research that employs, artificial intelligence (AI), and how adequate are existing research governance frameworks for these ends? This concern has received significant recent attention, with various calls for change and a plethora of emerging guideline documents across sectors. However, it is not clear what kinds of issues are expressed in the ethics of research with or on AI at present, nor how resources are drawn on to support the navigation of ethical issues. Research Ethics Committees (RECs) have a well-established history in ethics governance, but concerns have been raised about their capacity to adequately govern AI research. To date, however, no study has examined the ways that AI-related projects engage with the ethics ecosystem, or that ecosystem's adequacy for this context. This paper analyses a single institution's ethics applications for research related to AI, applying a socio-material lens. Our novel methodology provides an approach to understanding ethics ecosystems across institutions. Our results suggest that existing REC models can effectively support the consideration of ethical issues in AI research; we therefore propose that any new materials be embedded in this existing, well-established ecosystem.
