Abstract

The sudden and meteoric rise of generative Artificial Intelligence (genAI) has raised fundamental concerns for universities. Using Bacchi’s methodology on ‘problematisation’, we analyse which concerns Danish universities have addressed through their policies and guidelines. We identify three key problematisations: assessment integrity, legality of data and veracity. While each of these problematisations involves specific limitations, together they also strongly emphasise symbolic and epistemological issues and consequently mostly ignore the materiality of genAI, for example, in terms of labour and energy use. Drawing on critical AI studies, this article argues that universities should also consider the huge planetary costs that (gen)AI poses, as well as the full range of AI’s exploitative business models and practices. Universities should integrate these considerations into both their decision-making on (not) using certain technologies and their policies and guidelines for research and teaching, just as sustainability is already a criterion in their travel or investment policies today.
