Abstract

Artificial intelligence (AI) is increasingly integrating into our society. University education needs to maintain its relevance in an AI‐mediated world, but the higher education sector is only beginning to engage deeply with the implications of AI within society. We define AI according to a relational epistemology, where, in the context of a particular interaction, a computational artefact provides a judgement about an optimal course of action, and this judgement cannot be traced. Therefore, by definition, AI must always act as a ‘black box’. Rather than seeking to explain ‘black boxes’, we argue that a pedagogy for an AI‐mediated world involves learning to work with opaque, partial and ambiguous situations, which reflect the entangled relationships between people and technologies. Such a pedagogy asks learners to locate AI as socially bounded, where AI is always understood within the contexts of its use. We outline two particular approaches to achieve this: (a) orienting students to the quality standards that surround AIs, what might be called the tacit and explicit ‘rules of the game’; and (b) providing meaningful interactions with AI systems.

Practitioner notes

What is already known about this topic
- Artificial intelligence (AI) is conceptualised in many different ways but is rarely defined in the higher education literature.
- Experts have outlined a range of graduate capabilities for working in a world of AI, such as teamwork or ethical thinking.
- The higher education literature outlines an imperative need to respond to AI, as underlined by recent commentary on ChatGPT.

What this paper adds
- A relational definition of AI: a particular interaction in which a computational artefact provides a judgement about an optimal course of action, and this judgement cannot be easily traced.
- A focus on working with AI black boxes rather than trying to see inside the technology.
- A description of a pedagogy for an AI‐mediated world that promotes working in complex situations with partial and indeterminate information.

Implications for practice and/or policy
- Focusing on quality standards helps learners understand the social regulating boundaries around AI.
- Promoting learner interactions with AI as part of a sociotechnical ensemble helps build evaluative judgement in weighing AI's contribution to work.
- Asking learners to work with AI systems prompts understanding of the evaluative, ethical and practical necessities of working with a black box.