ABSTRACT

While AI systems increasingly assume roles traditionally occupied by human epistemic authorities (EAs), their epistemological status remains unclear. This paper addresses this lacuna by assessing whether AI systems can be recognized as artificial epistemic authorities. In a first step, I examine the arguments against treating AI systems as EAs, in particular the established model of EAs as agents who intentionally transfer beliefs via testimony to laypeople, a process seemingly inapplicable to intentionless and beliefless AI. Despite this, AI systems exhibit striking epistemic parallels with human EAs, including epistemic asymmetry and opacity, which give rise to comparable challenges for laypeople and AI users alike. These challenges include the identification problem (how to recognize reliable EAs or AI systems) and the deference problem (what epistemic stance to adopt towards EAs or AI systems). Faced with this tension, I discuss three possible responses: (1) reject the concept of artificial EAs; (2) accept that AI can possess beliefs and intentions, and thus fits the standard model; or (3) develop an alternative model that encompasses artificial EAs. I argue that while each option has its benefits and costs, a particularly strong case can be made for option (3).