Abstract

In this paper, I argue that the only kind of trust that should be allocated to AI technologies, if any at all, is epistemic trust. Epistemic trust is defined by Wilholt (Br J Philos Sci 64:233–253, 2013) as the kind of trust that is allocated strictly in virtue of the capacities of the receiver as a provider of information or as a conveyer of knowledge. If, as Alvarado (2022, http://philsci-archive.pitt.edu/id/eprint/21243) argues, AI is first and foremost an epistemic technology—a technology designed, developed and deployed particularly and especially to expand our capacities as knowers—then it follows that we trust AI, when we trust it, exclusively in its capacities as a provider of information and as an epistemic enhancer. Trusting it otherwise may betray conceptual confusion. As I will show, it follows that trust in AI cannot be modeled after any general kind of interpersonal trust, after trust in other technologies such as pharmaceuticals, or after the kind of trust we allocate to medical practitioners in their capacities as providers of care. It also follows that, even once epistemic trust is established as the only legitimate kind of trust to allocate to epistemic technologies, whether AI can, in fact, be trusted remains an open question.
