Abstract

With the increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject of trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? Finally, human–AI trust is criticized on the basis of a conception of human–human trust, which does not recognize the distinctiveness of the human–AI relationship. This article aims to refute these counterarguments based on the genealogical analyses of ‘trust’ and ‘trustworthiness’ of Karen Jones and Thomas Simpson, who show that trust and trustworthiness help to overcome vulnerabilities. This function of trust gives reason to use human–human trust as a standard. For this function, it is important that trustees are responsive to trust. While animals and institutions can be responsive, narrow AI systems are unable to be responsive to trust. Therefore, we should not apply trust to AI and should instead direct our trust to those who can be responsive to, and held responsible for, our trust.
