Abstract

Engineering trustworthy artificial intelligence (AI) is important for adoption and appropriate use, but implementing trustworthy AI systems poses challenges. Trust studies are difficult to translate from the laboratory to the field, and "trustworthy AI" frameworks and principles are difficult to operationalize in ways that inform the actual development of AI. We address these challenges with an approach grounded in reported incidents of trust loss "in the wild." We systematically identified 30 cases of trust loss in the AI Incident Database to gain insight into how and why humans lose trust in AI across a variety of contexts. The factors underlying these incidents could be codified into the development cycle in forms such as checklists and design patterns, helping teams manage trust in AI systems and avoid similar incidents in the future. Because it is grounded in real incidents, this approach yields recommendations that are concrete and actionable for teams addressing real use cases with AI systems.
