Abstract

Current surveys indicate limited public and individual trust in autonomous vehicles, despite a long tradition of ensuring their (technical) trustworthiness in informatics and systems engineering. This article explores the underlying reasons for this trust gap. It elaborates on the divide between trust understood as a social phenomenon and, in contrast, the research tradition aimed at guaranteeing (technical) trustworthiness, and discusses to what extent the relevant research traditions in the social sciences and humanities have been recognized and reflected in systems engineering research to date. According to the current state of research in the social sciences and humanities, trust relies heavily on individual assessments of an autonomous vehicle's abilities, benevolence and integrity. By contrast, technical trustworthiness is defined as the sum of intersubjective, measurable technical parameters that describe specific abilities or properties of a system, often in accordance with technical standards and norms. This article places the “explainability” of autonomous systems in a bridging role: explainability can help to conceptualize an integrative trust layer that communicates a system's abilities, benevolence and integrity. As such, explainability should respect the individual and situational needs of users and should therefore be responsive. In conclusion, the results demonstrate that “learning from life” requires extensive interdisciplinary collaboration with neighboring research fields. This novel perspective on trustworthiness aligns existing research areas, delves deeper into the conceptual “how”, examines the intricacies, and highlights (missing) interconnectedness in the state of research.
