Abstract
Establishing an appropriate level of trust between people and AI systems is crucial to avoiding the misuse, disuse, or abuse of AI. Achieving this requires understanding how AI systems can generate appropriate levels of trust among users. This study focuses on the impact of displaying integrity, one of the factors that influence trust. Specifically, it analyzes how different integrity-based explanations provided by an AI agent affect a human's appropriate level of trust in the agent. To explore this, we conducted a between-subjects user study with 160 participants who collaborated with an AI agent to estimate the calories on a food plate, with the agent expressing its integrity in different ways through its explanations. The preliminary results demonstrate that an AI agent that explicitly acknowledges honesty in its decision-making process elicits higher subjective trust than agents that are transparent about their decision-making process or fair about their biases. These findings can aid in designing agent-based AI systems that foster appropriate trust from humans.