Abstract
This paper offers a novel perspective on trust in artificial intelligence (AI) systems, focusing on how user trust in AI creators transfers to trust in the AI systems themselves. Using the agentic information systems (IS) framework, we investigate the role of AI alignment and steerability in trust transference. Through four randomized experiments, we probe three key alignment-related attributes of AI systems: creator-based steerability, user-based steerability, and autonomy. Results indicate that creator-based steerability amplifies trust transference from the AI creator to the AI system, while user-based steerability and autonomy diminish it. Our findings suggest that AI alignment efforts should consider whose goals and values an AI system is aligned with, and they highlight the need for research to theorize from a triadic view encompassing the user, the AI system, and its creator. Given the diversity in individual goals and values, we recommend that developers move beyond the prevailing “one-size-fits-all” alignment strategy. Our findings contribute to trust transference theory by identifying the boundary conditions under which trust transference holds or breaks down in the emerging human-AI environment.