Abstract
Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks: it may increase deception, compromise authenticity or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches to prevent and reduce potentially unacceptable uses of AI communication technologies. However, we do not yet know what people believe is acceptable or what their expectations are regarding usage. Drawing on normative psychology theories, we examine people's judgements of the acceptability of open and secret AI use, as well as people's expectations of their own and others' use. In two studies with representative samples (Study 1: N = 477; Study 2: N = 765), we find that people are less accepting of secret than open AI use in communication, but only when directly compared. Our results also suggest that people believe others will use AI communication tools more than they would themselves, and that people do not expect others' use to align with their expectations of what is acceptable. While much attention has been focused on transparency measures, our results suggest that self-other differences are a central factor for understanding people's attitudes and expectations for AI-mediated communication.
Published in: British Journal of Psychology