Abstract

Artificial intelligence (AI) agency plays an important role in shaping humans’ perceptions and evaluations of AI. This study conceptually differentiates AI agency from human agency and examines how AI agency, manifested in the source and language dimensions, may be associated with humans’ perceptions of AI. A 2 (AI source autonomy: autonomous vs. human-assisted) × 2 (AI language subjectivity: subjective vs. objective) × 2 (topic: traveling vs. reading) factorial design was adopted (N = 376). The results showed that autonomous AI was rated as more trustworthy, and that AI using subjective language was rated as more trustworthy and likable. Autonomous AI using subjective language was rated as the most trustworthy, the most likable, and of the highest quality. Participants’ AI literacy moderated the interaction effect of source autonomy and language subjectivity on trust and chat quality evaluations. Results are discussed in terms of human–AI communication theories and the design and development of AI chatbots.

