Abstract

Generative AI systems like chatbots are increasingly being introduced into learning, teaching and assessment scenarios at universities. While previous research suggests that users treat chatbots like humans, computer systems are still often perceived as less trustworthy, potentially impairing their usefulness in learning contexts. How are processes of social cognition applied to chatbots compared to humans? Our study focuses on the role of politeness in communication. We hypothesise that polite communication improves the perceived trustworthiness of chatbots. University students read a feedback dialogue between a student and a feedback provider. In a 2 × 2 between-subjects experimental design, we manipulated the feedback's author (chatbot vs. human teacher) and the feedback formulation (polite vs. direct). Participants evaluated the feedback giver on measures of epistemic trustworthiness (expertise, benevolence and integrity) and on two basic dimensions of social cognition, namely agency and communion. Results showed that a polite feedback giver was rated higher on benevolence and communion, whereas a direct feedback giver was rated higher on agency. Unexpectedly, the chatbot was rated lower on benevolence than the human. This suggests that social cognition does apply to interactions with chatbots, albeit with caveats. We discuss the findings with regard to the design of feedback chatbots and their use in higher education.

Practitioner notes

What is already known about this topic
- Technology users tend to treat computer systems like humans, but computers are usually trusted less.
- Polite communication, that is, the mitigation of face threats, is expected to enhance the evaluation of a chatbot as trustworthy.
- The research is relevant for the use and acceptance of chatbots as feedback providers in educational contexts.

What this paper adds
- We test the assumption that polite language reduces the gap in epistemic trustworthiness between chatbots and human teachers as feedback givers.
- We describe an empirical study with 284 university student participants who report their perceptions of a feedback dialogue between a student and either a human teacher or a chatbot.
- We analyse the impact of feedback source as well as politeness on trustworthiness perceptions and social cognition.

Implications for practice and/or policy
- The study confirms that users are receptive to politeness in communication and treat chatbots in a similar manner to human interaction partners.
- The results highlight the significance of the politeness of chatbots' language in learning contexts.
- Feedback chatbots need to be equipped with suitable linguistic strategies, such as politeness, for communicating in a socially appropriate manner at critical points in the instructional dialogue.