Abstract

This paper examines ChatGPT's use of evaluative language and engagement strategies when addressing information-seeking queries, and assesses the chatbot's role as a virtual teaching assistant (VTA) across various educational settings. Drawing on Appraisal theory, the analysis contrasts responses generated by ChatGPT with those provided by human contributors, focusing on the interactants’ attitudes, deployment of interpersonal metaphors and evaluations of entities, which reveal their views on Australian cultural practices. Two datasets were analysed: the first sample (15,909 words) was retrieved from the subreddit r/AskAnAustralian, and the second (10,696 words) was obtained by prompting ChatGPT with the same questions. The findings show that, while human experts mainly opt for subjective explicit formulations to express personal viewpoints, the chatbot favours incongruent ‘it is’-constructions to share pre-programmed perspectives, which may reflect ideological bias. Even though ChatGPT displays promising socio-communicative capabilities (SCs), its lack of the contextual awareness required to function cross-culturally as a VTA may lead to considerable ethical issues. The study's novel contribution lies in its in-depth investigation of how the chatbot's SCs and lexicogrammatical selections may impact its role as a VTA, highlighting the need to develop students’ critical digital literacy skills when using AI learning tools.
