Abstract
With the continuous development of technological and educational innovation, learners nowadays can obtain a variety of supports from agents such as teachers, peers, education technologies and, recently, generative artificial intelligence such as ChatGPT. In particular, there has been a surge of academic interest in human‐AI collaboration and hybrid intelligence in learning. The concept of hybrid intelligence is still at a nascent stage, and how learners can benefit from a symbiotic relationship with various agents such as AI, human experts and intelligent learning systems is still unknown. The emerging concept of hybrid intelligence also lacks deep insights into, and understanding of, the mechanisms and consequences of hybrid human‐AI learning based on strong empirical research. To address this gap, we conducted a randomised experimental study and compared learners' motivations, self‐regulated learning processes and learning performances on a writing task among groups that received support from different agents: ChatGPT (also referred to as the AI group), chat with a human expert, writing analytics tools, or no extra tool. A total of 117 university students were recruited, and their multi‐channel learning, performance and motivation data were collected and analysed. The results revealed that: (1) learners who received different learning support showed no difference in post‐task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self‐regulated learning processes among groups; (3) the ChatGPT group outperformed the others in essay score improvement, but its knowledge gain and transfer were not significantly different. Our research found that, in the absence of differences in motivation, learners with different supports still exhibited different self‐regulated learning processes, ultimately leading to differentiated performance. Particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger "metacognitive laziness". In conclusion, understanding and leveraging the respective strengths and weaknesses of different agents in learning is critical for the future of hybrid intelligence.

Practitioner notes

What is already known about this topic
Hybrid intelligence, combining human and machine intelligence, aims to augment human capabilities rather than replace them, creating opportunities for more effective lifelong learning and collaboration.
Generative AI, such as ChatGPT, has shown potential in enhancing learning by providing immediate feedback, overcoming language barriers and facilitating personalised educational experiences.
The effectiveness of AI in educational contexts varies, with some studies highlighting its benefits in improving academic performance and motivation, while others note limitations in its ability to replace human teachers entirely.

What this paper adds
We conducted a randomised experimental study in a lab setting and compared learners' motivations, self‐regulated learning processes and learning performances among different agent groups (AI, human expert and checklist tools).
We found that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger metacognitive "laziness", which can hinder their ability to self‐regulate and engage deeply in learning.
We also found that ChatGPT can significantly improve short‐term task performance, but it may not boost intrinsic motivation, knowledge gain or transfer.

Implications for practice and/or policy
When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently.
When using AI in teaching, teachers should consider which tasks are suitable for learners to complete with the assistance of AI, pay attention to stimulating learners' intrinsic motivations, and develop scaffolding to support learners in active learning.
Researchers should design multi‐task and cross‐context studies in the future to deepen our understanding of how learners can ethically and effectively learn, regulate, collaborate and evolve with AI.