Despite advances in educational technology, the specific ways in which Generative Artificial Intelligence (GAI) and Large Language Models (LLMs) cater to learners’ nuanced cognitive and emotional needs are not fully understood. This mini-review systematically describes GAI’s practical implementations and limitations in meeting these needs. It includes journal and conference papers published between 2019 and 2024, focusing on empirical studies that employ GAI tools in educational contexts while addressing their practical utility and ethical considerations. The selection criteria excluded non-English studies, non-empirical research, and works published before 2019. From the dataset retrieved from Scopus and Web of Science as of June 18, 2024, four significant studies were reviewed. These studies involved tools such as ChatGPT and emphasized their effectiveness in boosting student engagement and emotional regulation through interactive learning environments with instant feedback. Nonetheless, the review reveals substantial deficiencies in GAI’s capacity to promote critical thinking and maintain response accuracy, which can lead to learner confusion. Moreover, the ability of these tools to tailor learning experiences and offer emotional support remains limited, often falling short of individual learner requirements. The findings of the included studies have limited generalizability beyond the specific GAI versions examined, as the studies were cross-sectional and involved small participant pools. Practical implications underscore the need to develop teaching strategies that leverage GAI to enhance critical thinking, and to improve the accuracy of GAI tools’ responses. Finally, careful analysis is needed before approving interventions in which GAI does not meet acceptable error margins, to mitigate potential negative impacts on learning experiences.