A Systematic Scoping Review of Self‐Regulated Learning With AI in Language Education
ABSTRACT The rapid development of artificial intelligence (AI) technology has brought new opportunities for second/foreign language (L2) education while also raising concerns about the use of AI tools. This systematic scoping review provides a snapshot of the current state of research situated at the intersection of self‐regulated learning (SRL) and AI in L2 education. The review is restricted to empirical studies. It focuses on the current nature and extent of AI research in relation to SRL, the relationship between SRL and the use of AI, as well as new understandings of how SRL is conceptualized in AI research. The review identified severe imbalances in the geographical distribution, language skills, and research designs that feature in extant AI‐related SRL research. Three key relationships between SRL and the use of AI were identified: (1) SRL as a learning outcome from the use of AI tools, (2) SRL as a guiding framework for AI‐related pedagogical designs, and (3) SRL as a strategic process when using AI tools. Furthermore, three themes related to L2 learners, self‐regulated behaviors, and AI‐mediated environments emerged. These themes suggest a need to view AI as a ‘capable peer’ and to broaden SRL research to explore co‐regulated learning between learners and AI, capturing the dynamic interactions that unfold through processes such as iterative prompting.
- Research Article
- 10.6087/kcse.352
- Feb 5, 2025
- Science Editing
Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publisher and academic institutional policies and guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers’ policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
- Front Matter
- 10.1016/j.jval.2021.12.009
- Jan 31, 2022
- Value in Health
The Value of Artificial Intelligence for Healthcare Decision Making—Lessons Learned
- Research Article
- 10.12688/mep.20554.1
- Oct 23, 2024
- MedEdPublish
Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty; the majority [75% (167/224)] were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and QuillBot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes such as recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools than those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools than younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Research Article
- 10.3390/educsci15040461
- Apr 8, 2025
- Education Sciences
This survey study aims to understand how college students use and perceive artificial intelligence (AI) tools in the United Arab Emirates (UAE). It reports students’ use, perceived motivations, and ethical concerns and how these variables are interrelated. Responses (n = 822) were collected from seven universities in five UAE emirates. The findings show widespread use of AI tools (79.6%), with various factors affecting students’ perceptions about AI tools. Students also raised concerns about the lack of guidance on using AI tools. Furthermore, mediation analyses revealed the underlying psychological mechanisms of AI tool adoption: perceived benefits fully mediated the relationship between AI knowledge and usefulness perceptions, peer pressure mediated the relationship between academic stress and AI adoption intent, and ethical concerns fully mediated the relationship between ethical perceptions and support for institutional AI regulations. The findings of this study provide implications for the opportunities and challenges posed by AI tools in higher education. This study is one of the first to provide empirical insights into UAE college students’ use of AI tools, examining mediation models to explore the complexity of their motivations, ethical concerns, and institutional guidance. Ultimately, this study offers empirical data to higher education institutions and policymakers on student perspectives of AI tools in the UAE.
- Research Article
- 10.36128/priw.vi56.896
- Jul 8, 2025
- LAW & SOCIAL BONDS
The use of artificial intelligence (AI) tools in higher education has become increasingly important because of the time and effort savings and the speed of information transfer. However, many ethical and legal challenges make their use in this field a complex issue. Problems such as bias and discrimination arising from AI tools require the establishment of a legal system capable of controlling their use in an optimal manner. Yet the legal regulation of the use of AI tools in higher education, especially in research and data analysis, has not reached the required level: although many countries have begun to use these tools in higher education and scientific research, the legal framework remains inadequate. This research explores the legal and ethical challenges of using AI in higher education and scientific research, focusing on the importance of developing a legal framework capable of promoting the use of AI tools in the scientific and educational sectors. The paper highlights the most important relevant laws in technologically advanced countries to measure the extent to which they are reflected in practice.
- Research Article
- 10.1108/oth-10-2024-0066
- Jan 28, 2025
- On the Horizon: The International Journal of Learning Futures
Purpose: This study aims to evaluate students’ intention and actual use (AU) of artificial intelligence (AI) tools to discover how AI influences learning and academic success. Design/methodology/approach: This paper used the unified theory of acceptance and use of technology (UTAUT) to develop a structural equation model (SEM) and used convenience sampling to collect 304 students’ five-point Likert-scale responses. The model was tested with AMOS-24 and SPSS-25, and the study found that AI boosted students’ learning experiences and underscored the importance of AI skills and knowledge. Findings: Performance expectancy (PE), effort expectancy (EE), social influence, and facilitating conditions affect AU directly and indirectly via intention to use (IU), while subjective norms have no substantial influence on the use of AI tools. Attitude (ATT) moderates PE and EE, although the data show that ATT has no substantial effect on EE. Originality/value: These insights may help students understand how AI tools benefit them and what factors affect their utilization. When correctly designed and executed, UTAUT provides an appropriate integrated theoretical framework for robust statistical analysis such as SEM.
- Research Article
- 10.1111/nin.12556
- Apr 26, 2023
- Nursing Inquiry
Will ChatGPT undermine ethical values in nursing education, research, and practice?
- Research Article
- 10.47772/ijriss.2025.903sedu0537
- Jan 1, 2025
- International Journal of Research and Innovation in Social Science
The rapid adoption of artificial intelligence (AI) in higher education has changed the way students engage in self-regulated learning (SRL). Yet little is known about the role of AI in promoting self-regulated learning for international students in China, who may face unique cultural and language challenges. To understand how effectively and efficiently students use AI tools across the planning, monitoring, and reflection stages of SRL, a cross-sectional study was conducted with 199 international students at Chinese universities on the use of AI tools in autonomous learning. Descriptive statistics, correlations, and linear regression analyses were used to analyze the data. The study’s results showed that AI platforms such as ChatGPT and Grammarly are used extensively, with AI-supported planning, monitoring, and reflection significantly predicting perceived academic benefits and explaining 55.8% of the variance in outcomes. Participants further reported improved autonomy and efficiency; nevertheless, they also cited challenges such as language barriers, cultural mismatches, and the risk of over-reliance on AI tools. These findings highlight the dual role of AI as both a facilitator of and a potential barrier to SRL. The results advocate that AI tools be integrated deliberately into learning, with support systems that facilitate deep reflection and critical examination rather than superficial reliance on the tools. Most importantly, the study suggests that educational institutions foster responsive AI systems that can assist diverse international student populations, supported by AI literacy professional development to equip students with the skills to use these tools appropriately and ethically.
- Research Article
- 10.61722/jssr.v3i6.6849
- Oct 27, 2025
- JOURNAL SAINS STUDENT RESEARCH
This qualitative study investigates how lecturers and EFL undergraduate students perceive the use of artificial intelligence (AI) tools in writing classes. The research focuses on two questions: What are lecturers’ perceptions of students’ use of AI tools in writing classes? And what are students’ perceptions of the use of AI tools in writing classes? Data were collected through interviews with three lecturers and nine undergraduate students from the English Language Education Study Program at Sriwijaya University, selected using purposive sampling. The researcher acted as the primary instrument, and thematic analysis with open coding was applied to analyze the data. Findings show that lecturers expressed concerns about students’ over-reliance on AI, reduced comprehension, and declining critical thinking, though they acknowledged its benefits for grammar and structure. Students reported that AI supported brainstorming and improved clarity but sometimes misinterpreted topics or altered their writing voice. These results suggest that while AI can enhance writing instruction, its effective use requires clear guidance and critical reflection.
- Research Article
- 10.59490/dgo.2025.1060
- Jun 30, 2025
- Conference on Digital Government Research
This paper aims to answer three main questions regarding the use of artificial intelligence (AI) tools in the Colombian judiciary. First, what type of AI tools do judges and judicial staff in Colombia access and use? Second, how and for what purposes are these AI tools used? Third, do demographic factors (e.g., age, gender) influence how judges and judicial staff approach AI tools? This paper is based on three comprehensive surveys conducted in 2024. Two surveys conducted by the authors targeted participants in the course ”Artificial Intelligence for the Administration of Justice: Fundamentals, Applications, and Best Practices”, offered by the Universidad de los Andes and the Superior Council of the Judiciary (CSdJ). A total of 1,391 judicial staff members responded at the start of the course, and 824 responded at its conclusion. A third survey, conducted later by the CSdJ, gathered responses from 3,152 judicial personnel. Our analysis reveals that training significantly improved AI familiarity among judicial personnel: initially, 63% reported minimal knowledge, but after the 50-hour course, 85% claimed moderate to high familiarity. While approximately one-third of respondents initially used AI for work tasks, this increased to nearly half post-training. Over 80% of users accessed free AI versions, raising concerns about confidentiality as these platforms may share information with third parties. Judicial officials primarily employ generative AI for information searches and document writing, particularly for jurisprudence (59%), legislation (52%), and definitions (51%). This reliance on AI for information retrieval presents risks if outputs are not verified against reliable sources. Although age and gender disparities in AI familiarity exist, reported usage patterns show minimal demographic differences. These findings emphasize the importance of enhancing digital literacy among judicial professionals and inform our recommendations for developing appropriate regulations and guidelines governing AI systems in the justice sector.
- Research Article
- 10.37394/23209.2024.21.41
- Oct 9, 2024
- WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS
The research aims to study the impact of the use of artificial intelligence (AI) tools in higher education institutions (HEIs) on building the professional competencies of future art specialists. The research employed quantitative and qualitative methods (in particular, modeling methods, pedagogical experiments, and a survey of respondents to assess the impact of AI tools on building professional competencies). The author’s definition of the concept of “professional competencies of art specialists” is proposed. Targeted tools were selected and used for building components of professional competencies: VocalAnalysis AI tools were used to form the perceptual component for students majoring in Musical Art, Art Vision AI for students majoring in Fine Arts, and ChoreoVision AI for students majoring in Choreography. The results of the study show that students rated their ability to use AI as higher than medium. The questionnaire designed to study the impact of AI use on building the professional competencies of future specialists in art majors demonstrated a high level of agreement in assessments of the impact of AI tools on the formation of the various components of professional competencies. Further research can aim at the development and testing of an algorithm for objective expert evaluation of specific AI tools for the implementation of art projects by students of the specified art majors.
- Research Article
- 10.31703/gssr.2023(viii-ii).19
- Jun 30, 2023
- Global Social Sciences Review
This research paper presents an investigation into students’ perceptions and ethical considerations regarding the use of artificial intelligence (AI) tools in academia, focusing on the University of Limerick in Ireland. AI tools such as OpenAI's ChatGPT have emerged as valuable assets in promoting interactive learning and enhancing student engagement. This research therefore aimed to explore the privacy and ethical considerations students have regarding the use of AI tools in education. Using a quantitative methodological approach, the study surveyed students’ attitudes, opinions, and usage patterns regarding AI tools. The study revealed intriguing perspectives on the data privacy concerns associated with AI tools. Students from technology- and science-focused schools displayed a higher degree of concern, suggesting a deeper understanding of potential privacy implications. Conversely, students from arts, humanities, and social sciences, and from law, politics, and public administration displayed slightly lower levels of concern.
- Research Article
- 10.1016/j.system.2024.103505
- Oct 3, 2024
- System
Cognitive and sociocultural dynamics of self-regulated use of machine translation and generative AI tools in academic EFL writing