A psychological platform for GenAI and human co-piloting in education
GenAI (generative artificial intelligence) will have a growing role within formal education. What should that role be? How do we treat GenAI as an opportunity to enhance and reenergise teaching and learning? This position paper suggests that answers to these questions should start with our foundational psychological theories about what students need to function and develop well, and outlines how psychological needs theory, focusing on students' basic psychological needs for competence and relatedness, might be a path forward. Teacher behaviours that support these psychological needs (i.e., involvement and structure), which have established relationships with learning outcomes, are used as a base for assessing the potential roles of human and AI instructors. A balanced approach that draws on the strengths of each instructor is suggested as a possible way forward for research and practice in this area. Co-piloting the educational ship forward could herald a brighter future for students across educational levels and contexts.
- Research Article
- 10.65106/apubs.2025.2763
- Nov 28, 2025
- ASCILITE Publications
The rapid integration of generative artificial intelligence (GenAI) into higher education has sparked debates about the future role of teachers (Chan & Tsi, 2024), including in providing feedback information to students. While GenAI offers unprecedented accessibility and immediacy, this presentation argues that teachers' expertise remains irreplaceable in productive feedback, i.e., processes in which students make sense of information about their performance and use it to improve the quality of their work or learning strategies (Henderson et al., 2019, p. 1402). Drawing on a large-scale, cross-institutional survey involving 6,960 Australian university students (Henderson et al., 2025), this Pecha Kucha highlights students' perceptions of GenAI versus teacher feedback. The quantitative analysis revealed that nearly half of the students (49.7%) reported using GenAI for feedback. However, they rated teacher feedback as more helpful and significantly more trustworthy: while 83.9% found GenAI feedback helpful, only 60.1% considered it trustworthy, compared with 90.5% who trusted teacher feedback. This trust gap may reflect the inconsistent quality identified in GenAI's feedback comments (Venter et al., 2024). The thematic analysis of 5,736 open-ended responses from students who used GenAI for feedback yielded 8,498 coded instances, revealing four interrelated characteristics in which teacher feedback was perceived as outperforming GenAI.
- Contextualisation and relevance: teacher feedback was perceived as more sensitive to specific assignment contexts (95.2% of 669 instances rated GenAI as less contextualised than teacher feedback) and more relevant to learning objectives (84.6% of 123 instances rated GenAI as less relevant). This contextual awareness enables teachers to identify what matters within disciplinary and course-specific frameworks.
- Reliability and accuracy: students perceived teacher feedback as significantly more reliable and trustworthy (95.4% of 1,143 instances), reflecting teachers' ability to provide accurate guidance without the hallucinations and factual inaccuracies that can appear in GenAI outputs.
- Relational significance: teachers offered more personal, connected feedback experiences (93.8% of 471 instances), providing the interpersonal recognition essential for productive learning relationships. This relational dimension cannot be replicated by GenAI's algorithmic responses.
- Expertise: students recognised teachers as more authoritative sources (88.2% of 119 instances), valuing their disciplinary knowledge and pedagogical understanding of student development trajectories.
Students' evaluation of feedback is fundamentally shaped by perceptions of source credibility (Bearman et al., 2024), which may explain why students perceive teacher feedback as more trustworthy than GenAI's. Research demonstrates this selective engagement: uptake of content-focused GenAI feedback was considerably lower than that of form-focused feedback (Ziqi et al., 2024), suggesting students recognise GenAI's limitations for substantive guidance requiring disciplinary expertise. This translates into learning outcomes, with students not only perceiving instructor feedback as more useful but also demonstrating significantly greater lab score improvements than those receiving GenAI feedback (Er et al., 2025). GenAI may create opportunities for educators to focus on what they do best: providing expert, contextualised, and relationally grounded feedback within authentic learning relationships.
This potentially positions teacher expertise as increasingly valuable, with educators prioritising higher-level pedagogical responsibilities, such as developmental guidance, facilitating critical thinking, and disciplinary enculturation, while GenAI supports lower-level feedback processes, like grammar correction and initial draft review. Students appear to already recognise this distinction, trusting teachers for more substantive, transformative feedback while appreciating GenAI's supplementary role for immediate, accessible guidance.
- Research Article
34
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction
Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of generative artificial intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters simultaneously feed the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.).
Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, though OpenAI founder Sam Altman insisted it wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, the hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging.
In May 2023, the Writers Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors Guild (SAG) warned that members were being asked to agree to contracts stipulating that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement, SAG made its position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear an immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world…
- Research Article
1
- 10.1111/jcal.70117
- Sep 1, 2025
- Journal of Computer Assisted Learning
Background: With the rapid advancement of technology, the integration of generative artificial intelligence (GAI) in education has gained considerable attention. Many studies have examined GAI's impact on learning outcomes, yet their conclusions are inconsistent, highlighting the need for a comprehensive review to clarify its overall effects and identify influential factors.
Objectives: This study aims to conduct a meta-analysis of the effects of GAI on student learning outcomes across cognitive, competency and affective dimensions. Additionally, it seeks to explore how various moderating factors, including subject discipline, instructional duration, knowledge type, prior knowledge and tool type, influence GAI's effectiveness.
Methods: A meta-analysis was performed on 34 experimental and quasi-experimental studies published internationally. Effect sizes were calculated for overall learning outcomes and categorised by dimension. Further analysis was conducted to assess the influence of moderating variables on the impact of GAI.
Results: The meta-analysis indicates that generative artificial intelligence has a significant positive impact on overall learning outcomes, with a combined effect size of 0.68 (p < 0.001). The impact is particularly pronounced in the cognitive dimension (g = 0.795) and the competency dimension (g = 0.711), while its effect on the affective dimension (g = 0.507) is moderate but still significant. The analysis of moderating variables reveals that the effectiveness of GAI is influenced by discipline type but is not significantly affected by instructional period, knowledge type, prior knowledge level, or tool type. Specifically, GAI exhibits the highest positive effects in mathematics, science and humanities, whereas its impact is relatively lower yet still significant in computer science and medical/nursing education. Additionally, GAI's effectiveness does not significantly differ across various instructional periods, different knowledge types, learners with varying prior knowledge levels, or different AI tool versions.
Conclusions: To optimise GAI's use in education, the study suggests aligning GAI with specific subject needs, adapting tools for different student levels, integrating GAI with traditional teaching and establishing monitoring mechanisms. These strategies aim to maximise GAI's positive impact on learning efficiency and quality across educational settings.
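As a rough illustration of how a pooled effect like the g = 0.68 reported above is typically derived, the sketch below applies inverse-variance weighting with a DerSimonian-Laird random-effects adjustment. The study-level effect sizes and variances are hypothetical placeholders, not the 34 studies in this review.

```python
# A minimal sketch of random-effects meta-analytic pooling (DerSimonian-Laird).
# The (hedges_g, variance) pairs below are hypothetical, for illustration only.
import math

studies = [(0.80, 0.04), (0.55, 0.02), (0.70, 0.05), (0.45, 0.03)]

# Fixed-effect (inverse-variance) weights and pooled mean
w = [1.0 / v for _, v in studies]
g_fixed = sum(wi * g for wi, (g, _) in zip(w, studies)) / sum(w)

# DerSimonian-Laird estimate of between-study variance tau^2
q = sum(wi * (g - g_fixed) ** 2 for wi, (g, _) in zip(w, studies))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights fold tau^2 into each study's variance
w_re = [1.0 / (v + tau2) for _, v in studies]
g_re = sum(wi * g for wi, (g, _) in zip(w_re, studies)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))

print(f"pooled g = {g_re:.3f}, "
      f"95% CI [{g_re - 1.96 * se_re:.3f}, {g_re + 1.96 * se_re:.3f}]")
```

The moderator analyses described in the Results section follow the same logic, pooling within subgroups (e.g., by discipline) and comparing the subgroup estimates.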
- Research Article
1
- 10.1108/tg-08-2025-0240
- Dec 4, 2025
- Transforming Government: People, Process and Policy
Purpose: This study aims to critically examine the socio-technical, economic and governance challenges emerging at the intersection of generative artificial intelligence (AI) and Urban AI. By foregrounding the metaphor of “the moon and the ghetto” (Nelson, 1977, 2011), the issue invites contributions that interrogate the gap between technological capability and institutional justice. The purpose is to foster a multidisciplinary dialogue, spanning applied economics, public policy, AI ethics and urban governance, that can inform trustworthy, inclusive and democratically grounded AI practices. Contributors are encouraged to explore not just what GenAI can do, but for whom, how and with what consequences.
Design/methodology/approach: This study draws upon interdisciplinary literature from public policy, innovation studies, digital governance and urban sociology to frame the emerging governance challenges of Generative AI and Urban AI. It builds a conceptual foundation by synthesizing insights from comparative city case studies, innovation systems theory and normative policy frameworks. The approach is interpretive and exploratory, aiming to situate AI technologies within broader institutional, geopolitical and socio-economic contexts. The study invites contributions that adopt empirical, theoretical or practice-based methodologies addressing the governance of GenAI in cities and regions.
Findings: This study identifies a critical gap between the rapid technological advancements in Generative AI and the institutional readiness of public governance systems, particularly in urban contexts. It finds that current policy frameworks often prioritize efficiency and innovationism over democratic legitimacy, civic trust and inclusive design. Drawing on comparative global city experiences, it highlights the risk of reinforcing power asymmetries without robust accountability mechanisms. The analysis suggests that trustworthy AI is not a purely technical attribute but a political and institutional achievement, requiring participatory governance architectures and innovation systems grounded in public value and civic engagement.
Research limitations/implications: As an editorial introduction, this study does not present original empirical data but synthesizes key theoretical frameworks, case studies and policy debates to guide future research. Its analytical scope is conceptual and comparative, offering a foundation for submissions that further investigate Generative and Urban AI through empirical, normative and practice-based lenses. The limitations lie in its broad coverage and reliance on secondary sources. Nonetheless, it provides an agenda-setting contribution by highlighting the urgent need for interdisciplinary research into how AI reshapes public governance, institutional legitimacy and urban democratic futures.
Practical implications: This editorial offers a structured framework for policymakers, urban planners, technologists and public administrators to critically assess the governance of Generative and Urban AI systems. By highlighting international case studies and conceptual tools, such as public algorithmic infrastructures, civic trust frameworks and anticipatory governance, the article underscores the importance of institutional design, regulatory foresight and civic engagement. It invites practitioners to shift from techno-solutionist approaches toward inclusive, democratic and place-based AI governance. The reflections aim to support the development of trustworthy AI policies that are grounded in legitimacy, accountability and societal needs, particularly in urban and regional contexts.
Social implications: The editorial underscores that Generative and Urban AI systems are not socially neutral but carry significant implications for equity, representation and democratic legitimacy. These technologies risk reinforcing existing social hierarchies and systemic biases if not governed inclusively. This study calls for reimagining trust not as a technical feature but as a relational, contested dynamic between institutions and citizens. It encourages submissions that examine how AI reshapes the urban social contract, affects marginalized communities and challenges existing civic infrastructures. The goal is to promote AI governance frameworks that are pluralistic, just and reflective of diverse societal values and lived experiences.
Originality/value: This editorial offers a timely and conceptually grounded intervention into the emerging field of Urban AI and Generative AI governance. By framing the challenges through Richard R. Nelson’s metaphor of The Moon and the Ghetto, this study foregrounds the gap between technical capabilities and enduring societal injustices. The contribution lies in its interdisciplinary synthesis, bridging innovation systems, AI ethics, public policy and urban governance. It introduces a critical framework for assessing “trustworthy AI” not as a technical goal but as a democratic achievement and encourages research that is policy-relevant, equity-oriented and attuned to the institutional realities of AI in cities.
- Research Article
3
- 10.1177/07356331251329185
- Apr 11, 2025
- Journal of Educational Computing Research
Generative artificial intelligence (GenAI) has significant potential for educational innovation, although its impact on students’ learning outcomes remains controversial. This study aimed to examine the impact of GenAI on the learning outcomes of K-12 and higher education students, and to explore the moderating factors influencing this impact. A meta-analysis of 49 articles showed that the mean effect sizes of GenAI on students’ learning achievement and learning motivation were 0.857 and 0.803, respectively, indicating a positive impact of GenAI on education. However, this effect varied according to moderators, including education level, subject classification, GenAI interface, GenAI development, interaction approaches, and experimentation time. Specifically, GenAI had a greater impact on the academic performance of higher education students, and students interacted more effectively with GenAI using text than with mixed media, such as images or audio. Although GenAI has a novelty effect on students’ learning motivation, the effect size decreases over time. These findings provide empirical support for the beneficial effects of GenAI on education and offer insights for optimizing its use in teaching practices.
- Research Article
8
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
60
- 10.1080/01443610500364004
- Jan 1, 2006
- Journal of Obstetrics and Gynaecology
Summary: Although some previous studies have suggested formal maternal education as the most potent tool for reducing the maternal mortality ratio in Nigeria, other studies found that the depressed Nigerian economy since 1986 has marginalised the benefits of education, with the result that educated women stopped making use of existing health facilities because they could not afford the cost of health services. This study was carried out to determine the current influence of formal maternal education and other factors on the choice of place of delivery by pregnant women in Enugu, south-eastern Nigeria. It was a pre-tested, interviewer-administered questionnaire study of women who delivered within 3 months before the date of data collection in the study area. In an increasing order of level of care, the outcome variable (place where the last delivery took place) was categorised into seven, with home deliveries representing the lowest category and private hospitals run by specialist obstetricians as the highest category. These were further sub-categorised into non-institutional deliveries and institutional deliveries. Maternal educational level was the main predictor variable; other predictor variables were sociodemographic factors. Data analysis was by means of descriptive and inferential statistics, including means, frequencies and χ²-tests at the 95% confidence level. Out of a total of 1,450 women to whom the questionnaires were administered, 1,095 responded (a response rate of 75.5%). A total of 579 (52.9%) of the respondents delivered outside health institutions, while the remaining 516 (47.1%) delivered within health institutions. Regarding educational levels, 301 (27.5%) of the respondents had no formal education, 410 (37.4%) had primary education, 148 (13.5%) secondary education and 236 (21.5%) post-secondary education. There was a significant positive correlation between the educational levels of the respondents and their husbands (r = 0.86, p = 0.000). With respect to occupational categories, 88 (8.0%) of the respondents belonged to occupational class I, 158 (14.4%) to occupational class II, 107 (9.8%) to occupational class III, 14 (1.3%) to occupational class IV and 728 (66.5%) to occupational class V. There was a significant positive correlation between the respondents' and their husbands' occupational levels (r = 0.89, p = 0.000). There were statistically significant associations between choice of institutional or non-institutional delivery and respondents' educational level, as well as place of residence (urban/rural), religion, tribe, marital status, occupational level, husband's occupational and educational levels, age and parity (p ≤ 0.05 for each variable). Further analysis of only the respondents who delivered within health institutions showed a significant positive correlation between their educational levels and the level of care where they delivered (r = 0.45, p = 0.000). Significantly more of those with post-secondary education lived in urban than in rural areas, were Christians and were married to husbands of higher educational and economic levels. It is concluded that formal education is still a significant predictor of whether women deliver within or outside health institutions in Enugu, south-eastern Nigeria. Efforts at reducing the maternal mortality ratio in Nigeria must increase the adult female literacy rate.
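For readers unfamiliar with the χ²-tests of association mentioned above, the sketch below runs one on a delivery-place by education-level contingency table. The marginal totals match those reported in the abstract (579 vs. 516 deliveries; 301/410/148/236 by education level), but the cell-level splits are hypothetical.

```python
# A minimal sketch of a chi-square test of association between place of
# delivery and maternal education. Cell counts are hypothetical; only the
# row and column totals are taken from the abstract.
from scipy.stats import chi2_contingency

#             none  primary  secondary  post-secondary
table = [
    [220,    240,      60,        59],   # non-institutional deliveries
    [81,     170,      88,       177],   # institutional deliveries
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```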
- Research Article
- 10.3390/educsci16010015
- Dec 23, 2025
- Education Sciences
This research-to-practice study examines how generative artificial intelligence (GenAI) can be integrated into live case studies to enhance experiential learning in higher education. It explores GenAI’s potential as an agent to learn with, scaffolding reflection and engagement, and addresses gaps in existing applications that often focus narrowly on content generation. To explore GenAI’s agentive potential, the methodology illustrates this approach in a UK postgraduate operations management module, in which students engaged in a live case study of a local ethnic restaurant to refine its business model and operations. The data sources used to examine students’ results included module materials, outputs, and feedback surveys. Thematic analysis was employed to assess how GenAI facilitated experiential learning. The findings suggest that GenAI integration facilitated exploration, reflection, conceptualisation, and experimentation. Students reported that the activity was engaging and relevant, facilitating critical decision-making and understanding of operations management. However, the outcomes varied according to GenAI literacy and student participation. Although GenAI-enriched learning is beneficial, human agency and contextual knowledge remain crucial. Overall, this study integrates GenAI as a cognitive partner throughout Kolb’s experiential learning cycle (ELC) and offers a transferable framework for active learning, illustrating how technology can enhance critical and reflective learning in authentic educational contexts. However, limitations include uneven student participation and engagement, resource constraints, overreliance on artificial intelligence outputs, differentiated impact on learning outcomes, and a single-case design, which must be addressed before the framework can be scaled up. Future research should test the framework through multi-case studies while developing GenAI literacy, measuring GenAI’s impact, and implementing ethical practices in the field.
- Research Article
6
- 10.1038/s41598-025-08697-6
- Jul 5, 2025
- Scientific Reports
This study investigates the influence of generative artificial intelligence (GAI) on university students’ learning outcomes, employing a technology-mediated learning perspective. We developed and empirically tested an integrated model, grounded in interaction theory and technology-mediated learning theory, to examine the relationships between GAI interaction quality, GAI output quality, and learning outcomes. The model incorporates motivational factors (learning motivation, academic self-efficacy, and creative self-efficacy) as mediators and creative thinking as a moderator. Data from 323 Chinese university students, collected through a two-wave longitudinal survey, revealed that both GAI interaction quality and output quality positively influenced learning motivation and creative self-efficacy. Learning motivation significantly mediated the relationship between GAI output quality and learning outcomes. Furthermore, creative thinking moderated several pathways within the model, with some variations observed across the two time points. These findings provide theoretical and practical insights into the effective integration of GAI tools in higher education, highlighting the importance of both interaction and output quality in optimizing student learning experiences.
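A minimal sketch of the product-of-coefficients mediation test implied by the model above (GAI output quality → learning motivation → learning outcomes) is shown below. The column names and data file are hypothetical, and the study itself may have used full structural equation modelling rather than the piecewise regressions sketched here.

```python
# A minimal sketch of a regression-based mediation test. The CSV file and
# column names (output_quality, motivation, learning_outcome) are hypothetical
# stand-ins for the survey constructs described above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gai_survey.csv")  # hypothetical two-wave survey data

# Path a: predictor -> mediator
a = smf.ols("motivation ~ output_quality", data=df).fit()
# Path b (and direct effect c'): mediator + predictor -> outcome
b = smf.ols("learning_outcome ~ motivation + output_quality", data=df).fit()

indirect = a.params["output_quality"] * b.params["motivation"]
print(f"indirect effect (a*b) = {indirect:.3f}, "
      f"direct effect (c') = {b.params['output_quality']:.3f}")
```

In practice the indirect effect would be tested with a bootstrap confidence interval rather than read off directly, and the moderation by creative thinking would add interaction terms to these formulas.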
- Research Article
3
- 10.1111/bjet.70018
- Sep 17, 2025
- British Journal of Educational Technology
The integration of generative artificial intelligence (GAI) in education has shown the potential to improve learning outcomes, yet its impact on self-regulated learning (SRL) in second language (L2) writing remains underexplored. This mixed-methods study investigated the effects of a GAI chatbot tool on task strategy diversity, metacognitive awareness and writing performance among 40 undergraduate students in Eastern China over an 8-week intervention. Participants were randomly assigned to an experimental group (n = 20) using the GAI chatbot platform Tongyi.ai or a control group (n = 20) relying on traditional resources. Data were collected using the Metacognitive Awareness Inventory (MAI), the Strategy Inventory for Language Learning (SILL), writing performance assessments, participant interaction logs, reflective journals and semi-structured interviews. Quantitative analysis revealed that the experimental group showed greater improvements in task strategy diversity, metacognitive awareness and writing performance than the control group. Qualitative analysis further indicated that GAI tools could facilitate task planning, promote adaptive strategy use and deepen metacognitive reflection. Despite these benefits, participants expressed concerns about the potential for over-reliance on GAI and the accuracy of its generated content. The present study highlights the potential of GAI to enhance SRL in L2 writing by fostering adaptive task strategies and promoting metacognitive development, offering valuable implications for integrating GAI into L2 writing instruction.
Practitioner notes
What is already known about this topic:
- Generative AI (GAI) tools have shown the potential to enhance various aspects of education, including personalized learning and feedback provision.
- Self-regulated learning (SRL) is crucial for students' academic success, particularly in second language writing.
- Technology has been found to support the development of writing strategies and metacognitive skills.
What this paper adds:
- This study provides novel empirical evidence on how GAI tools influence undergraduate students' task strategies and metacognitive awareness in self-regulated learning, specifically in L2 writing contexts.
- The research demonstrates that students using GAI tools developed more diverse and adaptive task strategies compared with those using traditional resources.
- The study reveals that GAI tool usage led to increased metacognitive awareness among students, enhancing their ability to plan, monitor and evaluate their writing processes.
Implications for practice and policy:
- Educators should consider integrating GAI tools into L2 writing instruction to support students' development of diverse task strategies and metacognitive skills.
- When implementing GAI in education, it is crucial to balance technology assistance with fostering students' independent thinking and creativity.
- Future research should explore the long-term effects of GAI on self-regulated learning and investigate its impact across different student populations and educational contexts.
- Educational institutions should develop guidelines for the ethical use of GAI tools in academic settings, addressing concerns about academic integrity and data privacy.
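As an illustration of the experimental-versus-control comparison described above, the sketch below applies Welch's t-test to hypothetical gain scores. It does not reproduce the study's MAI/SILL data or its actual analysis.

```python
# A minimal sketch of comparing pre/post gain scores between a GAI group and
# a control group (n = 20 each). The gain scores are simulated, not the
# study's data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
gai_group = rng.normal(loc=0.6, scale=0.3, size=20)      # hypothetical gains
control_group = rng.normal(loc=0.2, scale=0.3, size=20)  # hypothetical gains

t, p = ttest_ind(gai_group, control_group, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```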
- Research Article
4
- 10.1111/jcal.70004
- Feb 8, 2025
- Journal of Computer Assisted Learning
Background: There are various challenges to teachers' use of generative artificial intelligence (GenAI) for professional learning. Although GenAI is expected to play a transformative role in teachers' learning, its impact on them remains subtle.
Objectives: Guided by community of practice, this paper examines the integration of GenAI into an online professional learning community (OPLC) to facilitate knowledge co-construction among GenAI, novice teachers and experienced teachers.
Methods: We used a mixed-methods approach that included topic modelling and sentiment analysis on the quantitative side and content analysis for the qualitative data.
Results: We identified the top three latent themes in the OPLC's discourse ((1) generating instructional material, (2) assessment, and (3) pedagogy) and six distinct teacher-GenAI interaction profiles. For novice teachers, these included ‘engaged AI explorers’, ‘selective satisfiers’ and ‘silent strategists’; among experienced teachers, we discerned ‘careful critics’, ‘reflective realists’ and ‘cautious contemplators’. Novice teachers exhibited technological adaptivity, while experienced ones engaged reflectively with content and focused more on students, and GenAI proved effective at providing instructional materials.
Conclusions: The findings demonstrate how GenAI can contribute to knowledge co-construction as a facilitator of, rather than a replacement for, human interaction.
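To make the topic-modelling step concrete, the sketch below fits a small LDA model to hypothetical OPLC posts using scikit-learn. The paper's actual preprocessing, model choice, and sentiment analyser are not specified here, so everything in the example is illustrative.

```python
# A minimal sketch of latent-theme extraction from discussion posts via LDA.
# The posts are invented examples of the kind of OPLC discourse described
# above; a real pipeline would use the full corpus and tuned hyperparameters.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "ChatGPT drafted a worksheet I could adapt for my grade 8 class",
    "I asked the AI to generate quiz questions for formative assessment",
    "We discussed how to scaffold student reflection with AI feedback",
    "The chatbot suggested a rubric for marking the group projects",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```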
- Research Article
7
- 10.14742/ajet.9540
- Oct 18, 2024
- Australasian Journal of Educational Technology
Generative artificial intelligence (GenAI) impacts higher education assessment and learning outcomes, which are closely related and intertwined. Literature suggests that educators and researchers have many varied concerns regarding student assessment in the higher education GenAI context, such as how to assess students’ learning and the new (refocused) learning outcomes that emerged in GenAI-facilitated learning environments. To provide evidence-based insights into and answers to these concerns, we conducted a scoping review by collating literature in relevant research areas. Following a five-stage scoping review framework, we collaboratively collected and coded 34 studies. The three assessment approaches identified in the review were traditional assessment, innovative and refocused assessment, and GenAI-incorporated assessment. The new, refocused learning outcomes identified were career-driven competencies and lifelong learning skills. The review also revealed that most research designs were qualitatively oriented (e.g., with exploratory design, descriptive research, ethnographic research and phenomenological research). This study proposes a holistic diagram showing the current research status and trends. It suggests five future research directions: innovative assessment designs, collaborations among assessment approaches, new learning outcomes, relationships between assessment approaches and learning outcomes, and quantitative or mixed research studies.
Implications for practice or policy:
- Traditional assessment methods in higher education do not operate effectively in the GenAI era.
- Innovative and refocused assessment and GenAI-incorporated assessment are promising strategies to assess student learning.
- Career-driven competencies and lifelong learning skills are new focused learning outcomes evolved from the use of GenAI.
- More quantitative and mixed research studies should be conducted to provide additional empirical evidence on the impact of GenAI on student assessment and learning outcomes.
- Research Article
9
- 10.1186/s12909-024-06592-8
- Dec 28, 2024
- BMC Medical Education
Generative artificial intelligence (AI), characterized by its ability to generate diverse forms of content including text, images, video and audio, has revolutionized many fields, including medical education. Generative AI leverages machine learning to create diverse content, enabling personalized learning, enhancing resource accessibility, and facilitating interactive case studies. This narrative review explores the integration of generative AI into orthopedic education and training, highlighting its potential, current challenges, and future trajectory. A review of recent literature was conducted to evaluate current applications, identify potential benefits, and outline limitations of integrating generative AI in orthopedic education. Key findings indicate that generative AI holds substantial promise in enhancing orthopedic training through applications such as providing real-time explanations, adaptive learning materials tailored to individual students’ specific needs, and immersive virtual simulations. However, despite its potential, the integration of generative AI into orthopedic education faces significant issues such as accuracy, bias, inconsistent outputs, ethical and regulatory concerns, and the critical need for human oversight. Although generative AI models such as ChatGPT and others have shown impressive capabilities, their current performance on orthopedic exams remains suboptimal, highlighting the need for further development to match the complexity of clinical reasoning and knowledge application. Future research should focus on addressing these challenges by optimizing generative AI models for medical content, exploring best practices for ethical AI usage and curriculum integration, and evaluating the long-term impact of these technologies on learning outcomes. By expanding AI’s knowledge base, refining its ability to interpret clinical images, and ensuring reliable, unbiased outputs, generative AI holds the potential to revolutionize orthopedic education. This work aims to provide a framework for incorporating generative AI into orthopedic curricula to create a more effective, engaging, and adaptive learning environment for future orthopedic practitioners.
- Research Article
- 10.59075/ijss.v3i3.1897
- Aug 1, 2025
- Indus Journal of Social Sciences
This study explored higher education students’ perspectives on the integration of generative artificial intelligence (GenAI) tools within the context of Education 4.0. Drawing on qualitative interviews with 26 undergraduate and postgraduate students at the University of Sargodha, the study examines GenAI’s role in augmenting academic practice, improving learning efficiency, and disrupting established learning paradigms. Thematic analysis identified five themes: patterns of GenAI use; academic enhancement and efficiency; critical engagement and thinking; learning outcomes and changes in skills; and ethical and collaborative aspects. The results show that, even though students rely heavily on GenAI applications like ChatGPT, Copilot, and DALL·E to complete assignments, conduct research, and support creativity, their experience depends on the level of active interaction and moral sensitivity. GenAI has a beneficial influence on conceptual clarity, time management, and academic results, but excessive use can hinder the development of critical analysis and deep learning. The study concludes that responsible and reflective use of GenAI can facilitate transformative learning, provided it is supported by clear institutional policies, digital literacy education, and education-oriented pedagogical strategies suited to Education 4.0.