Generative AI in simulation debriefings: an exploratory study using the Team-FIRST framework and qualitative feedback from simulation experts and learners.
Effective debriefings in simulation-based education require accurate observation of team interactions, yet facilitators face challenges due to cognitive load, observer bias, and the complexity of team dynamics. Generative artificial intelligence (AI) tools offer a potential means to support this process by analyzing verbal communication and providing structured feedback. This study explored how AI tools can contribute to teamwork observation and debriefing in immersive medical simulations. We conducted a qualitative, exploratory study using thematic analysis of simulation participants' and debriefers' experiences with AI-generated teamwork reports. Forty-one participants (anesthesia nurses, residents, and attendings) took part in immersive scenarios at the University Hospital Zurich simulation center. Verbal interactions were transcribed with AI-assisted speech recognition and analyzed using two large language model-based systems (Isaac and ChatGPT-4o) guided by a prompt based on the Team-FIRST framework. Structured reports were generated for each scenario and reviewed by four simulation experts. Semi-structured interviews captured learners' perspectives on being observed by AI tools. A total of 26 AI-generated reports and 27 learner interviews were analyzed. Experts valued the detailed transcripts and illustrative quotes, which supported structured feedback and captured observations that might otherwise be missed. Limitations included inaccuracies in categorization, misattribution of speakers, overly generalized interpretations, and the absence of contextual or nonverbal information. Learners expressed openness and optimism about AI's potential benefits (efficiency, objectivity, and enhanced perception), while also raising concerns about transparency, data protection, interpretation errors, and risks of overreliance. Both groups emphasized the necessity of human oversight.
Generative AI tools can complement simulation debriefings by structuring communication data and highlighting teamwork patterns, supporting reflective practice. Current limitations highlight the need for multimodal approaches, refined prompting strategies, and integration with expert facilitation to ensure AI functions as a support tool rather than a replacement in simulation-based education. BASEC ID: Req-2024-01642.
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73,000 times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement, SAG made its position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
- 10.34190/ecie.19.1.2468
- Sep 20, 2024
- European Conference on Innovation and Entrepreneurship
Marketing scientists as well as practitioners believe that artificial intelligence (AI) holds the promise of productivity gains for organizations. However, there has been little scientific research into these theories. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools. This effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Among the six marketing disciplines closely analyzed, SEO/content and design in particular demonstrated significant productivity gains, both from generative AI (GAI) tools the team already subscribed to, like ChatGPT 4 and Canva, and from new AI solutions. While an AI tool's level of integration showed only a weak positive productivity impact, future studies are suggested to further investigate this variable by comparing the effects of less advanced but more accessible tools like generative AI versus highly advanced but less accessible business AI. Having navigated the vast and dynamic landscape of AI tools, the insights further emphasize the importance of AI experience sharing and informed decision-making, which imply knowing one's own user rights and staying updated on AI advancements. Zooming out from the process level, the work's literature review further highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities.
Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.
- Research Article
- 10.1016/j.ejmp.2021.03.015
- Mar 1, 2021
- Physica Medica
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
- Research Article
- 10.6087/kcse.352
- Feb 5, 2025
- Science Editing
Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publishers and academic institutional policies in order to guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers’ policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
- Research Article
- 10.55041/ijsrem30862
- Apr 17, 2024
- INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Human beings are endowed with a natural curiosity and creativity, which motivate them to learn new things from their interactions with the world. Human learning has involved exploration and experimentation, which have allowed humans to discover new facts and principles, and to invent new artifacts and systems. Human learning has also affected human evolution, both genetically and culturally, as humans have adjusted to different situations and demands in their environments. However, in the current world, human learning is largely facilitated by artificial intelligence (AI) tools, which are programs that can perform tasks that usually require human intelligence, such as comprehension, reasoning, problem-solving, and communication. AI tools can support humans in their learning endeavors by giving them access to enormous amounts of information, and by delivering customized and interactive assistance and feedback. AI tools can also amplify human creativity and innovation by generating novel and diverse content, such as code, poems, essays, songs, and more. But what are the effects of this dependence on AI tools for human learning and evolution? Does it boost or diminish human curiosity and creativity? Does it enable or limit human autonomy and agency? Does it foster or hamper human diversity and collaboration? These are some of the questions that this paper will explore, by evaluating the pros and cons of using AI tools for human learning, and the ethical and social issues that arise from this phenomenon. [28] Today, when we look around us, we observe that the advancement in technology has brought a lot of comfort to our lives, whether in traveling, education, or enjoying content virtually. [29] As for our basic requirements, technology has become so accessible that we can learn almost anything through e-learning. Until recently, people could only wonder about having an AI that would help make our lives easier.
The latest AI concept to be widely received and accepted by people around the globe is the generative chatbot, exemplified by ChatGPT, Gemini, and Copilot. All of these AI tools help us in decision-making and shorten the search for solutions, whether for lengthy tasks like writing a summary or for questions that are easy to solve but hard to look up. About a quarter (27%) of Americans say they interact with artificial intelligence almost constantly or several times a day. Artificial intelligence (AI) is used in a variety of ways, including online product recommendations, facial recognition software, and chatbots. One in six (17%) adults reported that they can often or always recognise when they are using AI; one in two (50%) reported that they can recognise it some of the time or occasionally; and one in three (33%) reported that they can hardly ever or never recognise when they are using AI. [26] In this project we are testing the dependence upon recently emerged AI tools such as ChatGPT, Google Bard, and Bing. Our aim is to find out whether people use these powerful tools only to help with their academics or other tasks, or whether they also take advice from these tools in their financial planning.
- Research Article
- 10.9734/ajrcos/2024/v17i7491
- Jul 30, 2024
- Asian Journal of Research in Computer Science
With the increasing use of Generative Artificial Intelligence (AI) tools like ChatGPT and Bard, universities face challenges in maintaining academic integrity. This research investigates the impact of these tools on learning outcomes (factual knowledge, comprehension, critical thinking) in selected universities of Ghana's Upper East Region during the 2023-2024 academic year. The study specifically analyzes changes in student comprehension and academic integrity concerns when using Generative AI for content generation, research assistance, and summarizing complex topics. A mixed-methods approach was employed, combining qualitative data from interviews and open-ended questions with quantitative analysis of survey data and academic records. The research focuses on three institutions: C. K. Tedam University of Technology and Applied Sciences, Bolgatanga Technical University, and Regentropfen University College. A purposive sampling technique recruited 150 participants (50 from each university) who had used Generative AI tools. Key findings show that 72% of students reported improved understanding of course material through Generative AI use, yet 75% cited academic integrity as a primary concern. Quantitative analysis revealed a weak to moderate positive correlation (r = 0.45) between AI tool usage and improved grades, with variations depending on the specific AI tasks performed. Qualitative data highlighted concerns about overreliance on AI and its impact on critical thinking skills. This research contributes to the ongoing debate on AI's role in education by providing valuable insights for educators and policymakers worldwide. The findings suggest that while AI tools can enhance comprehension, ethical considerations and potential drawbacks related to critical thinking require careful attention. 
The study concludes with recommendations for integrating AI literacy programs, developing ethical guidelines, and implementing advanced plagiarism detection systems to harness the benefits of Generative AI while mitigating risks to academic integrity. Although specific to the Upper East Region of Ghana, these insights may be applicable to other educational systems with similar characteristics.
- Research Article
- 10.1007/s44163-025-00316-7
- May 30, 2025
- Discover Artificial Intelligence
The use of Generative Artificial Intelligence (AI) tools in international commercial arbitration reveals a complex intersection with the potential risk of confidential data breaches. Adopting a doctrinal research approach, this research article analyses the legal and regulatory framework applicable to ensuring responsible and ethical uses of AI so as to protect confidentiality in international arbitration. This article argues that the use of AI has ushered in a new age of efficiency and accuracy in international arbitration, but it also raises concerns about the protection of confidentiality, as third-party-owned AI tools and systems are prone to potential breaches and confidentiality violations affecting the volumes of data stored together in them. The guidelines and principles on the use of AI in international arbitration, as well as emerging AI regulations and laws, take varied approaches that are either discretionary or play only a guiding role in the protection of confidential information in international arbitration. Ultimately, this article recommends that upcoming versions of institutional arbitration rules enhance the confidentiality obligations in arbitration proceedings, with a focus on the integration of AI tools. Alternatively, with the use of confidentiality orders, arbitration participants must ensure that appropriate safeguards are in place so that confidentiality is a core consideration from the initial stages of deploying AI tools. Confidentiality by design could also be applied in generative AIs used by law firms, arbitral tribunals, or institutions.
- Research Article
- 10.1136/bmjopen-2025-099921
- Oct 15, 2025
- BMJ open
Systematic literature reviews (SLRs) are essential for synthesising research evidence and guiding informed decision-making. However, SLRs require significant resources and substantial efforts in terms of workload. The introduction of artificial intelligence (AI) tools can reduce this workload. This study aims to investigate preferences in SLR screening, focusing on trade-offs related to tool attributes. A discrete choice experiment (DCE) was performed in which participants completed 13 or 14 choice tasks featuring AI tools with varying attributes. Data were collected via an online survey, where participants provided background on their education and experience. Professionals who had published SLRs registered on PubMed, or who were affiliated with a recent Health Economics and Outcomes Research conference, were included as participants. Participants considered the use of a hypothetical AI tool in SLRs with different attributes. Key attributes for AI tools were identified through a literature review and expert consultations. These attributes included the AI tool's role in screening, required user proficiency, sensitivity, workload reduction, and the investment needed for training. The primary outcome was the participants' adoption of the AI tool, that is, the likelihood of preferring the AI tool in the choice experiment under different configurations of attribute levels, as captured through the DCE choice tasks. Statistical analysis was performed using a conditional multinomial logit model. An additional analysis included demographic characteristics (such as education, experience with SLR publication, and familiarity with AI) as interaction variables. The study received responses from 187 participants with diverse experience in performing SLRs and AI use. Familiarity with AI was generally low, with 55.6% of participants being (very) unfamiliar with AI. In contrast, intermediate proficiency in AI tools is positively associated with adoption (p=0.030).
Similarly, workload reduction is also strongly linked to adoption (p<0.001). Interestingly, if expert proficiency is needed for the AI, authors with more scientific experience in their profession are less likely to adopt AI (p=0.009). However, more experience specifically with SLR publications increases AI adoption likelihood (p=0.001). The findings suggest that workload reduction is not the only consideration for SLR reviewers when using AI tools. The key to AI adoption in SLRs is creating reliable, workload-reducing tools that assist rather than replace human reviewers, with moderate proficiency requirements and high sensitivity.
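The conditional multinomial logit used in DCE analyses like this models the probability of choosing an alternative within each choice task as a softmax over the alternatives' attribute utilities. A minimal sketch follows; the attribute values and coefficients here are invented for illustration and are not the authors' model or data:

```python
import numpy as np

def conditional_logit_probs(X, beta, choice_sets):
    """Within each choice set, P(alt j) = exp(x_j . beta) / sum_k exp(x_k . beta)."""
    utilities = X @ beta
    probs = np.empty_like(utilities)
    for idx in choice_sets:
        u = utilities[idx]
        e = np.exp(u - u.max())          # numerically stabilised softmax per set
        probs[idx] = e / e.sum()
    return probs

# Hypothetical data: 2 choice tasks, 2 AI-tool alternatives each,
# attributes = [workload_reduction, required_proficiency]
X = np.array([[0.8, 1.0],
              [0.3, 2.0],
              [0.5, 1.0],
              [0.9, 3.0]])
beta = np.array([2.0, -0.5])             # illustrative coefficients only
choice_sets = [np.array([0, 1]), np.array([2, 3])]
p = conditional_logit_probs(X, beta, choice_sets)
```

In practice the coefficients would be estimated by maximum likelihood from observed choices (e.g. with a discrete-choice package), and interaction terms with demographics would enter as extra columns of X.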
- Research Article
- 10.2478/ctra-2025-0007
- Jan 1, 2025
- Creativity. Theories – Research - Applications
The expansion of artificial intelligence (AI) tools has brought about new opportunities and challenges for teachers and students. These tools have the potential to reshape teaching and stimulate both students’ and teachers’ creativity. In 21st-century education, creativity emerges as a key skill that encompasses problem-solving, innovation, adaptability, critical thinking, and cognitive development. AI tools also provide personalized assistance and feedback as well as customized study materials. Moreover, they have proven beneficial in cultivating critical thinking and enhancing students’ research skills. Instead of questioning teachers’ preparedness for AI technologies, the focus should be on discovering ways to effectively and creatively integrate these tools into the classroom. This paper explores the possibilities of implementing generative AI tools to promote students’ creativity, thus enhancing the overall quality of teaching. In the Croatian educational system, similarly to Poland, school pedagogues should encourage positive changes within the school culture. Therefore, this paper also underscores the role of school pedagogues in bridging the gap between teachers and AI tools as an educational innovation. School pedagogues should be instrumental in supporting teachers during the integration of AI tools into their teaching by showcasing practical applications and emphasizing potential benefits for student engagement and learning outcomes. In this capacity, school pedagogues bear the responsibility of fostering a reflective and critical approach towards AI tools, advocating creative yet responsible use of technology in the classroom.
- Research Article
- 10.20853/39-3-6272
- Jan 1, 2025
- South African Journal of Higher Education
Generative artificial intelligence (AI) tools have sparked debates in the education sector, prompting researchers to explore their desirability and potential in education. This paper acknowledges generative AI’s potential to support the delivery of teaching, learning, and research in higher education, emphasising its ability to improve student writing quality as well as academic productivity, success rate, and independence. However, responsible use of these AI tools to support research is also crucial. Furthermore, the challenges associated with AI tool use, especially accessibility and usage in the African context, are recognised. For instance, there are ethical challenges relating to the (mis)use of AI where no or only inadequate policy regulations have been implemented. In addition, there are technical and structural challenges relating to connectivity, power outages, device access, and technical know-how. Therefore, this paper aims to identify the opportunities and challenges associated with using AI tools to support research in African higher education classrooms. For the study, a qualitative systematic literature review using thematic analysis was applied to a final selection of 29 articles. Findings indicated that generative AI tools could enhance student writing skills and increase productivity. Additionally, they could lead to research autonomy, improved writing proficiency, quality, and academic throughput. Shortcomings included AI misuse, knowledge deficiencies, and infrastructural challenges preventing AI access. Inadequate regulations on using generative AI tools for learning and teaching were a further challenge. It is essential to address the ethical concerns, invest in skills development, and promote equitable digital access, especially in Africa, where this is limited. In addition, the capability approach revealed how the digital divide limited the adoption of generative AI tools in Africa.
- Research Article
- 10.1108/lhtn-08-2024-0131
- Sep 17, 2024
- Library Hi Tech News
Purpose: The purpose of the paper is to explore the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process. AI tools are transforming academic research, offering numerous benefits and challenges. Design/methodology/approach: Academic research is undergoing a significant transformation with the emergence of AI tools. These tools have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. An overview of AI applications in literature review, data analysis, writing, and proofreading is given, discussing their benefits and limitations. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used in various stages of the research process. AI was used in some of the searches for AI applications in use. Findings: The analysis reveals that AI tools can enhance research efficiency, accuracy, and quality, but also raise important ethical and methodological considerations. AI tools have the potential to significantly enhance academic research, but their adoption requires careful consideration of methodological and ethical implications. The integration of AI tools also raises questions about authorship, accountability, and the role of human researchers. The authors conclude by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption. Originality/value: As AI continues to evolve, it is essential for researchers, institutions, and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the contribution of the paper to knowledge.
- Conference Article
- 10.54941/ahfe1004957
- Jan 1, 2024
In the dynamic field of programming education, integrating artificial intelligence (AI) tools has started to play a significant role in enhancing learning experiences. This paper presents a case study conducted during a foundational programming course for first-year students in higher education, in which students were encouraged to use generative AI programming copilot extensions in their IDE, alongside browser-based generative AI tools, as supportive aids. The primary objective was to observe the impact of AI on the learning curve and the overall educational experience. Key findings suggest that the introduction of AI tools significantly altered the learning experience for students. Many who initially struggled to grasp elementary programming concepts found that AI support made understanding the basics much easier, enhancing their confidence and skills. This was particularly evident in the reduced levels of anxiety typically associated with early programming learning, as the AI copilot provided a non-judgmental, always-available source for clarifying doubts, including queries that students might hesitate to ask in a traditional classroom setting. Notably, some students leveraged the AI to generate similar exercise problems, reinforcing their understanding and skills. The AI's capability to address basic queries also freed up the instructor's time, allowing for more personalized guidance on advanced problems. This shift in the instructional dynamic further contributed to a learning environment where students felt more comfortable engaging with complex topics, thereby reducing the psychological barriers often linked with early-stage programming education. The course's structure, enriched by AI, enabled students to delve into more complex programming constructs earlier than traditional curricula would allow.
For instance, students were tasked with simulating basic e-commerce operations, such as user registration, product browsing, and cart functionality. These practical challenges naturally introduced advanced concepts like external data storage, unit testing, and user interface design, which are typically reserved for more advanced courses. With the help of generative AI programming copilot tools, students at any skill level were able to develop nearly functional complex structures. Interestingly, even when their projects were not fully functional, students remained motivated. Instead of feeling discouraged by these imperfect outcomes, they showed resilience and a keen interest in understanding and improving their code. This reaction is a significant shift from traditional learning settings, where unfinished or flawed projects often lead to increased anxiety or a drop in motivation. Furthermore, the AI's proactive suggestions inspired students to explore beyond the curriculum: advanced learners delved into databases, cryptography libraries in Python, and more advanced user interface design, ensuring that they remained engaged and challenged. This elementary course, enhanced by generative AI tools, also inspired students to learn other programming languages, having shown them that independent learning is more accessible with the aid of generative AI. In conclusion, the integration of AI in programming education offers a promising avenue for enhancing both the learning experience and outcomes. This case study underscores the potential of AI to transform traditional teaching methodologies, fostering a more dynamic, responsive, and inclusive learning environment. The paper presents the results, possibilities and challenges of AI-empowered programming education, together with practical examples and future research perspectives.
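To make the exercise concrete, here is a minimal sketch of the kind of cart functionality such a task might involve. All class names, products, and prices are hypothetical illustrations, not taken from the study's course materials.

```python
# Hypothetical sketch of a "cart functionality" exercise; not from the study.
class Cart:
    def __init__(self):
        self.items = {}  # product name -> quantity

    def add(self, product, quantity=1):
        # Increment the quantity, creating the entry if absent.
        self.items[product] = self.items.get(product, 0) + quantity

    def remove(self, product):
        # Silently ignore products that are not in the cart.
        self.items.pop(product, None)

    def total(self, prices):
        # prices: product name -> unit price
        return sum(prices[p] * q for p, q in self.items.items())


cart = Cart()
cart.add("notebook", 2)
cart.add("pen")
print(cart.total({"notebook": 3.50, "pen": 1.20}))  # 8.2
```

An exercise like this naturally invites the unit testing and external data storage extensions the abstract mentions, e.g. persisting `cart.items` to a file.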
- Research Article
- 10.14444/8778
- Jul 14, 2025
- International journal of spine surgery
Artificial Intelligence: The Prevalent Coauthor Among Early-Career Surgeons.
- Research Article
- 10.1177/03064190251410482
- Jan 5, 2026
- International Journal of Mechanical Engineering Education
The integration of generative artificial intelligence (AI) tools in education has gained significant attention from researchers worldwide. The main objective of this study is to examine the impact of using generative AI tools in engineering education. A comprehensive survey was administered to engineering students across various universities in Jordan, designed to evaluate students' awareness, usage patterns, perceived benefits, and challenges associated with utilizing AI tools in engineering education. The Unified Theory of Acceptance and Use of Technology (UTAUT) was employed as a theoretical framework to investigate the factors influencing engineering students' use of AI tools in their academic activities. The study revealed a strong inclination among engineering students toward the use of generative AI: a significant majority of 89% used AI tools to enrich their understanding of academic material, while 57.5% expressed a preference for AI-assisted learning over traditional methods such as textbook reading. Notably, the analysis identified a statistically significant difference in usage frequency based on the language of instruction (p-value = 0.00): students taught in English showed higher levels of AI adoption than students taught in Arabic. These findings highlight evolving learning behaviors and the growing role of AI in shaping educational experiences.
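The group comparison reported above (usage frequency by language of instruction) can be illustrated with a stdlib-only permutation test. The data and the choice of test here are invented for illustration; the abstract does not report its raw data or which statistical test produced the p-value.

```python
import random

# Hypothetical weekly AI-tool usage counts for two instruction-language groups.
# These numbers are invented; they are not the study's data.
random.seed(0)
english = [5, 6, 7, 6, 5, 7, 6, 6]
arabic = [3, 4, 3, 5, 4, 3, 4, 4]

observed = sum(english) / len(english) - sum(arabic) / len(arabic)

# One-sided permutation test: shuffle group labels and count how often the
# mean difference is at least as large as the observed one.
pooled = english + arabic
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(english)], pooled[len(english):]
    if sum(a) / len(a) - sum(b) / len(b) >= observed:
        count += 1

p_value = count / trials
print(f"mean difference = {observed:.2f}, one-sided p = {p_value:.4f}")
```

With these illustrative numbers the English-group mean exceeds the Arabic-group mean by 2.25 uses per week, and almost no label shuffle reproduces that gap, so the p-value is far below 0.05.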
- Research Article
- 10.2196/76130
- Jan 27, 2026
- Journal of Medical Internet Research
Background: Living evidence (LE) synthesis refers to the method of continuously updating systematic evidence reviews to incorporate new evidence. It has emerged to address the limitations of the traditional systematic review process, particularly the absence of or delays in publication updates. The emergence of COVID-19 accelerated progress in the field of LE synthesis, and the applications of artificial intelligence (AI) in LE synthesis are now expanding rapidly. However, in which phases of LE synthesis AI should be used remains an unanswered question.
Objective: This study aims to (1) document the phases of LE synthesis where AI is used and (2) investigate whether AI improves the efficiency, accuracy, or utility of LE synthesis.
Methods: We searched Web of Science, PubMed, the Cochrane Library, Epistemonikos, the Campbell Library, IEEE Xplore, medRxiv, the COVID-19 Evidence Network to support Decision-making, and the McMaster Health Forum. We used Covidence to facilitate the monthly screening and extraction processes that maintain the LE synthesis process. Studies that used or developed AI or semiautomated tools in the phases of LE synthesis were included.
Results: A total of 24 studies were included: 17 on LE syntheses (4 involving tool development) and 7 on living meta-analyses (3 involving tool development). First, 34 AI or semiautomated tools were involved, comprising 12 AI tools and 22 semiautomated tools; the most frequently used were machine learning classifiers (n=5) and the Living Interactive Evidence synthesis platform (n=3). Second, 20 AI or semiautomated tools were used for the data extraction or collection and risk of bias assessment phase, and only 1 AI tool was used for the publication update phase. Third, 3 studies demonstrated improvement in efficiency based on time, workload, and conflict rate metrics. Nine studies applied AI or semiautomated tools in LE synthesis, obtaining a mean recall rate of 96.24%, and 6 studies achieved a mean F1-score of 92.17%. Additionally, 8 studies reported precision values ranging from 0.2% to 100%.
Conclusions: AI and semiautomated tools primarily facilitate data extraction or collection and risk of bias assessment. Their use in LE synthesis improves efficiency, with high accuracy, recall, and F1-scores, while precision varies across tools.
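The recall, precision, and F1 metrics summarized above are linked by F1 = 2·P·R / (P + R), the harmonic mean of precision and recall. A minimal sketch, with illustrative values rather than any specific study's per-tool results:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Illustrative values only (not taken from any included study):
p, r = 0.90, 0.9624
print(f"F1 = {f1_score(p, r):.4f}")  # prints F1 = 0.9302
```

Because F1 is a harmonic mean, it is pulled toward the smaller of the two inputs, which is why tools with very low precision (the 0.2% extreme reported above) score poorly on F1 even when recall is high.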