Medical education in the age of artificial intelligence

  • Abstract
  • Literature Map
  • Similar Papers
Abstract
Translate article icon Translate Article Star icon
Take notes icon Take Notes

Abstract Background Artificial intelligence (AI) marks an inflection point in medical education systems built on scarcity of resources. These designs privilege standardisation and recall‐heavy examinations over reasoning and adaptive expertise, defined as the capacity to apply knowledge flexibly in uncertain clinical contexts, producing learners who memorise content but struggle with ambiguity, integration across domains and decision‐making under pressure. Objectives To outline a conceptual roadmap for integrating AI into medical education that strengthens adaptive expertise, productive struggle and assessment integrity rather than eroding them. Methods Conceptual analysis using educational, assessment and cognitive science frameworks to contrast scarcity‐era logics with emerging AI capabilities and synthesise illustrative use cases. Results We describe how AI can scaffold knowledge acquisition and inquiry; support authentic practice via virtual patients and educator‐created, AI‐enabled teaching tools and reshape assessment through blueprint‐aligned items and predictive learning analytics. We highlight AI's double‐edged nature: risks of undermining integrity, promoting cognitive deskilling and bypassing productive struggle, defined as purposeful, scaffolded difficulty that feels effortful yet achievable and that strengthens long‐term learning. We propose enabling conditions: trust, transparency, structured difficulty, and deliberate cognitive redistribution, defined as intentional reallocation of cognitive work between humans and AI tools, which offloads routine lower‐yield tasks to machines to preserve and advance human judgement, values, relationships and professional identity formation. Conclusions AI will either accelerate superficial shortcuts or amplify humane, expert practice, depending on how pedagogy, assessment and culture are redesigned. Intentional alignment can reclaim time and cognitive space for the uniquely human work at the heart of education.

Similar Papers
  • Research Article
  • Cite Count Icon 2
  • 10.37762/jgmds.11-4.625
Transforming Medical and Dental Curriculum in the era of Artificial Intelligence (AI)
  • Sep 30, 2024
  • Journal of Gandhara Medical and Dental Science
  • Brekhna Jamil

The dawn of artificial intelligence (AI) signifies a pivotal shift in medical and dental education. Integrating AI into the curriculum modernizes learning and equips future healthcare professionals with crucial tools for the 21st century. The COVID-19 pandemic revealed the limitations of conventional educational models, necessitating rapid adaptation to remote and online learning environments. This disruption expedited the transition to digital platforms, laying the foundation for further integration of technology, including AI, into medical education. What began as an emergency response has now become a permanent feature of the educational landscape, evolving from static textbooks to dynamic digital platforms that offer greater accessibility, inclusivity, and personalization of learning experiences.1 In the AI era, it is insufficient to merely digitize the curriculum; a comprehensive transformation is essential. The digital curriculum opens new avenues for interactive learning environments, simulation-based practices, and adaptive learning algorithms that respond to the individual needs of students. AI-driven tools such as virtual patient simulations, diagnostic decision-making platforms, and predictive analytics have the potential to revolutionize how medical students learn, practice, and apply their knowledge in clinical settings.2 These innovations allow for an enhanced learning experience where students can interact with realistic patient cases and make informed decisions, fostering a deeper understanding of clinical practice. One of the most promising applications of AI in medical education is its role as an educational partner. 
AI-powered platforms can function as personalized tutors, providing real-time feedback, adjusting learning modules based on student performance, and even predicting areas where additional support may be required.3 Adaptive learning systems can analyze the learner’s pace and comprehension, offering tailored resources to bridge knowledge gaps. This personalized approach to education ensures that no student is left behind, addressing one of the longstanding challenges of traditional, one-size-fits-all curricula. Additionally, AI can enhance clinical reasoning through simulation and data-driven case scenarios. By analyzing patterns in patient data, AI algorithms can help medical students gain deeper insights into complex clinical decision-making processes. This data-driven approach can significantly improve learners’ ability to diagnose and plan treatments, thereby improving clinical outcomes. While AI and digital tools offer substantial benefits, the role of educators remains essential in this new educational paradigm. Rather than replacing teachers, AI will augment their roles, allowing them to focus on mentorship, critical thinking, and the ethical dimensions of healthcare.4 Educators will need to reimagine their roles, becoming facilitators of learning who guide students in interpreting and applying AI-generated data in clinical settings. As AI takes on administrative tasks such as grading, educators can dedicate more time to meaningful interactions with students.5 However, this shift toward AI-driven curricula also requires significant investment in faculty development. Educators must be trained in the use of AI tools and possess a thorough understanding of their applications to ensure that AI is used responsibly and effectively in shaping future healthcare professionals. As AI becomes more integrated into medical education, addressing the ethical challenges associated with this technology becomes crucial. 
While AI-driven tools hold great promise, they must be designed and deployed with an acute awareness of biases, data privacy concerns, and the risk of over-reliance on algorithms in clinical decision-making.6 The digital curriculum must provide students with technical skills and a strong ethical foundation for AI use in healthcare. Students must be trained to critically evaluate AI outputs, understand their limitations, and ensure that human judgment remains central to patient care. Transforming medical curricula in the AI era is not without challenges. Digital divides, access to technology, and the initial cost of AI-driven platforms may pose barriers to widespread adoption. Institutions must ensure equitable access to resources for all students, regardless of their geographic or socioeconomic backgrounds. Moreover, regulatory bodies such as the Higher Education Commission (HEC) and the Pakistan Medical and Dental Council (PMDC) must revise standards to accommodate these technological advancements. In conclusion, the transformation of medical and dental curricula into a digital, AI-enhanced model represents not only a modernization of education but also a fundamental shift in preparing future healthcare professionals. By embracing AI as an educational partner, medical institutions can create personalized, data-driven learning environments that equip students with the skills and knowledge needed to thrive in an increasingly complex healthcare landscape. The integration of AI into the curriculum offers an opportunity to empower the next generation of doctors, enabling them to navigate future challenges with confidence and competence. Now is the time for this transformation, and it is a journey that we must embark on collectively to ensure the future of education, healthcare, and patient care.

  • Discussion
  • Cite Count Icon 6
  • 10.1016/j.ejmp.2021.05.008
Focus issue: Artificial intelligence in medical physics.
  • Mar 1, 2021
  • Physica Medica
  • F Zanca + 11 more

Focus issue: Artificial intelligence in medical physics.

  • Research Article
  • Cite Count Icon 34
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI bring that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). 
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language“” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). 
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of the generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues were “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). 
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). 
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w

  • Research Article
  • Cite Count Icon 1
  • 10.1097/acm.0000000000003294
Keck School of Medicine of the University of Southern California.
  • Aug 21, 2020
  • Academic medicine : journal of the Association of American Medical Colleges
  • Ron Ben-Ari + 2 more

Keck School of Medicine of the University of Southern California.

  • Research Article
  • Cite Count Icon 2
  • 10.47992/ijmts.2581.6012.0357
Challenges in Implementing AI Technology Smart Farming in Agricultural Sector – A Literature Review
  • Jun 30, 2024
  • International Journal of Management, Technology, and Social Sciences
  • Anusha S Rai A + 1 more

Background/Purpose: The agriculture sector is the backbone of every nation which contributes to the global economy. The implementation of technology in agriculture has brought revolutionary development in its outcome. Due to this, a drastic improvement in the global economy from the agricultural sector is expected. Moreover, the implementation of artificial intelligence (AI) improves the productivity of farmers giving solutions to various challenges faced by the farmers. The various AI tools that are developed for the agriculture sector include precision farming, predictive analytics, automated machinery, smart irrigation systems, crop and soil monitoring, supply chain optimization, weather forecasting, and livestock management. Adopting AI in agriculture faces several challenges despite its long-term benefits. The high upfront costs to be invested in implementing AI technology make it difficult for small-scale and developing farmers to invest in AI. Implementing the above technology needs technical skills, fast internet connectivity, and costlier equipment. Due to the lack of the above-mentioned requirements, the AI technologies that are meant for agriculture do not reach the farmers. This results in the wastage of resources for AI without the outcome. Considering the above issues an appropriate simplified model is proposed that facilitates the adaptation of the AI technology by small and medium-scale farmers in their agriculture to improve the performance. Objective: The objective of this paper is to review the various journals related to the implementation of AI in Agriculture and to study the various issues related to its implementation. It also aims at identifying the research gap which will help to develop a model suitable for the end like small-scale and medium-scale farmers. 
Design/Methodology/Approach: A systematic literature review was conducted by gathering and examining relevant literature from international and national journals, conferences, databases, and other resources accessed via Google Scholar and various search engines. Findings/Result: The agriculture sector, crucial to every nation's economy, has seen revolutionary advancements through technology, especially AI. AI tools like precision farming, predictive analytics, and smart irrigation promise to enhance productivity and address various agricultural challenges. However, high implementation costs, resistance to new technologies, and lack of necessary infrastructure hinder widespread adoption among small-scale and developing farmers. To overcome these obstacles, a model is proposed to effectively support farmers in adopting AI technologies to boost agricultural performance. Originality/Value: The implementation of AI and ML tools in agriculture from diverse sources is done. This area needs study due to recent challenges faced by small and medium-scale farmers in the implementation of AI and ML tools in agriculture. The information acquired will help to create a new model by improving the outcomes of the existing scenario. Paper Type: Literature Review.

  • Research Article
  • Cite Count Icon 34
  • 10.14196/mjiri.32.130
Professionalism and its role in the formation of medical professional identity.
  • Sep 30, 2018
  • Medical Journal of the Islamic Republic of Iran
  • Mina Forouzadeh + 2 more

Background: The honorable medical profession is on the verge of being reduced to a business. Evidence suggests that professionalism is fading and today's doctors are faced with value-threatening problems and gradually begin to forget their main commitment as medical professionals. Many of the problems faced by doctors are rooted in non-professionalism. Mere education in the science and practice of medicine produces an inefficient medical workforce and leads to the formation of a distorted professional identity. In the past decade, educational innovations targeting the formation of desirable professional identities have been presented and are considered a vital part of medical education for the development of professionalism. The present study was conducted to examine the relationship between the formation of professional identity and professionalism. Professionalism education is essential in the formation of a desirable professional identity. Methods: This review article was done in 2015 through searching databases, such as PubMed, Elsevier, Google Scholar, Ovid, SID, and IranMedex, using keywords of professionalism and professional identity, and medical education. Among the 55 found articles, 30 were assessed and selected for review. Results: The formation of professional identity is a process with the following domains: professionalism, and development of a personal (psychosocial) and a cultural identity, which is derived from the unification of professional, personal, and ethical development. The main components required for the formation of a desirable identity are, therefore, rooted in the dimensions of professionalism and professional development. The need for teaching professionalism has a reciprocal relationship with the formation of professional identity. Conclusion: There is a reciprocal relationship between formation of a desirable professional identity and development and strengthening of professionalism. 
Modern medical education should be designed to develop professional identity, and professionalism acts as an essential part of its curricula throughout the entire course of a doctor’s education, with the aim of acquiring a desirable professional identity

  • Research Article
  • Cite Count Icon 2
  • 10.62049/jkncu.v5i1.177
Effectiveness of Artificial Intelligence Tools in Teaching and Learning in Higher Education Institutions in Kenya
  • Dec 29, 2024
  • Journal of the Kenya National Commission for UNESCO
  • Audrey Matere

The purpose of this study was to evaluate the effectiveness of Artificial Intelligence (AI) tools in teaching and learning in higher education institutions in Kenya, specifically focusing on Intelligent Tutoring Systems (ITS), Adaptive Learning Platforms, Virtual Learning Assistants (VLAs), Automated Grading Systems and Learning Analytics Systems (LAS), their accessibility use and its effectiveness in teaching and learning. The study employed a mixed-methods research design, combining both quantitative and qualitative approaches, to gather comprehensive data from faculty members, students, and administrators across 15 selected public and private universities and technical colleges in Kenya. The findings indicated that the accessibility of AI tools in institutions of higher learning in Kenya is significantly limited. A large majority of respondents expressed that AI tools are not readily available, highlighting disparities in access across different departments and projects within institutions. In terms of usage, the integration of AI tools into teaching and learning practices is still in its early stages in most institutions and where they are available they are not always well-integrated with existing curricula, leading to limited and uneven adoption across different disciplines. Despite these challenges, those who have begun using AI tools have reported benefits such as personalized learning, more efficient assessment processes, and enhanced feedback mechanisms, indicating that AI has the potential to transform educational practices if more effectively utilized. Findings further established a significant correlation between AI tools and effective teaching and learning in institutions of higher learning in Kenya (r = .781; p = .000). The study noted that while AI can significantly improve the educational experience, its current impact is constrained by several factors. 
Faculty members' unfamiliarity with AI, the lack of comprehensive training, and the inadequate integration of AI tools into the curriculum are major barriers to their effective use. However, where AI has been successfully implemented, it has contributed to better learning outcomes, higher student engagement, and more personalized feedback. The study recommended that institutions must invest in infrastructure, ongoing professional development, and curriculum integration, ensuring that AI tools are both accessible and effectively used to enhance teaching and learning outcomes.

  • Research Article
  • 10.52711/0974-360x.2025.00358
The Role of Artificial intelligence in Medical training, with its Applicability, Efficiency, Potential, and Challenges among Preclinical Students
  • Jun 12, 2025
  • Research Journal of Pharmacy and Technology
  • Tin Moe Nwe + 7 more

Introduction: Artificial intelligence (AI) is revolutionizing medical education by enhancing learning experiences, improving knowledge retention, and providing personalized guidance. Through interactive simulations, virtual tutors, and adaptive learning platforms, AI-powered solutions help students in preclinical education grasp difficult ideas. Thus this study aims to examine the impact of AI on preclinical medical education students' academic performance, engagement, and readiness for clinical training among preclinical medical students, examining its applicability, efficacy, potential, and limitations. Which is done by outlining the objectives and evaluating the effectiveness, opportunities, and challenges of integrating artificial intelligence (AI) tools in medical education among Year 1 and Year 2 MBBS students in a private medical university, Malaysia. Methodology: The study involved 300 sample population, including 152 Year 1 and 148 Year 2 students. A cross-sectional questionnaire-based study was conducted, with a sample size of 184 with a 95% confidence level. Data were collected through online surveys and analysis using Microsoft Excel and IBM SPSS version 23.0.Result: All 184 preclinical students used AI tools in their medical education, mainly relying on ChatGPT. About 84.2% are familiar with AI in this context. The effectiveness of AI in improving learning was rated from 1 to 3, with most students scoring AI as 4 or 5 in problem-solving, decision-making, critical thinking, and inspiring new ideas, indicating a high perception of its effectiveness. Many believe AI supports traditional teaching. However, concerns exist about over-reliance on technology (83.2%) and loss of critical thinking skills (77.7%). Also, 42.9% rated their worries about AI's impact on clinical decision-making skills as a 3. Conclusion: Most preclinical students know about AI in medical education and believe it helps improve learning. 
AI assists students in solving problems, making decisions, encouraging critical thinking, and generating new ideas. However, concerns felt about much dependence on technology and weaken critical thinking skills in medical education. Students believe that AI will not entirely harm clinical decision-making skills. In summary, AI offers both advantages and disadvantages in medical education.

  • Research Article
  • Cite Count Icon 30
  • 10.1016/j.ejmp.2021.03.015
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
  • Mar 1, 2021
  • Physica Medica
  • Nico Buls + 4 more

Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.

  • Research Article
  • Cite Count Icon 1
  • 10.12688/mep.20554.1
Utilisation of ChatGPT and other Artificial Intelligence tools among medical faculty in Uganda: a cross-sectional study
  • Oct 23, 2024
  • MedEdPublish
  • David Mukunya + 18 more

Background ChatGPT is an open-source large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty, majority [75% (167/224)] were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR):0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.

  • Research Article
  • 10.7897/2277-4343.164127
USE OF ARTIFICIAL INTELLIGENCE FOR ACADEMIC PURPOSES AMONG UNDERGRADUATE STUDENTS: A PILOT STUDY
  • Aug 31, 2025
  • International Journal of Research in Ayurveda and Pharmacy
  • Rachna Hada + 3 more

Purpose: Artificial Intelligence (AI) is rapidly transforming various sectors, including education, and significantly impacting students' academic experiences from school to college. This study explored how BAMS undergraduate students perceive the use of AI and AI-based tools for educational purposes and assessed their prakriti. Method: This cross-sectional observational study was conducted among undergraduate BAMS students at the Pt. Khushilal Sharma Govt. Ayurveda College and Institute campus, Bhopal, Madhya Pradesh, Central India, and enrolled 200 undergraduate students aged 18 to 30. The collected data were analyzed using IBM SPSS Statistics Version 29.0.2.0 (20), with descriptive statistics (frequency, percentages, mean, standard deviation, etc.) as appropriate and inferential statistics (reliability, validity, and regression). Result: Among the 200 undergraduate students, 33% were male and 67% were female, and the mean age was 23.3 ± 1.92 (SD) years. 40% of students belonged to Vatapittaja, 34% to Pittakaphaja, 22% to Kaphavataja, and 4% to Vatapittakaphaja prakriti. The findings indicated that 78.5% of students were familiar with AI concepts. ChatGPT, Grammarly, and Google Assistant emerged as the most frequently used AI tools, although only 6.5% of students used AI tools daily for academic purposes. Most students were satisfied with using AI tools for study purposes. Conclusion: The surveyed students had little formal or informal experience with AI tools and limited awareness of AI's potential applications in education. However, they generally held positive opinions about ChatGPT, Grammarly, and AI, and were hopeful about AI's future role in medical education and health care.

  • Research Article
  • 10.2478/ctra-2025-0007
Creative Integration of Artificial Intelligence (AI) Tools in Effective Teaching
  • Jan 1, 2025
  • Creativity. Theories – Research - Applications
  • Mia Filipov + 2 more

The expansion of artificial intelligence (AI) tools has brought about new opportunities and challenges for teachers and students. These tools have the potential to reshape teaching and stimulate both students' and teachers' creativity. In 21st-century education, creativity emerges as a key skill that encompasses problem-solving, innovation, adaptability, critical thinking, and cognitive development. AI tools also provide personalized assistance and feedback as well as customized study materials. Moreover, they have proven beneficial in cultivating critical thinking and enhancing students' research skills. Instead of questioning teachers' preparedness for AI technologies, the focus should be on discovering ways to effectively and creatively integrate these tools into the classroom. This paper explores the possibilities of implementing generative AI tools to promote students' creativity, thus enhancing the overall quality of teaching. In the Croatian educational system, as in Poland, school pedagogues should encourage positive changes within the school culture. Therefore, this paper also underscores the role of school pedagogues in bridging the gap between teachers and AI tools as an educational innovation. School pedagogues should be instrumental in supporting teachers during the integration of AI tools into their teaching by showcasing practical applications and emphasizing potential benefits for student engagement and learning outcomes. In this capacity, school pedagogues bear the responsibility of fostering a reflective and critical approach towards AI tools, advocating creative yet responsible use of technology in the classroom.

  • Research Article
  • 10.34190/ecie.19.1.2468
Exploring the potential of AI to increase productivity in small marketing teams
  • Sep 20, 2024
  • European Conference on Innovation and Entrepreneurship
  • Aniko Szenftner + 2 more

Marketing scientists and practitioners alike believe that artificial intelligence (AI) holds the promise of productivity gains for organizations, yet there has been little scientific research into these claims. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools, and this effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Of the six marketing disciplines analyzed closely, SEO/content and design demonstrated the most significant productivity gains, both with generative AI (GAI) tools the team already subscribed to, such as ChatGPT 4 and Canva, and with new AI solutions. While an AI tool's level of integration showed only a weak positive impact on productivity, future studies are suggested to investigate this variable further by comparing the effects of less advanced but more accessible tools, such as generative AI, with highly advanced but less accessible business AI. Having navigated the vast and dynamic landscape of AI tools, the insights further emphasize the importance of AI experience sharing and informed decision-making, including knowledge of one's own user rights and staying updated on AI advancements. Zooming out from the process level, the work's literature review also highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities. Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.
