Artificial intelligence (AI) is making inroads into higher medical education, yet its adoption remains limited. A PRISMA-S search of the Web of Science database from 2020 to 2024, using the search terms “artificial intelligence,” “medicine,” “education,” and “ethics,” reveals this trend. Four key areas of AI application in medical education are examined for their potential benefits: educational support (such as personalized distance education), radiology (diagnostics), virtual reality (VR) (visualization and simulations), and generative text engines (GenText), such as ChatGPT (from the production of notes to syllabus design). Significant ethical risks accompany AI adoption, however, and specific concerns are linked to each of these four areas. Although AI is recognized as an important support tool in medical education, its slow integration hampers learning and diminishes student motivation, as the challenges of implementing VR illustrate. In radiology, data-intensive training is hindered by poor connectivity, which particularly affects learners in developing countries. Ethical risks, such as bias in datasets (whether intentional or unintentional), need to be highlighted within educational programs: students must be informed of the possible motivations, whether social, political, or commercial, behind the introduction of bias into datasets. Finally, the ethical risks accompanying the use of GenText are discussed, ranging from students' reliance on instant text generation for assignments, which can hinder the development of critical thinking skills, to the danger of relying on AI-generated learning and treatment plans without sufficient human moderation.