Inverted Maxillary Third Molar Impaction: Exploring Capabilities of Artificial Intelligence (AI) Versus Human Intelligence (HI) Expertise in Diagnosis and Treatment Planning

Abstract

Introduction: The third molar is among the most frequently impacted teeth in the oral cavity, and inverted impaction in the maxillary region is rare. This rarity poses unique challenges in diagnosis, treatment planning, and surgery, with potential complications such as sinusitis or infection. The current literature is divided between conservative and surgical management, lacks comprehensive guidelines, and has only begun to explore the role of AI-assisted tools. This study addresses that gap by evaluating the diagnostic accuracy of AI tools, particularly ChatGPT, against human specialists in Oral and Maxillofacial Surgery (OMFS). Given the growing role of AI in medicine, this research aims to provide insight into the potential of AI to enhance diagnosis and treatment planning for rare cases, emphasizing collaboration between AI systems and medical professionals.

Objectives:
  • Evaluate the diagnostic accuracy of AI tools (ChatGPT) against diagnoses generated by a human specialist (OMFS) in dental cases.
  • Assess the efficiency and reliability of AI-assisted treatment plans in contrast to those generated by dental professionals.
  • Compare the performance and features of the paid and free versions of the AI programs utilized.

Materials and Methods: This study centered on the unique case of a 59-year-old woman at Thumbay Dental Hospital who presented with a faulty dental bridge and a history of managed hypertension. An orthopantomogram revealed an inverted impacted maxillary third molar (Figure 1). The patient exhibited no direct symptoms from this impaction. Cone-beam computed tomography (CBCT) was performed for detailed analysis, complete prosthetic rehabilitation planning, and academic purposes (Figure 2). All available data, including the history, clinical examination, and radiographic findings, were provided to a specialist and to AI tools (ChatGPT version 3 and ChatGPT version 4) to obtain a diagnosis and treatment plan for this unusual case of an impacted third molar.
Data collection comprised clinical examinations, imaging, and AI outputs, focusing on the accuracy of the diagnostic and treatment plans. The study also assessed AI's adaptability, cultural sensitivity, and practicality in clinical settings, aiming to gauge the potential of AI tools to enhance dental diagnostics and treatment planning alongside human expertise. AI tools, including ChatGPT and its advanced versions, were employed to generate diagnostic assessments and treatment plans for comparison against those created by dental professionals.

Results: In a rare dental case involving a 59-year-old woman with a faulty dental bridge and managed hypertension, specialists at Thumbay Dental Hospital identified functional issues and an inverted impacted maxillary third molar using an orthopantomogram and cone-beam computed tomography. Collaborating with oral and maxillofacial surgeons, they formulated a comprehensive treatment plan for complete oral rehabilitation, considering age, anatomical complexity, and medical history, and offering two options for the impacted third molar. The AI-generated diagnoses and treatment plans from ChatGPT versions 3 and 4 were then explored. ChatGPT-3 provided a detailed plan for bridge replacement, including a specialized segment on managing the impacted third molar. ChatGPT-4 crafted a comprehensive plan starting with an initial consultation and encompassing diagnostic procedures, discussion of bridge replacement options, preparation, fabrication, fitting, and post-procedure care. The plan addressed both the missing teeth and the impacted tooth, highlighting adaptability to individual needs. However, ChatGPT-4 emphasized its inability to provide medical diagnoses, stressing the importance of professional evaluation. In summary, the study compares human-generated and AI-generated diagnoses and treatment plans.
The human-generated plan prioritizes collaboration and comprehensive care, while the AI-generated plans from ChatGPT versions 3 and 4 demonstrate detailed and adaptable approaches; ChatGPT-4 underscores the need for professional evaluation. The research sheds light on the potential roles of human and AI expertise in dental diagnostics and treatment planning, emphasizing the importance of collaboration for optimal patient care.

Conclusion: This study highlights the collaborative potential of AI and human intelligence in handling intricate dental cases, such as inverted maxillary third molar impaction. While AI tools like ChatGPT can create detailed treatment plans, their inability to replicate nuanced clinical judgment underscores the vital role of human oversight, particularly in specialized fields like Oral and Maxillofacial Surgery. The results are consistent with existing research, positioning AI as a supplement to, rather than a substitute for, human expertise in healthcare. The ongoing integration of AI with human medical practice shows promise for improving diagnostic accuracy and treatment effectiveness in dental healthcare.
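The Methods describe supplying the history, clinical examination, and radiographic findings to ChatGPT as free text. As a minimal sketch of how such case data might be structured into a chat-style prompt, the field names, prompt wording, and the commented-out API call are illustrative assumptions on our part, not the study's actual protocol:

```python
# Hypothetical sketch: assembling a clinical case summary into chat-style
# messages for an LLM. Field names and wording are illustrative only.

def build_case_prompt(case: dict) -> list:
    """Assemble a message list from structured case data."""
    summary = (
        f"Patient: {case['age']}-year-old {case['sex']}. "
        f"History: {case['history']}. "
        f"Clinical findings: {case['clinical']}. "
        f"Radiographic findings: {case['radiographic']}."
    )
    return [
        {"role": "system",
         "content": "You are assisting with dental diagnosis and treatment "
                    "planning. Provide a provisional diagnosis and a staged "
                    "treatment plan, flagging anything that needs specialist "
                    "review."},
        {"role": "user", "content": summary},
    ]

case = {
    "age": 59,
    "sex": "female",
    "history": "managed hypertension; faulty dental bridge",
    "clinical": "no direct symptoms from the impaction",
    "radiographic": ("orthopantomogram and CBCT show an inverted impacted "
                     "maxillary third molar"),
}

messages = build_case_prompt(case)

# The actual model call would go here, e.g. with the OpenAI client:
#   response = client.chat.completions.create(model="gpt-4", messages=messages)
# It is omitted so the sketch runs without credentials or network access.
print(messages[1]["content"])
```

A structured summary like this keeps the prompt reproducible across model versions, which matters when comparing outputs from different ChatGPT releases as the study does.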

Similar Papers
  • Discussion
  • Cited by 6
  • 10.1016/j.ejmp.2021.05.008
Focus issue: Artificial intelligence in medical physics.
  • Mar 1, 2021
  • Physica Medica
  • F Zanca + 11 more

  • Research Article
  • Cited by 37
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w

  • Research Article
  • Cited by 30
  • 10.1016/j.ejmp.2021.03.015
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
  • Mar 1, 2021
  • Physica Medica
  • Nico Buls + 4 more

  • Research Article
  • 10.1177/2993091x251369836
Ethics and Law: How Artificial Intelligence May Shape Cancer Research During Pregnancy
  • Aug 1, 2025
  • AI in Precision Oncology
  • Alma Linkeviciute + 2 more

Background: Historically, pregnant patients have not been included in clinical research, as the risk of damaging the healthy development of the fetus was considered too high. This practice is changing, and the applications of artificial intelligence (AI) to pregnancy research offer a wide range of solutions that are expected to help close the knowledge gap regarding how best to treat pregnant patients with cancer. The AI tools currently available offer support in screening, diagnosing, treatment planning, and patient counselling. These AI tools, however, were not developed by clinicians or medical researchers; thus, close collaboration between clinicians and the developers of AI tools is urgently needed. Methods: Many of the questions that emerge when AI is used to better approach a pregnant patient with cancer are not technological. Instead, these questions concern value conflicts that cannot be solved by technological tools alone and where wider public debate and agreement are required. How should AI tools consider the developing fetus and its pregnant mother? Should both the pregnant person and the developing fetus be regarded as patients or research participants? What kind of relationship between the two should be incorporated into AI algorithms? This article provides an in-depth interdisciplinary analysis of the above questions and concludes with recommendations for the immediate future.
Results: Key recommendations include (1) adhering to existing biomedical ethics principles of respect for patient autonomy, including the relational context; the balance of maternal and fetal beneficence; protection of the vulnerable; and reasonable resource allocation in the given circumstances; (2) when recruiting pregnant patients to research studies, focusing on building pregnant patients’ trust in clinical research and on enhancing pregnant patients’ knowledge so that they feel able to understand and adhere to clinical research requirements; (3) various AI tools can help health care professionals and researchers to plan clinical studies and to create patient- and clinician-directed educational and decision-making tools, while also making them more accessible. Conclusions: Wider cross-disciplinary debate is still needed in order to establish how AI systems and tools for cancer treatment during pregnancy and pregnancy research in general should regard pregnant patients and their developing fetuses, especially where moral status questions are concerned.

  • Abstract
  • Cited by 1
  • 10.1016/j.ijrobp.2022.07.401
Commissioning of an Artificial Intelligence (AI) Tool for Automated Head and Neck Intensity Modulated Radiation Therapy (IMRT) Treatment Planning
  • Oct 22, 2022
  • International Journal of Radiation Oncology*Biology*Physics
  • X Li + 12 more

  • Research Article
  • 10.5455/mjhs.2026.01.018
Artificial Intelligence in Pathology: Bridging the Gap Between Technology and Diagnostics
  • Jan 1, 2026
  • Majmaah Journal of Health Sciences
  • Sneha Chavarkar

In recent years, there has been a considerable increase in the development and application of Artificial Intelligence (AI) tools in pathology. In the current era of precision medicine, computational pathology (CPATH) and AI tools will drastically transform pathology services. Despite tremendous success in the development of AI tools, there exists a wide gap between the research and its clinical application in pathology. This article explores how Artificial Intelligence (AI) revolutionizes pathology by bridging the gap between traditional diagnostic methods and advanced technological innovations. It seeks to highlight the potential of AI to enhance diagnostic accuracy, efficiency, and precision while addressing the challenges and opportunities for integrating AI into modern pathology practices. The author conducted a PubMed search for articles published between January 1990 and August 2024. Terms like “AI artificial intelligence,” “deep learning,” and “machine learning” were searched using MeSH (Medical Subject Headings). Currently, AI plays a supportive role in pathology. It can also help to prognosticate malignancies. Supportive data indicates that with the aid of AI, pathologists can reach diagnosis swiftly and with more accuracy. We concluded that by bridging the gap between traditional diagnostic practices and cutting-edge technology, AI will help pathologists contribute more efficiently to precision medicine. While challenges like data quality, regulatory compliance, and ethical concerns remain, the future of pathology depends on seamless collaboration between AI tools and human expertise.

  • Research Article
  • Cited by 7
  • 10.3389/fradi.2023.1112841
How should studies using AI be reported? lessons from a systematic review in cardiac MRI.
  • Jan 30, 2023
  • Frontiers in Radiology
  • Ahmed Maiter + 3 more

Recent years have seen a dramatic increase in studies presenting artificial intelligence (AI) tools for cardiac imaging. Amongst these are AI tools that undertake segmentation of structures on cardiac MRI (CMR), an essential step in obtaining clinically relevant functional information. The quality of reporting of these studies carries significant implications for advancement of the field and the translation of AI tools to clinical practice. We recently undertook a systematic review to evaluate the quality of reporting of studies presenting automated approaches to segmentation in cardiac MRI (Alabed et al. 2022, Quality of reporting in AI cardiac MRI segmentation studies: a systematic review and recommendations for future studies. Frontiers in Cardiovascular Medicine 9:956811). 209 studies were assessed for compliance with the Checklist for AI in Medical Imaging (CLAIM), a framework for reporting. We found variable, and sometimes poor, quality of reporting and identified significant and frequently missing information in publications. Compliance with CLAIM was high for descriptions of models (100%, IQR 80%-100%), but lower than expected for descriptions of study design (71%, IQR 63%-86%), datasets used in training and testing (63%, IQR 50%-67%), and model performance (60%, IQR 50%-70%). Here, we present a summary of our key findings, aimed at general readers who may not be experts in AI, and use them as a framework to discuss the factors determining quality of reporting, making recommendations for improving the reporting of research in this field. We aim to assist researchers in presenting their work and readers in their appraisal of evidence. Finally, we emphasise the need for close scrutiny of studies presenting AI tools, even in the face of the excitement surrounding AI in cardiac imaging.

  • Research Article
  • 10.34190/ecie.19.1.2468
Exploring the potential of AI to increase productivity in small marketing teams
  • Sep 20, 2024
  • European Conference on Innovation and Entrepreneurship
  • Aniko Szenftner + 2 more

Marketing scientists as well as practitioners believe that artificial intelligence (AI) holds the promise of productivity gains for organizations. However, there has been little scientific research into these theories. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools. This effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Of the six marketing disciplines analyzed closely, SEO/content and design in particular demonstrated significant productivity gains, both from generative AI (GAI) tools the team already subscribed to, such as ChatGPT 4 and Canva, and from new AI solutions. While an AI tool’s level of integration showed only a weak positive productivity impact, future studies are suggested to investigate this variable further by comparing the effects of less advanced but more accessible tools, like generative AI, versus highly advanced but less accessible business AI. Having navigated the vast and dynamic landscape of AI tools, the insights further emphasize the importance of AI experience sharing and informed decision-making, implying knowledge of one's own user rights and staying updated on AI advancements. Zooming out from the process level, the work's literature review further highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities.
Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.

  • Research Article
  • 10.1136/bmjopen-2025-099921
The introduction and adoption of artificial intelligence in systematic literature reviews: a discrete choice experiment.
  • Oct 15, 2025
  • BMJ open
  • Seye Abogunrin + 6 more

Systematic literature reviews (SLRs) are essential for synthesising research evidence and guiding informed decision-making. However, SLRs require significant resources and substantial efforts in terms of workload. The introduction of artificial intelligence (AI) tools can reduce this workload. This study aims to investigate preferences in SLR screening, focusing on trade-offs related to tool attributes. A discrete choice experiment (DCE) was performed in which participants completed 13 or 14 choice tasks featuring AI tools with varying attributes. Data were collected via an online survey, where participants provided background on their education and experience. Professionals who had published SLRs registered on PubMed, or who were affiliated with a recent Health Economics and Outcomes Research conference, were included as participants. Participants considered the use of a hypothetical AI tool in SLRs with different attributes. Key attributes for AI tools were identified through a literature review and expert consultations. These attributes included the AI tool's role in screening, required user proficiency, sensitivity, workload reduction, and the investment needed for training. The main outcome was the participants' adoption of the AI tool, that is, the likelihood of preferring the AI tool in the choice experiment, considering different configurations of attribute levels, as captured through the DCE choice tasks. Statistical analysis was performed using a conditional multinomial logit model. An additional analysis included demographic characteristics (such as education, experience with SLR publication, and familiarity with AI) as interaction variables. The study received responses from 187 participants with diverse experience in performing SLRs and AI use. Familiarity with AI was generally low, with 55.6% of participants being (very) unfamiliar with AI. In contrast, intermediate proficiency with AI tools was positively associated with adoption (p=0.030).
Similarly, workload reduction is also strongly linked to adoption (p<0.001). Interestingly, if expert proficiency is needed for the AI, authors with more scientific experience in their profession are less likely to adopt AI (p=0.009). However, more experience specifically with SLR publications increases AI adoption likelihood (p=0.001). The findings suggest that workload reduction is not the only consideration for SLR reviewers when using AI tools. The key to AI adoption in SLRs is creating reliable, workload-reducing tools that assist rather than replace human reviewers, with moderate proficiency requirements and high sensitivity.

  • Abstract
  • Cited by 1
  • 10.1016/j.ijrobp.2022.07.389
Prospective Clinical Integration of AI Based Treatment Planning Tool for Whole Breast Radiation Therapy (WBRT): A Single Institution's Three-Year Experience
  • Oct 22, 2022
  • International Journal of Radiation Oncology*Biology*Physics
  • D Yang + 9 more

  • Research Article
  • Cited by 6
  • 10.1108/lhtn-08-2024-0131
Artificial intelligence (AI) tools for academic research
  • Sep 17, 2024
  • Library Hi Tech News
  • Adetoun A Oyelude

Purpose: The purpose of the paper is to explore the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process. AI tools are transforming academic research, offering numerous benefits and challenges. Design/methodology/approach: Academic research is undergoing a significant transformation with the emergence of AI tools. These tools have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. An overview of AI applications in literature review, data analysis, writing and proofreading is given, discussing their benefits and limitations. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used in various stages of the research process. AI was used in some of the searches for AI applications in use. Findings: The analysis reveals that AI tools can enhance research efficiency, accuracy and quality, but also raise important ethical and methodological considerations. AI tools have the potential to significantly enhance academic research, but their adoption requires careful consideration of methodological and ethical implications. The integration of AI tools also raises questions about authorship, accountability and the role of human researchers. The authors conclude by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption. Originality/value: As AI continues to evolve, it is essential for researchers, institutions and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the contribution of the paper to knowledge.

  • Research Article
  • 10.14444/8778
Artificial Intelligence: The Prevalent Coauthor Among Early-Career Surgeons.
  • Jul 14, 2025
  • International journal of spine surgery
  • Franziska C S Altorfer + 3 more

  • Front Matter
  • Cited by 11
  • 10.1016/j.jval.2021.12.009
The Value of Artificial Intelligence for Healthcare Decision Making—Lessons Learned
  • Jan 31, 2022
  • Value in Health
  • Danielle Whicher + 1 more

  • Research Article
  • 10.2196/76130
The Phases of Living Evidence Synthesis Using AI: Living Evidence Synthesis (Version 1)
  • Jan 27, 2026
  • Journal of Medical Internet Research
  • Xuping Song + 14 more

Background: Living evidence (LE) synthesis refers to the method of continuously updating systematic evidence reviews to incorporate new evidence. It has emerged to address the limitations of the traditional systematic review process, particularly the absence of or delays in publication updates. The emergence of COVID-19 accelerated progress in the field of LE synthesis, and the applications of artificial intelligence (AI) in LE synthesis are currently expanding rapidly. However, in which phases of LE synthesis AI should be used remains an unanswered question. Objective: This study aims to (1) document the phases of LE synthesis where AI is used and (2) investigate whether AI improves the efficiency, accuracy, or utility of LE synthesis. Methods: We searched Web of Science, PubMed, the Cochrane Library, Epistemonikos, the Campbell Library, IEEE Xplore, medRxiv, the COVID-19 Evidence Network to support Decision-making, and the McMaster Health Forum. We used Covidence to facilitate the monthly screening and extraction processes to maintain the LE synthesis process. Studies that used or developed AI or semiautomated tools in the phases of LE synthesis were included. Results: A total of 24 studies were included: 17 on LE syntheses, 4 of which involved tool development, and 7 on living meta-analyses, 3 of which involved tool development. First, a total of 34 AI or semiautomated tools were involved, comprising 12 AI tools and 22 semiautomated tools. The most frequently used AI or semiautomated tools were machine learning classifiers (n=5) and the Living Interactive Evidence synthesis platform (n=3). Second, 20 AI or semiautomated tools were used for the data extraction or collection and risk of bias assessment phase, and only 1 AI tool was used for the publication update phase. Third, 3 studies demonstrated improvements in efficiency based on time, workload, and conflict rate metrics.
Nine studies applied AI or semiautomated tools in LE synthesis, obtaining a mean recall rate of 96.24%, and 6 studies achieved a mean F1-score of 92.17%. Additionally, 8 studies reported precision values ranging from 0.2% to 100%.ConclusionsAI and semiautomated tools primarily facilitate data extraction or collection and risk of bias assessment. The use of AI or semiautomated tools in LE synthesis improves efficiency, leading to high accuracy, recall, and F1-scores, while precision varies across tools.
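The recall, precision, and F1-score figures reported above are standard screening-performance measures derived from confusion counts. As a minimal sketch (using hypothetical counts for illustration, not numbers from the review), the three metrics can be computed like this; note how recall can be high while precision stays low, mirroring the wide precision range the review reports:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1-score from confusion counts.

    tp: relevant records the tool kept (true positives)
    fp: irrelevant records it kept (false positives)
    fn: relevant records it missed (false negatives)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical screening run: 96 of 100 relevant abstracts retrieved,
# alongside 300 irrelevant ones (illustrative numbers only).
p, r, f1 = precision_recall_f1(tp=96, fp=300, fn=4)
print(f"precision={p:.2%} recall={r:.2%} F1={f1:.2%}")
# → precision=24.24% recall=96.00% F1=38.71%
```

For screening tasks, high recall is usually prioritised so that relevant studies are not missed, even at the cost of low precision (more records to screen manually).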

  • Research Article
  • Cited by: 7
  • DOI: 10.1111/bjet.13562
Aligning and comparing values of ChatGPT and human as learning facilitators: A value‐sensitive design approach
  • Jan 15, 2025
  • British Journal of Educational Technology
  • Yuan Shen + 10 more

Ethical considerations have become a central topic in education since artificial intelligence (AI) brought both great innovation and challenges to educational practices and systems. Values influence what we believe is morally right and guide how we behave ethically in different situations. However, there is limited empirical research on improving the alignment between the values embedded in technology and the values prioritised by learners. Using the approach of value-sensitive design (VSD), this study conducted an empirical investigation to explore (1) how the ethical values of learners regarding facilitators were characterised in the online learning environment, (2) how specific features of ChatGPT and human experts as online learning facilitators embody these values, and (3) what value tensions occur in the online learning environment. To address these research questions, we designed a comparative experiment on online writing and revision facilitated by ChatGPT-4 and a human expert, and conducted semi-structured interviews with 59 learners about their learning experiences and feelings after the experiment. The results showed that learners prioritised the values of responsiveness, social comfort, autonomy, freedom from bias and privacy during online learning. Compared with the human expert, ChatGPT as a facilitator presented features of tirelessness, friendliness and support for independent decision-making in embodying the values of social comfort and autonomy. However, ChatGPT struggled to interpret learners' intentions and emotions and posed risks of information leakage, thereby falling short in embodying the values of responsiveness and privacy. Value tensions arose both within learner groups and between learners and other stakeholders, including developers and researchers. These tensions emerged from conflicting ethical values and pragmatic considerations in the online learning environment.
Our findings highlight the importance of enhancing value alignment in online learning environments. Strategies for achieving this include developing value-sensitive AI, leveraging the strengths of AI tools in embodying specific values, and expanding VSD methodology across AI's entire life cycle.

Practitioner notes

What is already known about this topic
  • Using ChatGPT as an online learning facilitator has been demonstrated to have various advantages, but its use also brings ethical challenges, particularly in aligning its features with the values of learners.
  • Value-sensitive design (VSD) helps improve value alignment by embedding the values of stakeholders into the technology design.
  • However, the values of learners regarding facilitators in online learning environments remain underinvestigated.

What this paper adds
  • We conducted a comparative experiment to investigate the value characteristics of learners, compare the embodied features of AI and human experts, and identify potential value tensions.
  • Learners prioritised the values of responsiveness, social comfort, autonomy, freedom from bias and privacy in the online learning environment.
  • ChatGPT showed advantages in embodying specific values compared with the human expert, but value tensions and misalignment still emerged during online learning.
  • Value tensions arose not only within learner groups but also between learners and other stakeholders, such as developers and researchers.

Implications for practice and/or policy
  • Educational technology developers should embed stakeholders' values in AI tools to enhance value alignment and seek a balance between those values and the values of learners.
  • Educators should actively utilise AI as a powerful tool and maximise its advantages in embodying specific values.
  • Researchers should consider expanding VSD methods to the entire life cycle of AI tools to accommodate value dynamism.
