Advancing patient education in PRRT through large language models: challenges and potential.

  • Abstract
  • Literature Map
  • References
  • Similar Papers
Abstract

The increasing use of artificial intelligence (AI) chatbots for patient education raises questions about their accuracy, readability, and conciseness in delivering medical information. This study evaluates the performance of ChatGPT 4o and DeepSeek V3 in answering common patient inquiries about Peptide Receptor Radionuclide Therapy (PRRT). Twelve frequently asked patient questions regarding PRRT were submitted to both chatbots. The responses were assessed by nine professionals using a blinded survey, scoring accuracy, conciseness, and readability on a five-point scale. Statistical analyses included the Mann-Whitney U test for nonparametric data and the Chi-square test for medically incorrect responses. A total of 324 individual assessments were conducted. No significant differences were found in accuracy between ChatGPT 4o (mean 4.43) and DeepSeek V3 (mean 4.56; P = 0.0909) or in readability between ChatGPT 4o (mean 4.38) and DeepSeek V3 (mean 4.25; P = 0.1236). However, ChatGPT 4o provided significantly more concise responses (mean 4.55) than DeepSeek V3 (mean 4.24; P = 0.0013). Medically incorrect information, defined as an accuracy score ≤ 3, was present in 7-8% of chatbot responses, with no significant difference between the two models (P = 0.8005). Both AI chatbots demonstrated strong performance in providing medical information on PRRT, with ChatGPT 4o excelling in conciseness. However, the presence of medical inaccuracies highlights the need for physician oversight when using AI chatbots for patient education. Future research should explore methods to enhance AI reliability and personalization in clinical communication.
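The two tests named in the abstract can be sketched in plain Python. This is an illustrative implementation, not the authors' code: the Mann-Whitney U test uses the standard normal approximation (average ranks for ties, no tie correction, so p-values are approximate for heavily tied five-point Likert scores), and the chi-square test uses the 1-degree-of-freedom survival function with no continuity correction. Any scores or counts passed in are hypothetical stand-ins, not the study's data.

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Ties receive average ranks; no tie correction is applied."""
    combined = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0  # average of 1-based ranks i+1..j
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    n1, n2 = len(a), len(b)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    # Two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u1, p

def chi_square_2x2(table):
    """Pearson chi-square test (1 df, no continuity correction) on a
    2x2 table [[incorrect_A, correct_A], [incorrect_B, correct_B]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square(1 df) survival function
    return chi2, p
```

With hypothetical per-response accuracy scores for the two models, `mann_whitney_u(scores_a, scores_b)` returns the U statistic and approximate two-sided p-value, and `chi_square_2x2` compares counts of incorrect vs. correct responses between models.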

References (showing 10 of 11 papers)
  • Benjamin D Douglas et al. Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLoS ONE, Mar 14, 2023. doi:10.1371/journal.pone.0279720. Cited by 619.
  • Peter Lee et al. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. The New England Journal of Medicine, Mar 30, 2023. doi:10.1056/nejmsr2214184. Cited by 1091.
  • James R Janopaul-Naylor et al. Physician Assessment of ChatGPT and Bing Answers to American Cancer Society's Questions to Ask About Your Cancer. American Journal of Clinical Oncology, Oct 12, 2023. doi:10.1097/coc.0000000000001050. Cited by 16.
  • Rachel S Goodman et al. Accuracy and Reliability of Chatbot Responses to Physician Questions. JAMA Network Open, Oct 2, 2023. doi:10.1001/jamanetworkopen.2023.36483. Cited by 272.
  • Anas Alhur. Redefining Healthcare With Artificial Intelligence (AI): The Contributions of ChatGPT, Gemini, and Co-pilot. Cureus, Apr 7, 2024. doi:10.7759/cureus.57795. Cited by 35.
  • Chiranjib Chakraborty et al. Overview of Chatbots with special emphasis on artificial intelligence-enabled ChatGPT in medical science. Frontiers in Artificial Intelligence, Oct 31, 2023. doi:10.3389/frai.2023.1237704. Cited by 69.
  • Gokce Belge Bilgin et al. Performance of ChatGPT-4 and Bard chatbots in responding to common patient questions on prostate cancer 177Lu-PSMA-617 therapy. Frontiers in Oncology, Jul 12, 2024. doi:10.3389/fonc.2024.1386718. Cited by 7.
  • Sai Anirudh Athaluri et al. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Cureus, Apr 11, 2023. doi:10.7759/cureus.37432. Cited by 183.
  • Amir Ali Rahsepar et al. How AI Responds to Common Lung Cancer Questions: ChatGPT versus Google Bard. Radiology, Jun 1, 2023. doi:10.1148/radiol.230922. Cited by 211.
  • Martina Di Franco et al. Radionuclide Theranostics in Neuroendocrine Neoplasms: An Update. Current Oncology Reports, Jan 1, 2024. doi:10.1007/s11912-024-01526-5. Cited by 9.

Similar Papers
  • Research Article
  • 10.7759/cureus.84124
Assessing the Accuracy of Artificial Intelligence Chatbots in the Diagnosis and Management of Meniscal Tears.
  • May 14, 2025
  • Cureus
  • Jason S Defrancisis + 5 more

Artificial intelligence (AI) chatbots have emerged as readily accessible tools for providing medical information to the public. However, the accuracy of AI chatbot responses, particularly in specialized medical fields such as orthopaedic surgery, remains largely understudied. This study aims to evaluate the accuracy of responses from two prominent free AI chatbots when posed with frequent questions about meniscus tears, a common orthopaedic injury. The two AI chatbots assessed in this study were ChatGPT-4o and Gemini 2.0 Flash. The analysis focused on the number of statements provided by each chatbot and the percentage of verifiable statements based on UpToDate alone, as well as UpToDate combined with peer-reviewed articles as of March 2025. The results showed no statistically significant difference in the average number of statements generated per question between the two AI chatbots. ChatGPT-4o provided an average of 18.25 statements per question, while Gemini 2.0 Flash generated 19.50 statements per question (p>0.05). Similarly, there was no significant difference in the percentage of verifiable statements provided by each AI chatbot. ChatGPT-4o achieved 58.22% verifiable statements compared to Gemini 2.0 Flash's 58.97% when using UpToDate as the sole verification source, and 83.56% versus 84.62%, respectively, when incorporating both UpToDate and peer-reviewed articles as verification sources (p>0.05). However, a statistically significant difference in the percentage of verifiable statements was observed based on the verification source used. UpToDate alone resulted in 58.61% of verifiable statements, while combining UpToDate and peer-reviewed articles increased this percentage to 84.11% (p<0.0001). Overall, the results of this study suggest that there are minimal differences between free AI chatbots in providing orthopaedic medical information. 
The results also emphasize the importance of utilizing broader verification sources to enhance the accuracy of AI-generated statements. The study indicates that AI chatbots have clinical limitations in their accuracy and understanding of specific orthopaedic conditions. The authors suggest that although AI chatbots can contribute to orthopaedic care and patient education, they are not capable of replacing the clinical judgment or expertise of orthopaedic surgeons.

  • Front Matter
  • Cited by 1
  • 10.1016/j.ijoa.2025.104353
Effectiveness of artificial intelligence (AI) chatbots in providing labor epidural analgesia information: are we there yet?
  • May 1, 2025
  • International journal of obstetric anesthesia
  • Paige M Keasler + 2 more

  • Research Article
  • Cited by 13
  • 10.1177/15347346241236811
Appropriateness of Artificial Intelligence Chatbots in Diabetic Foot Ulcer Management.
  • Feb 28, 2024
  • The International Journal of Lower Extremity Wounds
  • Makoto Shiraishi + 4 more

Type 2 diabetes is a significant global health concern. It often causes diabetic foot ulcers (DFUs), which affect millions of people and increase amputation and mortality rates. Despite existing guidelines, the complexity of DFU treatment makes clinical decisions challenging. Large language models such as chat generative pretrained transformer (ChatGPT), which are adept at natural language processing, have emerged as valuable resources in the medical field. However, concerns about the accuracy and reliability of the information they provide remain. We aimed to assess the accuracy of various artificial intelligence (AI) chatbots, including ChatGPT, in providing information on DFUs based on established guidelines. Seven AI chatbots were asked clinical questions (CQs) based on the DFU guidelines. Their responses were analyzed for accuracy in terms of answers to CQs, grade of recommendation, level of evidence, and agreement with the reference, including verification of the authenticity of the references provided by the chatbots. The AI chatbots showed a mean accuracy of 91.2% in answers to CQs, with discrepancies noted in grade of recommendation and level of evidence. Claude-2 outperformed other chatbots in the number of verified references (99.6%), whereas ChatGPT had the lowest rate of reference authenticity (66.3%). This study highlights the potential of AI chatbots as tools for disseminating medical information and demonstrates their high degree of accuracy in answering CQs related to DFUs. However, the variability in the accuracy of these chatbots and problems like AI hallucinations necessitate cautious use and further optimization for medical applications. This study underscores the evolving role of AI in healthcare and the importance of refining these technologies for effective use in clinical decision-making and patient education.

  • Research Article
  • 10.1158/1538-7445.am2025-4909
Abstract 4909: Artificial intelligence (AI) chatbots and their responses to most searched Spanish cancer questions
  • Apr 21, 2025
  • Cancer Research
  • En Cheng + 6 more

Background: AI chatbots are predominantly trained on English content. They perform well in answering English cancer questions, but their performance in other languages (such as Spanish) is unknown. Spanish-speaking patients are also concerned that they must use the paywall versions to get better responses, which may exacerbate existing cancer disparities. Methods: We evaluated the responses of AI chatbots to the most searched Spanish cancer questions. Using Google Trends (1/1/2020-1/1/2024), we identified the top 5 most searched Spanish cancer questions related to the top 3 common cancers in US Hispanics/Latinos. We selected 6 popular AI chatbots (free and paywall versions of ChatGPT, Claude, and Gemini) and then generated 90 Spanish responses. Board-certified oncologists speaking native Spanish assessed quality using the DISCERN instrument (score from 1 [low quality] to 5 [high quality]), actionability using the Patient Education Materials Assessment Tool (score from 0 [no clear action suggestions] to 100% [clear action suggestions]), and readability using the Fernández Huerta reading grade level (score from 1 [1st grade] to 13 [college]). Results: The quality of overall AI chatbot responses was moderate (mean [95% CI]: 3.5 [3.4-3.6]). The actionability was low (mean [95% CI]: 35.6% [30.8%-40.3%]), and the readability was high-school level (mean [95% CI]: 9.2 [8.8-9.6] grade). The performance of quality, actionability, and readability did not differ by free and paywall versions (P > 0.05). Conclusions: AI chatbots provided moderately accurate information for the most searched Spanish cancer-related questions. The responses were not readily actionable and were written at the high-school level, which is not concordant with the American Medical Association's recommendation (6th grade or lower). The performance did not improve with the paywall versions.
Relevance: To reduce cancer disparities in health literacy, AI chatbots need improvement in responding to Spanish cancer questions. Citation Format: En Cheng, Jesus D. Anampa, Carolina Bernabe-Ramirez, Juan Lin, Xiaonan Xue, Alyson B. Moadel-Robblee, Edward Chu. Artificial intelligence (AI) chatbots and their responses to most searched Spanish cancer questions [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 4909.

  • Abstract
  • 10.1177/2473011424s00124
Acute Achilles Tendon Ruptures: How Well Can Artificial Intelligence Chatbots Answer Patient Inquiries?
  • Oct 1, 2024
  • Foot & Ankle Orthopaedics
  • Wojciech Dzieza + 5 more

Category: Sports; Trauma. Introduction/Purpose: Artificial intelligence (AI) chatbots have recently gained popularity as a source of information that can be easily accessed by patients given their human-like responses to prompts and questions. Within orthopaedics, the treatment of acute Achilles tendon ruptures is not uniform due to varying surgical repair techniques, postoperative protocols, and nonoperative treatment options dependent on surgeon preference and patient factors. Given that patients are increasingly turning toward AI for questions about medical diagnoses and treatment options, our study looked to compare the adequacy of AI chatbot responses to frequently asked questions regarding acute Achilles tendon ruptures. Methods: Three popular AI platforms (ChatGPT, Google Gemini, and Microsoft Bing AI) were prompted for a concise response to ten commonly asked questions regarding Achilles tendon rupture management (Table 1). Four board-certified subspecialty-trained orthopaedic surgeons (two in foot and ankle, two in sports medicine) were asked to assess the value of the AI response using a four-point scale (1 – satisfactory; 2 – satisfactory requiring minimal clarification; 3 – satisfactory requiring substantial clarification; 4 – unsatisfactory). A Kruskal-Wallis test was used to compare the responses between the three AI platforms using the scores assigned by the surgeons. Results: All three AI chatbots provided comparable answers to 7 of 10 questions (70%). Of all the responses (30 total), only two (6.7%) had a mean rating of 3 or higher. Significant differences were noted between the AI systems for questions 4 [H(2) = 7.258, p = .027], 7 [H(2) = 6.308, p = .043], and 10 [H(2) = 6.796, p = .033]. Post hoc analyses revealed Bing AI had significantly worse scores as compared to ChatGPT for all three of these questions. Conclusion: AI chatbots can appropriately answer concise prompts about the diagnosis and management of acute Achilles tendon ruptures often sought out by patients prior to or after evaluation by an orthopaedic surgeon. The responses provided by the three AI chatbots analyzed in our study were uniform and satisfactory, with only one of the platforms scoring worse on three of the ten questions. As AI chatbots advance, they will become a valuable tool for patient education in orthopaedics. Future studies will be needed to assess performance as new AI chatbots develop and large language models continue to evolve. Table 1: List of 10 selected frequently asked questions regarding acute Achilles tendon ruptures.
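The Kruskal-Wallis comparison used in the Achilles tendon study above can be sketched in plain Python. This is an illustrative implementation, not the authors' code: ties receive average ranks, no tie correction is applied, and the p-value uses the chi-square approximation with k − 1 degrees of freedom (closed form shown for even degrees of freedom, which covers the three-platform case). Scores passed in are hypothetical.

```python
import math

def kruskal_wallis(groups):
    """Kruskal-Wallis H test for k independent samples.
    Average ranks for ties; no tie correction; chi-square
    approximation with k - 1 degrees of freedom for the p-value."""
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0  # average of 1-based ranks i+1..j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    rank_sums = [0.0] * len(groups)
    for (_, gi), r in zip(data, ranks):
        rank_sums[gi] += r
    h = 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3.0 * (n + 1)
    df = len(groups) - 1
    if df % 2 == 0:
        # Closed-form chi-square survival function, valid for even df
        term, total = 1.0, 1.0
        for m in range(1, df // 2):
            term *= (h / 2.0) / m
            total += term
        p = math.exp(-h / 2.0) * total
    else:
        p = float("nan")  # odd df would need the incomplete gamma function
    return h, p
```

For three platforms' ratings on one question, `kruskal_wallis([scores_gpt, scores_gemini, scores_bing])` returns H (the statistic reported as H(2) above) and its approximate p-value.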

  • Research Article
  • 10.3991/ijep.v15i5.56681
The Role of AI Chatbots in Engineering Education: Experimental Findings and Implementation Strategies
  • Jul 24, 2025
  • International Journal of Engineering Pedagogy (iJEP)
  • Raivo Sell + 3 more

In the field of education, the recent revolution in the large language model (LLM) space has enabled a whole host of interesting applications, such as content generation, support, and even personalized learning. While there are many ad-hoc experiments in flight, scientific studies on the effectiveness of these techniques have been limited. In order to increase the scientific rigor and potential for experimental reproducibility, the Tallinn University of Technology (TalTech) team deployed an artificial intelligence (AI) chatbot within the context of a traditional mainstream mechanics physics course and instrumented the class to facilitate a scientific study on utility. The AI chatbot focused on course support and tutoring in the Estonian language, and the scientific design-for-experiment focused on impact for students, instructors, and course designers. The study revealed measurable gains in instructor productivity and student access. The study also demonstrated the expected need for additional due diligence required to manage AI hallucinations. Perhaps most interestingly, the study revealed the unexpected benefits of cataloguing student chat interactions as a rich data source for the development of instructional materials and future course design. In fact, LLMs were also very useful to evaluate these AI chatbot conversations. Overall, this scientific study provides insights for the educational community into the leverage of using AI chatbots for instruction and in dramatically increasing access by enabling the use of a local language.

  • Research Article
  • Cited by 13
  • 10.1007/s10734-024-01288-w
Generative AI chatbots in higher education: a review of an emerging research area
  • Aug 24, 2024
  • Higher Education
  • Cormac Mcgrath + 2 more

Artificial intelligence (AI) chatbots trained on large language models are an example of generative AI which brings promises and threats to the higher education sector. In this study, we examine the emerging research area of AI chatbots in higher education (HE), focusing specifically on empirical studies conducted since the release of ChatGPT. Our review includes 23 research articles published between December 2022 and December 2023 exploring the use of AI chatbots in HE settings. We take a three-pronged approach to the empirical data. We first examine the state of the emerging field of AI chatbots in HE. Second, we identify the theories of learning used in the empirical studies on AI chatbots in HE. Third, we scrutinise the discourses of AI in HE framing the latest empirical work on AI chatbots. Our findings contribute to a better understanding of the eclectic state of the nascent research area of AI chatbots in HE, the lack of common conceptual groundings about human learning, and the presence of both dystopian and utopian discourses about the future role of AI chatbots in HE.

  • Research Article
  • 10.1108/jabs-12-2024-0664
Mapping the evolution of artificial intelligence (AI) chatbot in marketing: a bibliometric analysis
  • Oct 7, 2025
  • Journal of Asia Business Studies
  • Santosh Kumar + 3 more

Purpose Many companies invest in artificial intelligence (AI) chatbots to create new-age interactive platforms for consumers to achieve business goals. The research on AI chatbots from a marketing perspective is scant and scattered across the sectors. This paper aims to analyse extant research on AI chatbots in the marketing context, providing insights on leading work, journals, institutions, authors, trends and future research directions. Design/methodology/approach This study used the Scopus database to identify 242 articles published between 1996 and 2023 on AI chatbots in the area of business management and decision sciences. This bibliometric analysis used VOS viewer software to analyse the publication and citation structure, co-authorship, collaboration network of institutions and countries, keyword co-occurrence and bibliographic coupling. Findings The study provides valuable insights from the most cited articles, shedding light on their contribution to AI chatbot research in the marketing area. It also highlighted the publication trends, notable authors, journals and bibliographic analyses to identify key trends in AI chatbot-oriented marketing. The result reveals that consumer-oriented chatbot research is presently focused on understanding consumer perception of chatbots. Consumer chatbot experience and engagement are future research areas for AI chatbots in the marketing domain. The bibliometric analysis unveils that research on AI chatbot role in marketing is currently in nascent stages and there is limited intellectual exchange to understand the consumer intention toward chatbot use. Research limitations/implications This study not only provides a comprehensive overview of AI chatbot research in marketing during the past 27 years but also suggests future opportunities for researchers to work on AI chatbots in a marketing context. 
To further enhance the comprehensiveness of data collection, it is recommended to include another source like the Web of Science, which is among the largest research databases. Originality/value The research contributes significantly to the study of the extant research on AI chatbots in marketing from the Scopus database for the period from 1996 to 2023. This is probably the most comprehensive bibliometric analysis conducted to understand the status of research on AI chatbots and identify trends and future research directions. This research helps in coordinating intellectual networks among institutions, authors and countries.

  • Front Matter
  • Cited by 44
  • 10.7759/cureus.40922
Artificial Intelligence (AI) Chatbots in Medicine: A Supplement, Not a Substitute
  • Jun 25, 2023
  • Cureus
  • Ibraheem Altamimi + 4 more

This editorial discusses the role of artificial intelligence (AI) chatbots in the healthcare sector, emphasizing their potential as supplements rather than substitutes for medical professionals. While AI chatbots have demonstrated significant potential in managing routine tasks, processing vast amounts of data, and aiding in patient education, they still lack the empathy, intuition, and experience intrinsic to human healthcare providers. Furthermore, the deployment of AI in medicine brings forth ethical and legal considerations that require robust regulatory measures. As we move towards the future, the editorial underscores the importance of a collaborative model, wherein AI chatbots and medical professionals work together to optimize patient outcomes. Despite the potential for AI advancements, the likelihood of chatbots completely replacing medical professionals remains low, as the complexity of healthcare necessitates human involvement. The ultimate aim should be to use technology like AI chatbots to enhance patient care and outcomes, not to replace the irreplaceable human elements of healthcare.

  • Research Article
  • Cited by 128
  • 10.1001/jamaoncol.2023.2947
Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer
  • Aug 24, 2023
  • JAMA oncology
  • Alexander Pan + 4 more

Consumers are increasingly using artificial intelligence (AI) chatbots as a source of information. However, the quality of the cancer information generated by these chatbots has not yet been evaluated using validated instruments. To characterize the quality of information and presence of misinformation about skin, lung, breast, colorectal, and prostate cancers generated by 4 AI chatbots. This cross-sectional study assessed AI chatbots' text responses to the 5 most commonly searched queries related to the 5 most common cancers using validated instruments. Search data were extracted from the publicly available Google Trends platform and identical prompts were used to generate responses from 4 AI chatbots: ChatGPT version 3.5 (OpenAI), Perplexity (Perplexity.AI), Chatsonic (Writesonic), and Bing AI (Microsoft). Google Trends' top 5 search queries related to skin, lung, breast, colorectal, and prostate cancer from January 1, 2021, to January 1, 2023, were input into 4 AI chatbots. The primary outcomes were the quality of consumer health information based on the validated DISCERN instrument (scores from 1 [low] to 5 [high] for quality of information) and the understandability and actionability of this information based on the understandability and actionability domains of the Patient Education Materials Assessment Tool (PEMAT) (scores of 0%-100%, with higher scores indicating a higher level of understandability and actionability). Secondary outcomes included misinformation scored using a 5-item Likert scale (scores from 1 [no misinformation] to 5 [high misinformation]) and readability assessed using the Flesch-Kincaid Grade Level readability score. The analysis included 100 responses from 4 chatbots about the 5 most common search queries for skin, lung, breast, colorectal, and prostate cancer. The quality of text responses generated by the 4 AI chatbots was good (median [range] DISCERN score, 5 [2-5]) and no misinformation was identified. 
Understandability was moderate (median [range] PEMAT Understandability score, 66.7% [33.3%-90.1%]), and actionability was poor (median [range] PEMAT Actionability score, 20.0% [0%-40.0%]). The responses were written at the college level based on the Flesch-Kincaid Grade Level score. Findings of this cross-sectional study suggest that AI chatbots generally produce accurate information for the top cancer-related search queries, but the responses are not readily actionable and are written at a college reading level. These limitations suggest that AI chatbots should be used supplementarily and not as a primary source for medical information.
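The Flesch-Kincaid Grade Level used in the study above is a closed formula over word, sentence, and syllable counts: 0.39 × (words per sentence) + 11.8 × (syllables per word) - 15.59. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic of my own, so scores will differ somewhat from standard readability tools.

```python
import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    Syllables are estimated by counting vowel groups per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1  # crude silent-'e' adjustment
        return max(n, 1)

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words) - 15.59)
```

A score around 6 corresponds to sixth-grade text (the AMA recommendation mentioned in these studies); the college-level responses reported above would score roughly 13 or higher.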

  • Research Article
  • Cited by 6
  • 10.3390/informatics11020020
Artificial Intelligence Chatbots in Chemical Information Seeking: Narrative Educational Insights via a SWOT Analysis
  • Apr 18, 2024
  • Informatics
  • Johannes Pernaa + 5 more

Artificial intelligence (AI) chatbots are next-word predictors built on large language models (LLMs). There is great interest within the educational field for this new technology because AI chatbots can be used to generate information. In this theoretical article, we provide educational insights into the possibilities and challenges of using AI chatbots. These insights were produced by designing chemical information-seeking activities for chemistry teacher education which were analyzed via the SWOT approach. The analysis revealed several internal and external possibilities and challenges. The key insight is that AI chatbots will change the way learners interact with information. For example, they enable the building of personal learning environments with ubiquitous access to information and AI tutors. Their ability to support chemistry learning is impressive. However, the processing of chemical information reveals the limitations of current AI chatbots not being able to process multimodal chemical information. There are also ethical issues to address. Despite the benefits, wider educational adoption will take time. The diffusion can be supported by integrating LLMs into curricula, relying on open-source solutions, and training teachers with modern information literacy skills. This research presents theory-grounded examples of how to support the development of modern information literacy skills in the context of chemistry teacher education.

  • Research Article
  • Cited by 22
  • 10.1111/eje.13009
Artificial intelligence chatbots and large language models in dental education: Worldwide survey of educators.
  • Apr 8, 2024
  • European journal of dental education : official journal of the Association for Dental Education in Europe
  • Sergio E Uribe + 13 more

Interest is growing in the potential of artificial intelligence (AI) chatbots and large language models like OpenAI's ChatGPT and Google's Gemini, particularly in dental education. To explore dental educators' perceptions of AI chatbots and large language models, specifically their potential benefits and challenges for dental education. A global cross-sectional survey was conducted in May-June 2023 using a 31-item online-questionnaire to assess dental educators' perceptions of AI chatbots like ChatGPT and their influence on dental education. Dental educators, representing diverse backgrounds, were asked about their use of AI, its perceived impact, barriers to using chatbots, and the future role of AI in this field. 428 dental educators (survey views = 1516; response rate = 28%) with a median [25/75th percentiles] age of 45 [37, 56] and 16 [8, 25] years of experience participated, with the majority from the Americas (54%), followed by Europe (26%) and Asia (10%). Thirty-one percent of respondents already use AI tools, with 64% recognising their potential in dental education. Perception of AI's potential impact on dental education varied by region, with Africa (4[4-5]), Asia (4[4-5]), and the Americas (4[3-5]) perceiving more potential than Europe (3[3-4]). Educators stated that AI chatbots could enhance knowledge acquisition (74.3%), research (68.5%), and clinical decision-making (63.6%) but expressed concern about AI's potential to reduce human interaction (53.9%). Dental educators' chief concerns centred around the absence of clear guidelines and training for using AI chatbots. A positive yet cautious view towards AI chatbot integration in dental curricula is prevalent, underscoring the need for clear implementation guidelines.

  • Research Article
  • Cited by 202
  • 10.1111/bjet.13334
Do AI chatbots improve students learning outcomes? Evidence from a meta‐analysis
  • May 3, 2023
  • British Journal of Educational Technology
  • Rong Wu + 1 more

Artificial intelligence (AI) chatbots are gaining increasing popularity in education, and many empirical studies have been devoted to exploring their effects on students' learning outcomes. The proliferation of experimental studies has highlighted the need to summarize and synthesize the inconsistent findings about the effects of AI chatbots on students' learning outcomes, yet few reviews have offered a meta-analysis of these effects. The present study performed a meta-analysis of 24 randomized studies utilizing Stata software (version 14). The main goal of the current study was to meta-analytically examine the effects of AI chatbots on students' learning outcomes and the moderating effects of educational levels and intervention duration. The results indicated that AI chatbots had a large effect on students' learning outcomes. Moreover, AI chatbots had a greater effect on students in higher education compared to those in primary and secondary education. In addition, short interventions were found to have a stronger effect on students' learning outcomes than long interventions, which could be explained by the argument that the novelty effect of AI chatbots improves learning outcomes in short interventions but wears off in long interventions. Future designers and educators should attempt to increase students' learning outcomes by equipping AI chatbots with human-like avatars, gamification elements, and emotional intelligence. Practitioner notes. What is already known about this topic: In recent years, artificial intelligence (AI) chatbots have been gaining increasing popularity in education. Studies undertaken so far have provided conflicting evidence concerning the effects of AI chatbots on students' learning outcomes. There has remained a paucity of meta-analyses synthesizing the contradictory findings. What this paper adds: This study, through meta-analysis, synthesized recent findings about the effects of AI chatbots on students' learning outcomes, finding that AI chatbots could have a large effect and that the effects were moderated by educational levels and intervention duration. Implications for practice and/or policy: AI chatbot designers could make AI chatbots better by equipping them with human-like avatars, gamification elements, and emotional intelligence. Practitioners and/or teachers should draw attention to the positive and negative effects of AI chatbots on students. Considering the importance of ChatGPT, more research is required to develop a better understanding of its effects in education, and more research is needed to examine the mechanisms underlying the effects of AI chatbots on students' learning outcomes.

  • Research Article
  • Cite Count Icon 36
  • 10.1108/jsm-04-2022-0126
Business types matter: new insights into the effects of anthropomorphic cues in AI chatbots
  • Jun 7, 2023
  • Journal of Services Marketing
  • Kibum Youn + 1 more

Purpose: This paper aims to examine the relationships between anthropomorphic cues (i.e. degrees of the humanized profile picture and naming) in artificial intelligence (AI) chatbots and business types (utilitarian-centered business vs hedonic-centered business) on consumers' attitudes toward the AI chatbot and intentions to use the AI chatbot app and to accept the AI chatbot's recommendation. Design/methodology/approach: An online experiment with a 2 (humanized profile pictures: low [semihumanoid] vs high [full-humanoid]) × 2 (naming: Mary vs virtual assistant) × 2 (business types: utilitarian-centered business [bank] vs hedonic-centered business [café]) between-subjects design (N = 520 MTurk samples) was used. Findings: The results of this study show significant main effects of anthropomorphic cues (i.e. degrees of profile picture and naming) in AI chatbots and three-way interactions among humanized profile pictures, naming, and business types on consumers' attitudes toward the AI chatbot, intentions to use the AI chatbot app, and intentions to accept the AI chatbot's recommendation. This indicates that a high level of anthropomorphism generates more positive attitudes toward the AI chatbot and stronger intentions to use the AI chatbot app and to accept its recommendation in the hedonic-centered business condition. Moreover, parasocial interaction mediates this relationship. Originality/value: This study is an original endeavor to examine the moderating role of business types on the effect of anthropomorphism on consumers' responses, whereas existing literature has overweighted the value of anthropomorphism in AI chatbots without considering the variation across businesses.

  • Research Article
  • Cite Count Icon 1
  • 10.3390/antibiotics14010060
The Role of ChatGPT and AI Chatbots in Optimizing Antibiotic Therapy: A Comprehensive Narrative Review.
  • Jan 9, 2025
  • Antibiotics (Basel, Switzerland)
  • Ninel Iacobus Antonie + 4 more

Background/Objectives: Antimicrobial resistance represents a growing global health crisis, demanding innovative approaches to improve antibiotic stewardship. Artificial intelligence (AI) chatbots based on large language models have shown potential as tools to support clinicians, especially non-specialists, in optimizing antibiotic therapy. This review aims to synthesize current evidence on the capabilities, limitations, and future directions for AI chatbots in enhancing antibiotic selection and patient outcomes. Methods: A narrative review was conducted by analyzing studies published in the last five years across databases such as PubMed, SCOPUS, Web of Science, and Google Scholar. The review focused on research discussing AI-based chatbots, antibiotic stewardship, and clinical decision support systems. Studies were evaluated for methodological soundness and significance, and the findings were synthesized narratively. Results: Current evidence highlights the ability of AI chatbots to assist in guideline-based antibiotic recommendations, improve medical education, and enhance clinical decision-making. Promising results include satisfactory accuracy in preliminary diagnostic and prescriptive tasks. However, challenges such as inconsistent handling of clinical nuances, susceptibility to unsafe advice, algorithmic biases, data privacy concerns, and limited clinical validation underscore the importance of human oversight and refinement. Conclusions: AI chatbots have the potential to complement antibiotic stewardship efforts by promoting appropriate antibiotic use and improving patient outcomes. Realizing this potential will require rigorous clinical trials, interdisciplinary collaboration, regulatory clarity, and tailored algorithmic improvements to ensure their safe and effective integration into clinical practice.

More from: American journal of nuclear medicine and molecular imaging
  • Research Article
  • 10.62347/gldl6616
Maxillary sinus inflammation assessment using FDG-PET/CT in head and neck cancer patients with photon, proton, and combined radiation therapy.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Om H Gandhi

  • Research Article
  • 10.62347/jxly1661
Research process of PET tracers for neuroendocrine tumors diagnosis.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Xiangyuan Bao + 3 more

  • Front Matter
  • 10.62347/ergq2963
Novel tracers and emerging targets for positron emission tomography in Alzheimer's disease and related dementias.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Taoqian Zhao + 1 more

  • Research Article
  • 10.62347/dcgc3250
The reproducibility of [68Ga]Ga-FAPI-04 PET uptake parameters at 15 min and 60 min post-injection.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Hongyu Meng

  • Research Article
  • 10.62347/xdlp4069
Effect of truncating blood sampling in measuring glomerular filtration rate.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Kenneth J Nichols + 4 more

  • Front Matter
  • 10.62347/kjlm2547
Streamlining first-in-human PET radiopharmaceutical development: FDA's evolving stance on preclinical dosimetry.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Taoqian Zhao + 1 more

  • Research Article
  • 10.62347/ghka7738
Trop2-targeted immunoPET ligands.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Steven H Liang

  • Front Matter
  • 10.62347/vhyy2134
CAIX-targeted PET imaging agents based on acetazolamide small molecule for clear cell renal cell carcinoma.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Chongjiao Li + 2 more

  • Research Article
  • 10.62347/axtl7711
Automatic synthesis of a phosphodiesterase 4B (PDE4B) radioligand and PET imaging in depression rodent models.
  • Jan 1, 2025
  • American journal of nuclear medicine and molecular imaging
  • Chenchen Dong
