Republication of: Assessing the diagnostic capacity of artificial intelligence chatbots for dysphonia types: model development and validation

Similar Papers
  • Research Article
  • 10.1016/j.anorl.2025.01.001
Assessing the diagnostic capacity of artificial intelligence chatbots for dysphonia types: Model development and validation.
  • Jul 1, 2025
  • European annals of otorhinolaryngology, head and neck diseases
  • S Saeedi + 1 more

  • Research Article
  • 10.71458/mgsvgw39
Unpacking the Adoption of Artificial Intelligence chatbots by students in tertiary institutions in Mashonaland Central, Zimbabwe
  • Nov 6, 2025
  • Oikos: The Zimbabwe Ezekiel Guti University bulletin of Ecology, Science Technology, Agriculture, Food Systems Review and Advancement
  • Takudzwa Chidembo + 2 more

This research article unpacks the academic community's perceptions of the adoption of artificial intelligence (AI) chatbots by tertiary students in Zimbabwean universities. The article seeks to understand the usage of AI chatbots in education and the opportunities, challenges, concerns and prospects of using them in educational settings. The findings revolve around the perceptions and scepticism of students, lecturers and librarians regarding the adoption of AI chatbots in education and their role in developing higher-order cognitive skills. The main objectives were to identify the AI chatbots commonly used by tertiary students, to explore the opportunities that adopting AI chatbots offers students, and to expose the pitfalls associated with students' usage of AI. Participants were drawn from tertiary students, lecturers and university library staff. The study employed qualitative methodologies, including in-depth interviews, an observational checklist and focus groups. The findings suggest that AI chatbots are both a curse and a blessing for tertiary students. The study reveals that AI chatbots enhance the learning experience, enable students to overcome skill gaps, offer insights on assignment writing and aid in exam preparation, and that they foster the development of higher-order cognitive skills by augmenting traditional lectures, test preparation and personalisation. However, pitfalls include plagiarism, outdated or shallow information, indolence among students, and the financial constraints associated with AI chatbots. The study recommends that universities invest in workshops to train staff and students in responsible ways of adopting and using AI, so as to reduce resistance to the technology. Universities are also encouraged to develop referencing systems that allow students to acknowledge AI chatbots as sources. Tertiary students are likewise encouraged to combine AI with human capacity, desisting from relying solely on AI chatbots.

  • Research Article
  • 10.1158/1538-7445.am2025-4909
Abstract 4909: Artificial intelligence (AI) chatbots and their responses to most searched Spanish cancer questions
  • Apr 21, 2025
  • Cancer Research
  • En Cheng + 6 more

Background: AI chatbots are predominantly trained on English content. They perform well in answering English cancer questions, but their performance in other languages (such as Spanish) is unknown. Spanish-speaking patients are also concerned that they must use the paywalled versions to get better responses, which may exacerbate existing cancer disparities. Methods: We evaluated the responses of AI chatbots to the most searched Spanish cancer questions. Using Google Trends (1/1/2020-1/1/2024), we identified the top 5 most searched Spanish cancer questions related to the top 3 most common cancers in US Hispanics/Latinos. We selected 6 popular AI chatbots (the free and paywalled versions of ChatGPT, Claude, and Gemini) and generated 90 Spanish responses. Board-certified oncologists who are native Spanish speakers assessed quality using the DISCERN instrument (score from 1 [low quality] to 5 [high quality]), actionability using the Patient Education Materials Assessment Tool (score from 0% [no clear action suggestions] to 100% [clear action suggestions]), and readability using the Fernández Huerta reading grade level (score from 1 [1st grade] to 13 [college]). Results: The overall quality of AI chatbot responses was moderate (mean [95% CI]: 3.5 [3.4-3.6]), actionability was low (mean [95% CI]: 35.6% [30.8%-40.3%]), and readability was at high-school level (mean [95% CI]: grade 9.2 [8.8-9.6]). Quality, actionability, and readability did not differ between the free and paywalled versions (P > 0.05). Conclusions: AI chatbots provided moderately accurate information for the most searched Spanish cancer-related questions. The responses were not readily actionable and were written at the high-school level, which is not concordant with the American Medical Association's recommendation (6th grade or lower). Performance did not improve with the paywalled versions. Relevance: To reduce cancer disparities in health literacy, AI chatbots need improvement in responding to Spanish cancer questions.
Citation Format: En Cheng, Jesus D. Anampa, Carolina Bernabe-Ramirez, Juan Lin, Xiaonan Xue, Alyson B. Moadel-Robblee, Edward Chu. Artificial intelligence (AI) chatbots and their responses to most searched Spanish cancer questions [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 4909.

  • Research Article
  • Cited by: 71
  • 10.1016/j.eururo.2023.07.004
How Well Do Artificial Intelligence Chatbots Respond to the Top Search Queries About Urological Malignancies?
  • Aug 10, 2023
  • European urology
  • David Musheyev + 3 more

  • Research Article
  • Cited by: 202
  • 10.1111/bjet.13334
Do AI chatbots improve students learning outcomes? Evidence from a meta‐analysis
  • May 3, 2023
  • British Journal of Educational Technology
  • Rong Wu + 1 more

Artificial intelligence (AI) chatbots are gaining popularity in education, and many empirical studies have explored their effects on students' learning outcomes. The proliferation of experimental studies has highlighted the need to synthesize these inconsistent findings, yet few reviews have meta-analysed the effects of AI chatbots on students' learning outcomes. The present study performed a meta-analysis of 24 randomized studies using Stata software (version 14). Its main goal was to meta-analytically examine the effects of AI chatbots on students' learning outcomes and the moderating effects of educational level and intervention duration. The results indicated that AI chatbots had a large effect on students' learning outcomes. Moreover, AI chatbots had a greater effect on students in higher education than on those in primary and secondary education. In addition, short interventions had a stronger effect on learning outcomes than long interventions; the novelty effect of AI chatbots may improve learning outcomes in short interventions but wear off in longer ones. Future designers and educators should attempt to increase students' learning outcomes by equipping AI chatbots with human-like avatars, gamification elements and emotional intelligence.
Practitioner notes
What is already known about this topic: In recent years, AI chatbots have been gaining popularity in education. Studies undertaken so far have provided conflicting evidence concerning their effects on students' learning outcomes, and there has been a paucity of meta-analyses synthesizing these contradictory findings.
What this paper adds: Through meta-analysis, this study synthesized recent findings about the effects of AI chatbots on students' learning outcomes. It found that AI chatbots could have a large effect on learning outcomes and that these effects were moderated by educational level and intervention duration.
Implications for practice and/or policy: AI chatbot designers could improve chatbots by equipping them with human-like avatars, gamification elements and emotional intelligence. Practitioners and teachers should pay attention to both the positive and negative effects of AI chatbots on students. Considering the importance of ChatGPT, more research is required to understand its effects in education and the mechanisms underlying the effects of AI chatbots on learning outcomes.

  • Research Article
  • 10.1108/jabs-12-2024-0664
Mapping the evolution of artificial intelligence (AI) chatbot in marketing: a bibliometric analysis
  • Oct 7, 2025
  • Journal of Asia Business Studies
  • Santosh Kumar + 3 more

Purpose: Many companies invest in artificial intelligence (AI) chatbots to create new-age interactive platforms for consumers and achieve business goals. Research on AI chatbots from a marketing perspective is scant and scattered across sectors. This paper analyses extant research on AI chatbots in the marketing context, providing insights on leading work, journals, institutions, authors, trends and future research directions.
Design/methodology/approach: This study used the Scopus database to identify 242 articles published between 1996 and 2023 on AI chatbots in business management and decision sciences. The bibliometric analysis used VOSviewer software to analyse the publication and citation structure, co-authorship, collaboration networks of institutions and countries, keyword co-occurrence and bibliographic coupling.
Findings: The study provides valuable insights from the most cited articles, shedding light on their contribution to AI chatbot research in marketing. It also highlights publication trends, notable authors, journals and bibliographic analyses to identify key trends in AI chatbot-oriented marketing. The results reveal that consumer-oriented chatbot research presently focuses on understanding consumer perception of chatbots; consumer chatbot experience and engagement are future research areas for AI chatbots in the marketing domain. The bibliometric analysis shows that research on the role of AI chatbots in marketing is still at a nascent stage, with limited intellectual exchange on consumer intention toward chatbot use.
Research limitations/implications: This study provides a comprehensive overview of AI chatbot research in marketing over the past 27 years and suggests future opportunities for researchers working on AI chatbots in a marketing context. To further enhance the comprehensiveness of data collection, it is recommended to include another source such as the Web of Science, which is among the largest research databases.
Originality/value: The research contributes significantly to the study of extant research on AI chatbots in marketing, drawing on the Scopus database for the period from 1996 to 2023. This is probably the most comprehensive bibliometric analysis conducted to understand the status of AI chatbot research and identify trends and future research directions. It also helps coordinate intellectual networks among institutions, authors and countries.

  • Front Matter
  • Cited by: 1
  • 10.1016/j.ijoa.2025.104353
Effectiveness of artificial intelligence (AI) chatbots in providing labor epidural analgesia information: are we there yet?
  • May 1, 2025
  • International journal of obstetric anesthesia
  • Paige M Keasler + 2 more

  • Research Article
  • 10.22730/jmls.2024.21.3.92
Effects of the use of a conversational artificial intelligence chatbot on medical students’ patient-centered communication skill development in a metaverse environment
  • Sep 30, 2024
  • Journal of Medicine and Life Science
  • Hyeonmi Hong + 1 more

This study investigated how the use of a conversational artificial intelligence (AI) chatbot improved medical students' patient-centered communication (PCC) skills and how it affected their motivation to learn with innovative interactive tools such as AI chatbots throughout their careers. The study adopted a one-group post-test-only design to investigate the impact of AI chatbot-based learning on medical students' PCC skills, their motivation to learn with AI chatbots, and their perception of the use of AI chatbots in their learning. After a series of classroom activities, including metaverse exploration, AI chatbot-based learning activities, and classroom discussions, 43 medical students completed three surveys measuring their motivation to learn with AI tools in medical education, their perception of AI chatbots in their learning, and their self-assessment of their PCC skills. Our findings revealed significant correlations among learning motivation, PCC scores, and perception variables. Notably, perception of AI chatbot-based learning and AI chatbot learning motivation showed a very strong positive correlation (r=0.72), indicating that motivated students were more likely to perceive chatbots as beneficial educational tools. Additionally, a moderate correlation between motivation and self-assessed PCC skills (r=0.54) indicated that students motivated to use AI chatbots tended to rate their PCC skills more favorably. Similarly, a positive relationship (r=0.68) between students' perceptions of chatbot usage and their self-assessed PCC skills suggested that enhancing students' perceptions of AI tools could lead to better educational outcomes.

  • Research Article
  • 10.64152/10125/73574
Effects of learner uptake following automatic corrective recast from Artificial Intelligence chatbots on the learning of English caused-motion construction
  • Jun 1, 2024
  • Language Learning & Technology
  • Rakhun Kim

This study investigated the instructional effects of learner uptake following automatic corrective recast from artificial intelligence (AI) chatbots on the learning of the English caused-motion construction. Sixty-nine novice-level EFL learners in a Korean high school were recruited. Results from the elicited writing tasks (EWT) revealed statistically significant gains in both immediate and delayed posttests for the production of the English caused-motion construction by experimental-group participants. The relationship between learner uptake from the AI chatbots' corrective recasts and learning of the construction was also analyzed: learners' successful repair following corrective recasts was positively correlated with learning gains in the two EWT posttests. The study concludes by highlighting the significance of noticeability in AI chatbots' corrective feedback for foreign language learning.

  • Abstract
  • 10.1177/2473011424s00124
Acute Achilles Tendon Ruptures: How Well Can Artificial Intelligence Chatbots Answer Patient Inquiries?
  • Oct 1, 2024
  • Foot & Ankle Orthopaedics
  • Wojciech Dzieza + 5 more

Category: Sports; Trauma
Introduction/Purpose: Artificial intelligence (AI) chatbots have recently gained popularity as a source of information that patients can easily access, given their human-like responses to prompts and questions. Within orthopaedics, the treatment of acute Achilles tendon ruptures is not uniform, owing to varying surgical repair techniques, postoperative protocols, and nonoperative treatment options that depend on surgeon preference and patient factors. Given that patients are increasingly turning to AI with questions about medical diagnoses and treatment options, our study compared the adequacy of AI chatbot responses to frequently asked questions regarding acute Achilles tendon ruptures.
Methods: Three popular AI platforms (ChatGPT, Google Gemini, and Microsoft Bing AI) were prompted for a concise response to ten commonly asked questions regarding Achilles tendon rupture management (Table 1). Four board-certified, subspecialty-trained orthopaedic surgeons (two in foot and ankle, two in sports medicine) assessed the value of each AI response on a four-point scale (1 – satisfactory; 2 – satisfactory requiring minimal clarification; 3 – satisfactory requiring substantial clarification; 4 – unsatisfactory). A Kruskal-Wallis test was used to compare the surgeons' scores across the three AI platforms.
Results: All three AI chatbots provided comparable answers to 7 of 10 questions (70%). Of all 30 responses, only two (6.7%) had a mean rating of 3 or higher. Significant differences were noted between the AI systems for questions 4 [H(2) = 7.258, p = .027], 7 [H(2) = 6.308, p = .043], and 10 [H(2) = 6.796, p = .033]. Post hoc analyses revealed that Bing AI had significantly worse scores than ChatGPT for all three of these questions.
Conclusion: AI chatbots can appropriately answer concise prompts about the diagnosis and management of acute Achilles tendon ruptures, which patients often seek out before or after evaluation by an orthopaedic surgeon. The responses provided by the three AI chatbots were uniform and satisfactory, with only one platform scoring worse on three of the ten questions. As AI chatbots advance, they will become a valuable tool for patient education in orthopaedics. Future studies will be needed to assess performance as new AI chatbots develop and large language models continue to evolve.
Table 1: List of 10 selected frequently asked questions regarding acute Achilles tendon ruptures
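The Kruskal-Wallis test reported here is a rank-based comparison of independent groups. As a rough illustrative sketch (not the authors' code), the H statistic can be computed from pooled ranks like this, assuming numeric scores and average ranks for ties:

```python
from itertools import chain

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic: H = 12/(N(N+1)) * sum(R_i^2/n_i) - 3(N+1),
    where R_i is the rank sum of group i (no tie-variance correction)."""
    pooled = sorted(chain.from_iterable(groups))
    n = len(pooled)
    # Assign each distinct value the average of the ranks it occupies (ties share a rank).
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return 12 / (n * (n + 1)) * sum(
        sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
```

With k groups, H is compared against a chi-squared distribution with k-1 degrees of freedom; in practice a library routine such as scipy.stats.kruskal would typically be used instead.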

  • Research Article
  • 10.1186/s12889-025-22933-8
Evaluation of artificial intelligence (AI) chatbots for providing sexual health information: a consensus study using real-world clinical queries
  • May 15, 2025
  • BMC Public Health
  • Phyu M Latt + 15 more

Introduction: Artificial intelligence (AI) chatbots could potentially provide information on sensitive topics, including sexual health, to the public. However, their performance compared with nurses and across different AI chatbots, particularly in sexual health, remains understudied. This study evaluated the performance of three AI chatbots, two prompt-tuned (Alice and Azure) and one standard chatbot (ChatGPT by OpenAI), in providing sexual health information on questions that experienced sexual health nurses could answer correctly.
Methods: We analysed 195 anonymised sexual health questions received by the Melbourne Sexual Health Centre phone line. A panel of experts, blinded to the source of each response, evaluated responses from the nurses and the three AI chatbots using a consensus-based approach. Performance was assessed on overall correctness and five specific measures: guidance, accuracy, safety, ease of access, and provision of necessary information. We conducted subgroup analyses for clinic-specific (e.g., opening hours) and general sexual health questions, and a sensitivity analysis excluding questions that Azure could not answer.
Results: Alice demonstrated the highest overall correctness (85.2%; 95% confidence interval (CI), 82.1-88.0%), followed by Azure (69.3%; 95% CI, 65.3-73.0%) and ChatGPT (64.8%; 95% CI, 60.7-68.7%). Prompt-tuned chatbots outperformed the base ChatGPT across all measures. All chatbots performed best on safety, with Azure achieving the highest safety score (97.9%; 95% CI, 96.4-98.9%), indicating the lowest risk of providing potentially harmful advice. In subgroup analysis, all chatbots performed better on general sexual health questions than on clinic-specific queries. Sensitivity analysis showed a narrower performance gap between Alice and Azure when questions Azure could not answer were excluded.
Conclusions: Prompt-tuned AI chatbots demonstrated superior performance in providing sexual health information compared with base ChatGPT, with notably high safety scores. However, all AI chatbots were susceptible to generating incorrect information. These findings suggest the potential of AI chatbots as adjuncts to human healthcare providers for sexual health information, while highlighting the need for continued refinement and human oversight. Future research should focus on larger-scale evaluations and real-world implementations.

  • Research Article
  • Cited by: 36
  • 10.1108/jsm-04-2022-0126
Business types matter: new insights into the effects of anthropomorphic cues in AI chatbots
  • Jun 7, 2023
  • Journal of Services Marketing
  • Kibum Youn + 1 more

Purpose: This paper examines how anthropomorphic cues (i.e., degrees of humanization in profile picture and naming) in artificial intelligence (AI) chatbots and business types (utilitarian-centered vs hedonic-centered) affect consumers' attitudes toward the AI chatbot and their intentions to use the AI chatbot app and to accept the AI chatbot's recommendation.
Design/methodology/approach: An online experiment with a 2 (humanized profile picture: low [semi-humanoid] vs high [full-humanoid]) × 2 (naming: Mary vs virtual assistant) × 2 (business type: utilitarian-centered [bank] vs hedonic-centered [café]) between-subjects design (N = 520 MTurk participants) was used.
Findings: The results show significant main effects of the anthropomorphic cues and three-way interactions among humanized profile picture, naming and business type on consumers' attitudes toward the AI chatbot and their intentions to use the AI chatbot app and to accept its recommendation. A high level of anthropomorphism generates more positive attitudes and intentions in the hedonic-centered business condition. Moreover, parasocial interaction plays a mediating role in this relationship.
Originality/value: This study is an original endeavor to examine the moderating role of business type in the effect of anthropomorphism on consumers' responses, whereas existing literature has emphasized the value of anthropomorphism in AI chatbots without considering variation across businesses.

  • Front Matter
  • Cited by: 44
  • 10.7759/cureus.40922
Artificial Intelligence (AI) Chatbots in Medicine: A Supplement, Not a Substitute
  • Jun 25, 2023
  • Cureus
  • Ibraheem Altamimi + 4 more

This editorial discusses the role of artificial intelligence (AI) chatbots in the healthcare sector, emphasizing their potential as supplements rather than substitutes for medical professionals. While AI chatbots have demonstrated significant potential in managing routine tasks, processing vast amounts of data, and aiding in patient education, they still lack the empathy, intuition, and experience intrinsic to human healthcare providers. Furthermore, the deployment of AI in medicine brings forth ethical and legal considerations that require robust regulatory measures. As we move towards the future, the editorial underscores the importance of a collaborative model, wherein AI chatbots and medical professionals work together to optimize patient outcomes. Despite the potential for AI advancements, the likelihood of chatbots completely replacing medical professionals remains low, as the complexity of healthcare necessitates human involvement. The ultimate aim should be to use technology like AI chatbots to enhance patient care and outcomes, not to replace the irreplaceable human elements of healthcare.

  • Research Article
  • 10.2196/70034
Evaluating the Usability of an HIV Prevention Artificial Intelligence Chatbot in Malaysia: National Observational Study
  • Jul 15, 2025
  • JMIR Human Factors
  • Zhao Ni + 4 more

Background: Malaysia, an upper-middle-income country in the Asia-Pacific region, has an HIV epidemic that has transitioned from needle sharing to sexual transmission, mainly among men who have sex with men (MSM), the population most vulnerable to HIV in Malaysia. In 2022, our team developed a web-based artificial intelligence (AI) chatbot and tested its feasibility and acceptability among MSM in Malaysia for promoting HIV testing. To enhance the chatbot's usability, we made it publicly accessible through the website MYHIV365 and tested it in an observational study.
Objective: This study aimed to test the usability of an AI chatbot in promoting HIV testing among MSM living in Malaysia.
Methods: This observational study was conducted from August 2023 to March 2024 among 334 MSM. Participants were recruited through community outreach and social-networking apps using flyers. Interactions between participants and the AI chatbot were documented and retrieved from the chatbot developer's platform. Data were analyzed following predefined metrics using R software (Posit Software, PBC).
Results: The AI chatbot interacted with 334 participants, assisting them in receiving free HIV self-testing kits, offering information on HIV, pre-exposure prophylaxis (PrEP), and mental health, and providing details of 220 MSM-friendly clinics, including their addresses, phone numbers, and operating hours. After the study, 393 human-chatbot interactions were documented on the chatbot developer's platform. Most participants (304/334, 91.0%) interacted with the AI chatbot once; 30 (9.0%) engaged two or more times at different intervals. Interaction time with the chatbot ranged from 1 to 31 minutes. The AI chatbot properly addressed most participants' questions (362/393, 92.1%) about HIV and PrEP. However, in 31 interactions, participants posed questions that were not programmed into the chatbot's algorithms, resulting in unanswered interactions.
Conclusions: The web-based AI chatbot demonstrated high usability in delivering HIV self-testing kits and providing clinical information on HIV testing, PrEP, and mental health services. To enhance its usability in community and clinical settings, the chatbot must offer personalized health information and precise interaction, powered by sophisticated machine-learning algorithms. In addition, establishing an effective connection between the AI chatbot and healthcare systems to eliminate stigma and discrimination toward MSM is crucial for future implementation of AI chatbots.

  • Research Article
  • Cited by: 128
  • 10.1001/jamaoncol.2023.2947
Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer
  • Aug 24, 2023
  • JAMA oncology
  • Alexander Pan + 4 more

Consumers are increasingly using artificial intelligence (AI) chatbots as a source of information. However, the quality of the cancer information generated by these chatbots has not yet been evaluated using validated instruments. To characterize the quality of information and presence of misinformation about skin, lung, breast, colorectal, and prostate cancers generated by 4 AI chatbots. This cross-sectional study assessed AI chatbots' text responses to the 5 most commonly searched queries related to the 5 most common cancers using validated instruments. Search data were extracted from the publicly available Google Trends platform and identical prompts were used to generate responses from 4 AI chatbots: ChatGPT version 3.5 (OpenAI), Perplexity (Perplexity.AI), Chatsonic (Writesonic), and Bing AI (Microsoft). Google Trends' top 5 search queries related to skin, lung, breast, colorectal, and prostate cancer from January 1, 2021, to January 1, 2023, were input into 4 AI chatbots. The primary outcomes were the quality of consumer health information based on the validated DISCERN instrument (scores from 1 [low] to 5 [high] for quality of information) and the understandability and actionability of this information based on the understandability and actionability domains of the Patient Education Materials Assessment Tool (PEMAT) (scores of 0%-100%, with higher scores indicating a higher level of understandability and actionability). Secondary outcomes included misinformation scored using a 5-item Likert scale (scores from 1 [no misinformation] to 5 [high misinformation]) and readability assessed using the Flesch-Kincaid Grade Level readability score. The analysis included 100 responses from 4 chatbots about the 5 most common search queries for skin, lung, breast, colorectal, and prostate cancer. The quality of text responses generated by the 4 AI chatbots was good (median [range] DISCERN score, 5 [2-5]) and no misinformation was identified. 
Understandability was moderate (median [range] PEMAT Understandability score, 66.7% [33.3%-90.1%]), and actionability was poor (median [range] PEMAT Actionability score, 20.0% [0%-40.0%]). The responses were written at the college level based on the Flesch-Kincaid Grade Level score. Findings of this cross-sectional study suggest that AI chatbots generally produce accurate information for the top cancer-related search queries, but the responses are not readily actionable and are written at a college reading level. These limitations suggest that AI chatbots should be used supplementarily and not as a primary source for medical information.
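The Flesch-Kincaid grade level used here is computed from average sentence length and syllables per word. A minimal sketch, assuming a naive vowel-group syllable counter (which only approximates English syllabification):

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    # Naive syllable estimate: count groups of consecutive vowels (incl. y).
    n_syll = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * n_words / sentences + 11.8 * n_syll / n_words - 15.59
```

Short, monosyllabic sentences score near or below grade 1, while long polysyllabic prose pushes the score toward college level; validated readability tools use dictionary-based syllable counts rather than this heuristic.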
