Negotiating Human–AI Complementarity in Geriatric and Palliative Care: A Qualitative Study of Healthcare Practitioners’ Perspectives in Northeast China

Abstract

Artificial intelligence (AI) is becoming increasingly significant in healthcare around the world, especially in China, where rapid population ageing coincides with rising expectations for quality of life and a shrinking care workforce. This study explores Chinese health practitioners’ perspectives on using AI assistants in integrated geriatric and palliative care. Drawing on Actor–Network Theory, care is viewed as a network of interconnected human and non-human actors, including practitioners, technologies, patients and policies. Based in Northeast China, a region with structurally marginalised healthcare infrastructure, this article analyses qualitative interviews with 14 practitioners. Our findings reveal three key themes: (1) tensions between AI’s rule-based logic and practitioners’ human-centred approach; (2) ethical discomfort with AI performing intimate or emotionally sensitive care, especially in end-of-life contexts; (3) structural inequalities, with weak policy and infrastructure limiting effective AI integration. The study highlights that AI offers clearer benefits for routine geriatric care, such as monitoring and basic symptom management, but its utility is far more limited in the complex, relational and ethically sensitive domain of palliative care. Proposing a model of human–AI complementarity, the article argues that technology should support rather than replace the emotional and relational aspects of care and identifies policy considerations for ethically grounded integration in resource-limited contexts.

Similar Papers
  • Research Article
  • Cited by 16
  • 10.1287/msom.2023.0093
Physician Adoption of AI Assistant
  • Jul 17, 2024
  • Manufacturing & Service Operations Management
  • Ting Hou + 3 more

Problem definition: Artificial intelligence (AI) assistants—software agents that can perform tasks or services for individuals—are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (i.e., physicians) in a real-world healthcare setting. In this paper, we investigate the impact of the AI smartness (i.e., whether the AI assistant is powered by machine learning intelligence) and the impact of AI transparency (i.e., whether physicians are informed of the AI assistant). Methodology/results: We collaborate with a leading healthcare platform to run a field experiment in which we compare physicians’ adoption behavior, that is, adoption rate and adoption timing, of smart and automated AI assistants under transparent and non-transparent conditions. We find that the smartness can increase the adoption rate and shorten the adoption timing, whereas the transparency can only shorten the adoption timing. Moreover, the impact of AI transparency on the adoption rate is contingent on the smartness level of the AI assistant: the transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms and fails to do so when the AI assistant is smart. Managerial implications: Our study can guide platforms in designing their AI strategies. Platforms should improve the smartness of AI assistants. If such an improvement is too costly, the platform should transparentize the AI assistant, especially when it is not smart. Funding: This research was supported by a Behavioral Research Assistance Grant from the C. T. Bauer College of Business, University of Houston. H. Zhao acknowledges support from Hong Kong General Research Fund [9043593]. Y. (R.) Tan acknowledges generous support from CEIBS Research [Grant AG24QCS]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.0093 .

  • Research Article
  • Cited by 3
  • 10.1371/journal.pone.0322925
Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance.
  • May 7, 2025
  • PloS one
  • Marlina Tanty Ramli Hamid + 6 more

This study evaluates the impact of artificial intelligence (AI) assistance on the diagnostic performance of radiologists with varying levels of experience in interpreting mammograms in a Malaysian tertiary referral center, particularly in women with dense breasts. A retrospective study including 434 digital mammograms interpreted by two general radiologists (12 and 6 years of experience) and two trainees (2 years of experience). Diagnostic performance was assessed with and without AI assistance (Lunit INSIGHT MMG), using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). Inter-reader agreement was measured using kappa statistics. AI assistance significantly improved the diagnostic performance of all reader groups across all metrics (p < 0.05). The senior radiologist consistently achieved the highest sensitivity (86.5% without AI, 88.0% with AI) and specificity (60.5% without AI, 59.2% with AI). The junior radiologist demonstrated the highest PPV (56.9% without AI, 74.6% with AI) and NPV (90.3% without AI, 92.2% with AI). The trainees showed the lowest performance, but AI significantly enhanced their accuracy. AI assistance was particularly beneficial in interpreting mammograms of women with dense breasts. AI assistance significantly enhances the diagnostic accuracy and consistency of radiologists in mammogram interpretation, with notable benefits for less experienced readers. These findings support the integration of AI into clinical practice, particularly in resource-limited settings where access to specialized breast radiologists is constrained.

  • Research Article
  • Cited by 6
  • 10.1001/jamanetworkopen.2025.15672
AI-Assisted vs Unassisted Identification of Prostate Cancer in Magnetic Resonance Images
  • Jun 13, 2025
  • JAMA Network Open
  • Ivo Schoots + 85 more

Artificial intelligence (AI) assistance in magnetic resonance imaging (MRI) assessment for prostate cancer shows promise for improving diagnostic accuracy but lacks large-scale observational evidence. To evaluate whether use of AI-assisted assessment for diagnosing clinically significant prostate cancer (csPCa) on MRI is superior to unassisted readings. This diagnostic study was conducted between March and July 2024 to compare unassisted and AI-assisted diagnostic performance using the AI system developed within the international Prostate Imaging-Cancer AI (PI-CAI) Consortium. The study involved 61 readers (34 experts and 27 nonexperts) from 53 centers across 17 countries. Readers assessed prostate magnetic resonance images both with and without AI assistance, providing Prostate Imaging Reporting and Data System (PI-RADS) annotations from 3 to 5 (higher PI-RADS indicated a higher likelihood of csPCa) and patient-level suspicion scores ranging from 0 to 100 (higher scores indicated a greater likelihood of harboring csPCa). Biparametric prostate MRI examinations were included for 780 men from the PI-CAI study who were included in the newly-conducted observer study. All men within the PI-CAI study had suspicion of harboring prostate cancer, sufficient diagnostic image quality, and no prior clinically significant cancer findings. Disease presence was defined by histopathology, and absence was determined by 3 or more years of follow-up. The AI system was recalibrated using 420 Dutch examinations to generate lesion-detection maps, with AI scores ranging from 1 to 10, in which 10 indicates the highest likelihood of csPCa. The remaining 360 examinations, originating from 3 Dutch centers and 1 Norwegian center, were included in the observer study. The primary outcome was diagnosis of csPCa, evaluated using the area under the receiver operating characteristic curve and sensitivity and specificity at a PI-RADS threshold of 3 or more. 
The secondary outcomes included analysis at alternate operating points and reader expertise. Among the 360 examinations of 360 men (median age, 65 years [IQR, 62-70 years]) who were included for testing, 122 (34%) harbored csPCa. AI assistance was associated with significantly improved performance, achieving a 3.3% increase in the area under the receiver operating characteristic curve (95% CI, 1.8%-4.9%; P < .001), from 0.882 (95% CI, 0.854-0.910) in unassisted assessments to 0.916 (95% CI, 0.893-0.938) with AI assistance. Sensitivity improved by 2.5% (95% CI, 1.1%-3.9%; P < .001), from 94.3% (95% CI, 91.9%-96.7%) to 96.8% (95% CI, 95.2%-98.5%), and specificity increased by 3.4% (95% CI, 0.8%-6.0%; P = .01), from 46.7% (95% CI, 39.4%-54.0%) to 50.1% (95% CI, 42.5%-57.7%), at a PI-RADS score of 3 or more. Secondary analyses demonstrated similar performance improvements across alternate operating points and a greater benefit of AI assistance for nonexpert readers. The findings of this diagnostic study of patients suspected of harboring prostate cancer suggest that AI assistance was associated with improved radiologic diagnosis of clinically significant disease. Further research is required to investigate the generalization of outcomes and effects on workflow improvement within prospective settings.

  • Research Article
  • 10.1016/j.jdent.2025.106152
Can AI assistants improve time efficiency in digital dataset preparation in virtual implant planning? A comparative study.
  • Oct 1, 2025
  • Journal of dentistry
  • Lucia Schiavon + 5 more


  • Research Article
  • 10.1002/awwa.1166
Hey, Alexa, Is My Water Safe?
  • Oct 1, 2018
  • Journal - American Water Works Association
  • David B Lafrance


  • Research Article
  • 10.28945/5539
AI Assistance Variants in Software Development Cycles
  • Jan 1, 2025
  • Issues in Informing Science and Information Technology
  • Christine Bakke + 3 more

Aim/Purpose With artificial intelligence (AI) technology improving every day, it is important to find ways to harness AI in the software development life cycle (SDLC). This research demonstrates how AI tools were incorporated into an upper-division Computer Science course to assist with development of various memory games. Background Since ChatGPT’s release in 2022, other companies have released rival chatbots, each competing for a piece of the new market. With the plethora of AI options now available, it is important for a developer to learn to use AI as an assistant within the development of a custom project. Methodology The research presented is a multi-case, cross-analysis of four student researchers in a required, senior-level Computer Science course. All students were tasked with collecting mixed-methods data on two AI assistants throughout the design and development of a unique memory app; these four students then pooled data and conducted a cross-comparative analysis. To prepare for cross analysis, standardized Likert rankings and thematic categories were developed and used consistently during data collection. AI assistants evaluated: Claude, Copilot, ChatGPT Free, and ChatGPT Paid. Throughout the development process, each student provided both of their AI assistants with the same initial queries, the results of which were given a Likert ranking, and notes were kept regarding AI accuracy. Individual datasets were examined, then pooled, and the combined dataset was used to finalize hypothesis findings. The four student-researchers presented their multi-case, mixed-methods analysis as a snapshot in time regarding the value of AI as assistants in the development of their projects. Contribution This paper builds on prior research focusing both on student experience and instructional methods in capstone-like courses. This study examines using AIs as assistants as a current trend in Computer Science education. 
Findings During multi-case analysis, two hypotheses were analyzed against the data of the four student-researchers. The cross-examination of data found no statistically significant difference in helpfulness between paid and free AI course project assistants, while non-IDE AI assistants performed significantly better than IDE assistants across 7 of 8 usage-type categories. Recommendations for Practitioners Technology instructors can use this research to incorporate AI assistants into advanced courses that focus on building custom software, with the caution that foundational coding skills and knowledge should be in place before attempting complex projects. Companies researching how AI can be integrated into the software development process can use this research to see the preferred strengths of various AIs, with cautions for use with proprietary data. Recommendations for Researchers Researchers can observe how different AIs can assist with application development. Further research is encouraged as AI capabilities continue to evolve. Impact on Society The researchers’ findings show AI in light of its current abilities and limitations in the software development life cycle. While AI assistants excelled in simple- to medium-complexity debugging tasks, there were many complex tasks where a human coder was preferred over the AI assistants; however, this is expected to change over time. Future Research As future technology strengthens AI, some aspects of this study may become historical; however, the core of the research, using AI as assistants in the development of software projects, is expected to remain pertinent to education for some time.

  • Conference Article
  • 10.28945/5540
AI Assistance Variants in Software Development Cycles
  • Jan 1, 2025
  • Michael Callahan + 3 more


  • Research Article
  • 10.1007/s00330-025-11820-w
Impact of AI assistance on radiologist interpretation of knee MRI.
  • Jul 31, 2025
  • European radiology
  • Guillaume Herpe + 9 more

Knee injuries frequently require Magnetic Resonance Imaging (MRI) evaluation, increasing radiologists' workload. This study evaluates the impact of a Knee AI assistant on radiologists' diagnostic accuracy and efficiency in detecting anterior cruciate ligament (ACL), meniscus, cartilage, and medial collateral ligament (MCL) lesions on knee MRI exams. This retrospective reader study was conducted from January 2024 to April 2024. Knee MRI studies were evaluated with and without AI assistance by six radiologists with between 2 and 10 years of experience in musculoskeletal imaging in two sessions, 1 month apart. The AI algorithm was trained on 23,074 MRI studies separate from the study dataset and tested on various knee structures, including ACL, MCL, menisci, and cartilage. The reference standard was established by the consensus of three expert MSK radiologists. Statistical analysis included sensitivity, specificity, accuracy, and Fleiss' Kappa. The study dataset involved 165 knee MRIs (89 males, 76 females; mean age, 42.3 ± 15.7 years). AI assistance improved sensitivity from 81% (134/165, 95% CI = [79.7, 83.3]) to 86% (142/165, 95% CI = [84.2, 87.5]) (p < 0.001), accuracy from 86% (142/165, 95% CI = [85.4, 86.9]) to 91% (150/165, 95% CI = [90.7, 92.1]) (p < 0.001), and specificity from 88% (145/165, 95% CI = [87.1, 88.5]) to 93% (153/165, 95% CI = [92.7, 93.8]) (p < 0.001). Sensitivity and accuracy improvements were observed across all knee structures, with statistical significance varying from p < 0.001 to p = 0.28. The Fleiss' Kappa values among readers increased from 54% (95% CI = [53.0, 55.3]) to 78% (95% CI = [76.6, 79.0]) (p < 0.001) post-AI integration. The integration of AI improved diagnostic accuracy, efficiency, and inter-reader agreement in knee MRI interpretation, highlighting the value of this approach in clinical practice. 
Question Can artificial intelligence (AI) assistance improve the diagnostic accuracy and efficiency of radiologists in detecting anterior cruciate ligament, meniscus, cartilage, and medial collateral ligament lesions in knee MRI? Findings AI assistance in knee MRI interpretation increased radiologists' sensitivity from 81% to 86% and accuracy from 86% to 91% for detecting knee lesions while improving inter-reader agreement (p < 0.001). Clinical relevance AI-assisted knee MRI interpretation enhances diagnostic precision and consistency among radiologists, potentially leading to more accurate injury detection, improved patient outcomes, and reduced diagnostic variability in musculoskeletal imaging.

  • Research Article
  • Cited by 44
  • 10.1148/radiol.230860
Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs.
  • Dec 1, 2023
  • Radiology
  • Souhail Bennani + 15 more

Background Chest radiography remains the most common radiologic examination, and interpretation of its results can be difficult. Purpose To explore the potential benefit of artificial intelligence (AI) assistance in the detection of thoracic abnormalities on chest radiographs by evaluating the performance of radiologists with different levels of expertise, with and without AI assistance. Materials and Methods Patients who underwent both chest radiography and thoracic CT within 72 hours between January 2010 and December 2020 in a French public hospital were screened retrospectively. Radiographs were randomly included until reaching 500 radiographs, with about 50% of radiographs having abnormal findings. A senior thoracic radiologist annotated the radiographs for five abnormalities (pneumothorax, pleural effusion, consolidation, mediastinal and hilar mass, lung nodule) based on the corresponding CT results (ground truth). A total of 12 readers (four thoracic radiologists, four general radiologists, four radiology residents) read half the radiographs without AI and half the radiographs with AI (ChestView; Gleamer). Changes in sensitivity and specificity were measured using paired t tests. Results The study included 500 patients (mean age, 54 years ± 19 [SD]; 261 female, 239 male), with 522 abnormalities visible on 241 radiographs. On average, for all readers, AI use resulted in an absolute increase in sensitivity of 26% (95% CI: 20, 32), 14% (95% CI: 11, 17), 12% (95% CI: 10, 14), 8.5% (95% CI: 6, 11), and 5.9% (95% CI: 4, 8) for pneumothorax, consolidation, nodule, pleural effusion, and mediastinal and hilar mass, respectively (P < .001). Specificity increased with AI assistance (3.9% [95% CI: 3.2, 4.6], 3.7% [95% CI: 3, 4.4], 2.9% [95% CI: 2.3, 3.5], and 2.1% [95% CI: 1.6, 2.6] for pleural effusion, mediastinal and hilar mass, consolidation, and nodule, respectively), except in the diagnosis of pneumothorax (-0.2%; 95% CI: -0.36, -0.04; P = .01). 
The mean reading time was 81 seconds without AI versus 56 seconds with AI (31% decrease, P < .001). Conclusion AI-assisted chest radiography interpretation resulted in absolute increases in sensitivity for all radiologists of various levels of expertise and reduced the reading times; specificity increased with AI, except in the diagnosis of pneumothorax. © RSNA, 2023 Supplemental material is available for this article.

  • Research Article
  • 10.63345/ijrmeet.org.v13.i4.10
Fine-Tuning LLMs for Personality Preservation in AI Assistants
  • Apr 1, 2025
  • International Journal of Research in Modern Engineering & Emerging Technology
  • Shilesh Karunakaran + 1 more

The application of large language models (LLMs) in artificial intelligence (AI) assistants has drawn a lot of interest due to their ability to generate conversations that are close to human-like interaction and to provide contextually relevant responses. Nonetheless, the age-old problem is still that of preserving the personality and consistency of the AI system across different interactions. Despite the advances in conversational AI, most AI assistants lack the ability to maintain a consistent personality, resorting to generating robotic conversations or those that lack personality. This work aims to close the personality conservation gap in AI assistants through the exploration of methods to fine-tune LLMs for consistent personality preservation. Through data-driven methods, the research explores the role of user selection, context recall, and user-specific interaction history in shaping and preserving the personality of an AI assistant. Additionally, the research explores the application of emotional intelligence and adaptive learning algorithms to make the assistant persona more natural, dynamic, and user-relevant. This work introduces a new paradigm for LLM fine-tuning that is able to leverage such factors while allowing responsiveness, adaptability, and user appeal. The ultimate contribution of this research is to lay the foundation for the creation of AI assistants that can provide personalized experiences without compromising reliability or user enjoyment. The expectation is that findings will have far-reaching implications in customer service, personal assistance, and other areas where consistent and engaging AI personalities are the secret to successful user interaction.

  • Research Article
  • Cited by 7
  • 10.1148/ryai.230079
Assistive AI in Lung Cancer Screening: A Retrospective Multinational Study in the United States and Japan.
  • May 1, 2024
  • Radiology. Artificial intelligence
  • Atilla P Kiraly + 24 more

Purpose To evaluate the impact of an artificial intelligence (AI) assistant for lung cancer screening on multinational clinical workflows. Materials and Methods An AI assistant for lung cancer screening was evaluated on two retrospective randomized multireader multicase studies where 627 (141 cancer-positive cases) low-dose chest CT cases were each read twice (with and without AI assistance) by experienced thoracic radiologists (six U.S.-based or six Japan-based radiologists), resulting in a total of 7524 interpretations. Positive cases were defined as those within 2 years before a pathology-confirmed lung cancer diagnosis. Negative cases were defined as those without any subsequent cancer diagnosis for at least 2 years and were enriched for a spectrum of diverse nodules. The studies measured the readers' level of suspicion (on a 0-100 scale), country-specific screening system scoring categories, and management recommendations. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) for level of suspicion and sensitivity and specificity of recall recommendations. Results With AI assistance, the radiologists' AUC increased by 0.023 (0.70 to 0.72; P = .02) for the U.S. study and by 0.023 (0.93 to 0.96; P = .18) for the Japan study. Scoring system specificity for actionable findings increased 5.5% (57% to 63%; P < .001) for the U.S. study and 6.7% (23% to 30%; P < .001) for the Japan study. There was no evidence of a difference in corresponding sensitivity between unassisted and AI-assisted reads for the U.S. (67.3% to 67.5%; P = .88) and Japan (98% to 100%; P > .99) studies. Corresponding stand-alone AI AUC system performance was 0.75 (95% CI: 0.70, 0.81) and 0.88 (95% CI: 0.78, 0.97) for the U.S.- and Japan-based datasets, respectively. 
Conclusion The concurrent AI interface improved lung cancer screening specificity in both U.S.- and Japan-based reader studies, meriting further study in additional international screening environments. Keywords: Assistive Artificial Intelligence, Lung Cancer Screening, CT Supplemental material is available for this article. Published under a CC BY 4.0 license.

  • Research Article
  • Cited by 42
  • 10.1002/cam4.4261
Artificial intelligence-assisted colonoscopy: A prospective, multicenter, randomized controlled trial of polyp detection.
  • Sep 3, 2021
  • Cancer Medicine
  • Lei Xu + 12 more

Background Artificial intelligence (AI) assistance has been considered a promising way to improve colonoscopic polyp detection, but there are limited prospective studies on real-time use of AI systems. Methods We conducted a prospective, multicenter, randomized controlled trial of patients undergoing colonoscopy at six centers. Eligible patients were randomly assigned to conventional colonoscopy (control group) or AI-assisted colonoscopy (AI group). AI assistance was our newly developed AI system for real-time colonoscopic polyp detection. The primary outcome was polyp detection rate (PDR). Secondary outcomes included polyps per positive patient (PPP), polyps per colonoscopy (PPC), and non-first polyps per colonoscopy (PPC-Plus). Results A total of 2352 patients were included in the final analysis. Compared with the control, the AI group did not show a significant increment in PDR (38.8% vs. 36.2%, p = 0.183), but its PPC-Plus was significantly higher (0.5 vs. 0.4, p < 0.05). In addition, the AI group detected more diminutive polyps (76.0% vs. 68.8%, p < 0.01) and flat polyps (5.9% vs. 3.3%, p < 0.05). The effects varied somewhat between centers. In further logistic regression analysis, AI assistance independently contributed to the increment of PDR, and the impact was more pronounced for male endoscopists, shorter insertion time but longer withdrawal time, and elderly patients with larger waist circumference. Conclusion The intervention of AI plays a limited role in overall polyp detection but increases detection of easily missed polyps. ChiCTR.org.cn number, ChiCTR1800015607.

  • Research Article
  • Cited by 1
  • 10.3390/diagnostics14232689
Enhancing Radiologist Efficiency with AI: A Multi-Reader Multi-Case Study on Aortic Dissection Detection and Prioritization.
  • Nov 28, 2024
  • Diagnostics (Basel, Switzerland)
  • Martina Cotena + 10 more

Acute aortic dissection (AD) is a life-threatening condition in which early detection can significantly improve patient outcomes and survival. This study evaluates the clinical benefits of integrating a deep learning (DL)-based application for the automated detection and prioritization of AD on chest CT angiographies (CTAs) with a focus on the reduction in the scan-to-assessment time (STAT) and interpretation time (IT). This retrospective Multi-Reader Multi-Case (MRMC) study compared AD detection with and without artificial intelligence (AI) assistance. The ground truth was established by two U.S. board-certified radiologists, while three additional expert radiologists served as readers. Each reader assessed the same CTAs in two phases: assessment unaided by AI assistance (pre-AI arm) and, after a 1-month washout period, assessment aided by device outputs (post-AI arm). STAT and IT metrics were compared between the two arms. This study included 285 CTAs (95 per reader, per arm) with a mean patient age of 58.5 years ±14.7 (SD), of which 52% were male and 37% had a prevalence of AD. AI assistance significantly reduced the STAT for detecting 33 true positive AD cases from 15.84 min (95% CI: 13.37-18.31 min) without AI to 5.07 min (95% CI: 4.23-5.91 min) with AI, representing a 68% reduction (p < 0.01). The IT also reduced significantly from 21.22 s (95% CI: 19.87-22.58 s) without AI to 14.17 s (95% CI: 13.39-14.95 s) with AI (p < 0.05). The integration of a DL-based algorithm for AD detection on chest CTAs significantly reduces both the STAT and IT. By prioritizing urgent cases, the AI-assisted approach outperforms the standard First-In, First-Out (FIFO) workflow.

  • Research Article
  • Cited by 2
  • 10.1016/j.jdent.2025.105868
Impact of artificial intelligence assistance on diagnosing periapical radiolucencies: A randomized controlled trial.
  • Sep 1, 2025
  • Journal of dentistry
  • Utku Pul + 3 more


  • Research Article
  • Cited by 66
  • 10.1016/j.ijhcs.2022.102792
The effects of domain knowledge on trust in explainable AI and task performance: A case of peer-to-peer lending
  • Feb 12, 2022
  • International Journal of Human-Computer Studies
  • Murat Dikmen + 1 more

Increasingly, artificial intelligence (AI) is being used to assist complex decision-making such as financial investing. However, there are concerns regarding the black-box nature of AI algorithms. The field of explainable AI (XAI) has emerged to address these concerns. XAI techniques can reveal how an AI decision is formed and can be used to understand and appropriately trust an AI system. However, XAI techniques still may not be human-centred and may not support human decision-making adequately. In this work, we explored how domain knowledge, identified by expert decision makers, can be used to achieve a more human-centred approach to AI. We measured the effect of domain knowledge on trust in AI, reliance on AI, and task performance in an AI-assisted complex decision-making environment. In a peer-to-peer lending simulator, non-expert participants made financial investments using an AI assistant. The presence or absence of domain knowledge was manipulated. The results showed that participants who had access to domain knowledge relied less on the AI assistant when it was incorrect and indicated less trust in the AI assistant. However, overall investing performance was not affected. These results suggest that providing domain knowledge can influence how non-expert users use AI and could be a powerful tool to help these users develop appropriate levels of trust and reliance.
