The Impact of Artificial Intelligence on X-Ray Interpretation and Diagnostic Accuracy
This paper reviews the impact of Artificial Intelligence (AI) on X-ray interpretation and diagnostic accuracy. AI, particularly through deep learning models such as Convolutional Neural Networks (CNNs), addresses cognitive and systemic bottlenecks in human-based image analysis. Key applications include Computer-Aided Detection (CAD) systems and AI-driven workflow optimization tools. AI models often achieve diagnostic accuracy comparable to, or exceeding, that of human radiologists, with gains in sensitivity and specificity for specific tasks, particularly mammography screening. However, significant limitations persist, including false positives, limited generalizability across clinical settings and patient populations, and the "black box" nature of many algorithms. The paper critically examines the ethical considerations of deploying AI in clinical practice, focusing on algorithmic bias, data privacy, and accountability frameworks. The future of radiology lies in a collaborative human-AI paradigm in which AI augments radiologists' capabilities while clinicians retain responsibility for complex interpretation, contextual understanding, and patient care. Successful and ethical integration of AI into routine radiography requires continuous validation against robust clinical ground truths, transparent regulatory oversight, and a sustained commitment to interdisciplinary research.
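To make the two ingredients the abstract refers to concrete, a convolutional classifier and the sensitivity/specificity metrics used to compare it with radiologists, the following is a minimal, hypothetical sketch in PyTorch. The model architecture, its dimensions, and the random tensors standing in for radiographs are illustrative assumptions, not any system evaluated in the reviewed literature.

```python
# Minimal, hypothetical sketch (not any specific published model): a small CNN
# for binary chest X-ray classification plus sensitivity/specificity metrics.
import torch
import torch.nn as nn

class TinyChestXrayCNN(nn.Module):
    """Toy convolutional classifier for 1-channel 224x224 chest X-rays."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

def sensitivity_specificity(y_true: torch.Tensor, y_pred: torch.Tensor):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), for labels in {0,1}."""
    tp = ((y_pred == 1) & (y_true == 1)).sum().item()
    tn = ((y_pred == 0) & (y_true == 0)).sum().item()
    fp = ((y_pred == 1) & (y_true == 0)).sum().item()
    fn = ((y_pred == 0) & (y_true == 1)).sum().item()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

if __name__ == "__main__":
    model = TinyChestXrayCNN()
    images = torch.randn(8, 1, 224, 224)   # stand-in for X-ray tensors
    labels = torch.randint(0, 2, (8,))
    preds = model(images).argmax(dim=1)
    print(sensitivity_specificity(labels, preds))
```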
- Research Article
- 10.1148/ryai.2021210104
- Jul 1, 2021
- Radiology: Artificial Intelligence
Clinical Validation Is the Key to Adopting AI in Clinical Practice.
- Research Article
- 10.1053/j.gastro.2022.10.021
- Nov 1, 2022
- Gastroenterology
Comparative Performance of Artificial Intelligence Optical Diagnosis Systems for Leaving in Situ Colorectal Polyps
- Research Article
- 10.1016/j.jacr.2021.04.002
- Apr 20, 2021
- Journal of the American College of Radiology
2020 ACR Data Science Institute Artificial Intelligence Survey.
- Research Article
- 10.1093/jamia/ocae165
- Jun 28, 2024
- Journal of the American Medical Informatics Association (JAMIA)
Artificial intelligence (AI) models trained using medical images for clinical tasks often exhibit bias in the form of subgroup performance disparities. However, since not all sources of bias in real-world medical imaging data are easily identifiable, it is challenging to comprehensively assess their impacts. In this article, we introduce an analysis framework for systematically and objectively investigating the impact of biases in medical images on AI models. Our framework utilizes synthetic neuroimages with known disease effects and sources of bias. We evaluated the impact of bias effects and the efficacy of 3 bias mitigation strategies in counterfactual data scenarios on a convolutional neural network (CNN) classifier. The analysis revealed that training a CNN model on the datasets containing bias effects resulted in expected subgroup performance disparities. Moreover, reweighing was the most successful bias mitigation strategy for this setup. Finally, we demonstrated that explainable AI methods can aid in investigating the manifestation of bias in the model using this framework. The value of this framework is showcased in our findings on the impact of bias scenarios and efficacy of bias mitigation in a deep learning model pipeline. This systematic analysis can be easily expanded to conduct further controlled in silico trials in other investigations of bias in medical imaging AI. Our novel methodology for objectively studying bias in medical imaging AI can help support the development of clinical decision-support tools that are robust and responsible.
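The reweighing strategy that this framework found most effective can be made concrete with a short sketch. The code below follows the standard Kamiran-and-Calders formulation, w(g, y) = P(g) * P(y) / P(g, y); the toy group and label arrays are assumptions chosen for illustration and are not the study's synthetic neuroimaging data.

```python
# Hedged illustration (not the authors' code): "reweighing" assigns each training
# sample a weight so that protected subgroup and label become statistically
# independent in the weighted data:  w(g, y) = P(g) * P(y) / P(g, y).
import numpy as np

def reweighing_weights(groups: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return one weight per sample from subgroup/label frequencies."""
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            p_joint = mask.sum() / n
            if p_joint == 0:
                continue
            p_g = (groups == g).sum() / n
            p_y = (labels == y).sum() / n
            weights[mask] = (p_g * p_y) / p_joint
    return weights

# Example: subgroup 1 is under-represented among positive labels, so its
# positive samples receive weights > 1 during CNN training (e.g., passed as
# per-sample multipliers on the cross-entropy loss).
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(reweighing_weights(groups, labels).round(2))
```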
- Research Article
- 10.1148/ryai.2020200088
- May 1, 2020
- Radiology: Artificial Intelligence
Is It Time to Get Rid of Black Boxes and Cultivate Trust in AI?
- Research Article
- 10.1097/naq.0000000000000710
- Sep 18, 2025
- Nursing Administration Quarterly
This study explored nurses' perspectives on the adoption and utilization of artificial intelligence (AI) in clinical practice within a large university-affiliated health system in the southeastern United States. Through a survey enriched by open-ended questions, we captured the unique concerns and suggestions of nursing professionals regarding the deployment of AI technologies in a range of clinical settings. The majority of nurses have limited exposure to and experience with generative and predictive AI tools. In addition, they have concerns about the availability of related training opportunities, AI process integration, and ethical implications of AI implementation. There are critical workforce development needs and substantial opportunities for enhanced training to incorporate both ethical considerations and technical skills. This research illuminates the perspective and experience of nurses using AI. Specifically, it provides insights into the nursing workforce's readiness to adopt and utilize AI in clinical practice. This research also informs the integration of AI-focused curriculum and professional development for nurses. Specifically, more structured training is needed for nurses to use AI responsibly. Nurse administrators should be aware of the hesitations and concerns of this large population, as nurses are ultimately the front-line end users.
- Discussion
- 10.1016/s2589-7500(21)00255-7
- Dec 21, 2021
- The Lancet Digital Health
The study by Jarrel Seah and colleagues (Seah JCY, Tang CHM, Buchlak QD, et al. Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. Lancet Digit Health 2021; 3: 496-506), published in The Lancet Digital Health, shows that radiologists' performance improved when assisted by a comprehensive chest x-ray deep-learning model. Specifically, an EfficientNet-based model was trained on 821 681 images (284 649 patients) covering 127 chest x-ray findings, and the resulting deep-learning model was used to assist diagnoses made by 20 experienced radiologists. This deep-learning model is a breakthrough as a support system for radiologists, suggesting synergistic improvements from cooperation between radiologists and artificial intelligence in clinical practice. However, a key issue in the study design should be addressed: the deep-learning model is at a disadvantage compared with a human. Specifically, the training dataset of 520 014 cases was labelled by radiologists using chest x-ray images and clinical reports, whereas the test dataset of 2568 cases was labelled by radiologists using anonymised clinical information, past chest x-ray images, and relevant reports on findings from chest CT. The deep-learning model therefore had no opportunity to learn the characteristic features of lesions in the light of clinical information from chest CT. Chest CT provides three-dimensional data (axial, sagittal, and coronal) that enhance anatomical detail of the lung parenchyma and contribute more information than conventional x-ray, facilitating more precise diagnosis by radiologists. A deep-learning model that can benefit from the interpretation and experience of radiologists is needed. We propose a transfer-learning strategy that transfers characteristic features, such as the morphology and distribution of lung cancer, from CT images to x-ray images. Transfer learning is a method inspired by the human capability to transfer knowledge across domains, and the diagnostic ability of a deep-learning model should improve by sharing information across imaging methods (Lotter W, Diab AR, Haslam B, et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat Med 2021; 27: 244-249). Furthermore, the diagnostic decisions of radiologists are structured hierarchically, and the initial diagnosis has fewer potential interpretations than diagnosis from chest CT images (An G, Akiba M, Omodaka K, Nakazawa T, Yokota H. Hierarchical deep learning models using transfer learning for disease detection and classification based on small number of medical images. Sci Rep 2021; 11: 4250). A clinical setting that considers the diagnostic process from chest x-ray to CT examination should give a deep-learning model greater clinical relevance, which could help radiologists reach a diagnosis by considering additional information at the first presentation of chest x-rays (Larici AR, Cicchetti G, Marano R, et al. Multimodality imaging of COVID-19 pneumonia: from diagnosis to follow-up. A comprehensive review. Eur J Radiol 2020; 131: 109217). The generalisability of the model in different geographical settings should also be explored, since deep learning is a promising technology that can perform quantitative evaluations and share medical resources globally (LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521: 436-444). The transfer-learning strategy offers the possibility of resolving the uneven distribution of medical resources, including imaging methods, and should contribute to bias mitigation. This comprehensive chest x-ray deep-learning model is a breakthrough that could accelerate diagnosis by radiologists. We declare no competing interests.
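The transfer-learning strategy proposed in this letter, reusing features learned from CT for chest x-ray interpretation, follows a generic fine-tuning pattern, sketched below under stated assumptions: the ResNet backbone, the hypothetical checkpoint path ct_pretrained_resnet18.pt, the frozen-layer split, and the binary head are all illustrative choices, not the authors' pipeline.

```python
# Minimal sketch of the generic transfer-learning pattern the letter proposes
# (CT-derived features reused for X-ray classification); dataset names, shapes,
# and the checkpoint path are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# 1) Backbone assumed to be pretrained on a CT-derived task.
backbone = models.resnet18(weights=None)
backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# backbone.load_state_dict(torch.load("ct_pretrained_resnet18.pt"))  # hypothetical checkpoint

# 2) Freeze early layers so CT-learned low-level features are transferred unchanged.
for name, param in backbone.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# 3) Replace the head for the chest X-ray task and fine-tune only unfrozen layers.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # e.g., finding vs. no finding
optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

xray_batch = torch.randn(4, 1, 224, 224)   # stand-in for grayscale X-rays
targets = torch.randint(0, 2, (4,))
loss = criterion(backbone(xray_batch), targets)
loss.backward()
optimizer.step()
```

Freezing all but the last residual stage and the new head is one common design choice when the target dataset is small; with more x-ray data, unfreezing the whole backbone at a low learning rate is an equally plausible variant.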
- Research Article
- 10.1093/jbi/wbaf027
- May 30, 2025
- Journal of Breast Imaging
Artificial intelligence (AI) in breast imaging has garnered significant attention given numerous reports of improved efficiency and accuracy and its potential to absorb growing imaging volumes in the face of limited physician resources. While AI models are developed with specific data points, on specific equipment, and in specific populations, the real-world clinical environment is dynamic and patient populations are diverse, which can limit generalizability and widespread adoption of AI in clinical practice. Implementation of AI models into clinical practice requires focused attention on the potential for AI bias to affect outcomes. The following review presents the concept, sources, and types of AI bias to be considered when implementing AI models and offers suggestions on strategies to mitigate AI bias in practice.
- Research Article
- 10.1213/ane.0000000000006752
- Dec 6, 2023
- Anesthesia & Analgesia
This study explored physician anesthesiologists' knowledge, exposure, and perceptions of artificial intelligence (AI) and their associations with attitudes and expectations regarding its use in clinical practice. The findings highlight the importance of understanding anesthesiologists' perspectives for the successful integration of AI into anesthesiology, as AI has the potential to revolutionize the field. A cross-sectional survey of 27,056 US physician anesthesiologists was conducted to assess their knowledge, perceptions, and expectations regarding the use of AI in clinical practice. The primary outcome measured was attitude toward the use of AI in clinical practice, with scores of 4 or 5 on a 5-point Likert scale indicating positive attitudes. The anticipated impact of AI on various aspects of professional work was measured using a 3-point Likert scale. Logistic regression was used to explore the relationship between participant responses and attitudes toward the use of AI in clinical practice. The 2021 survey of 27,056 US physician anesthesiologists received 1086 responses (4% response rate). Most respondents were male (71%) and active clinicians (93%), and 34% were younger than 45 years. A majority of anesthesiologists (61%) had some knowledge of AI and 48% had a positive attitude toward using AI in clinical practice. While most respondents believed that AI can improve health care efficiency (79%), timeliness (75%), and effectiveness (69%), they are concerned that its integration in anesthesiology could lead to a decreased demand for anesthesiologists (45%) and decreased earnings (45%). Within a decade, respondents expected AI would outperform them in predicting adverse perioperative events (83%), formulating pain management plans (67%), and conducting airway exams (45%). The absence of algorithmic transparency (60%), an ambiguous environment regarding malpractice (47%), and the possibility of medical errors (47%) were cited as significant barriers to the use of AI in clinical practice. Respondents indicated that their motivation to use AI in clinical practice stemmed from its potential to enhance patient outcomes (81%), lower health care expenditures (54%), reduce bias (55%), and boost productivity (53%). Variables associated with positive attitudes toward AI use in clinical practice included male gender (odds ratio [OR], 1.7; P < .001), 20+ years of experience (OR, 1.8; P < .01), higher AI knowledge (OR, 2.3; P = .01), and greater AI openness (OR, 10.6; P < .01). Anxiety about future earnings was associated with negative attitudes toward AI use in clinical practice (OR, 0.54; P < .01). Understanding anesthesiologists' perspectives on AI is essential for the effective integration of AI into anesthesiology, as AI has the potential to revolutionize the field.
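The analysis reported above (odds ratios with p-values from logistic regression on a binary attitude outcome) can be illustrated with a minimal sketch on synthetic data; the predictor names, effect sizes, and sample size below are assumptions and do not reproduce the survey dataset.

```python
# Hedged sketch (synthetic data, not the survey data): estimating odds ratios
# for a binary "positive attitude toward AI" outcome with logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
ai_knowledge = rng.integers(0, 2, n)      # 1 = higher self-rated AI knowledge (assumed predictor)
years_20plus = rng.integers(0, 2, n)      # 1 = 20+ years of experience (assumed predictor)
# Synthetic outcome: both predictors raise the log-odds of a positive attitude.
logits = -0.5 + 0.8 * ai_knowledge + 0.6 * years_20plus
positive_attitude = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = sm.add_constant(np.column_stack([ai_knowledge, years_20plus]))
fit = sm.Logit(positive_attitude, X).fit(disp=False)

odds_ratios = np.exp(fit.params)          # exponentiated coefficients = odds ratios
print("odds ratios:", odds_ratios.round(2))
print("p-values:", fit.pvalues.round(3))
```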
- Research Article
- 10.1097/sla.0000000000005319
- Nov 23, 2021
- Annals of Surgery
Artificial Intelligence for Computer Vision in Surgery: A Call for Developing Reporting Guidelines.
- Research Article
- 10.32628/cseit25113370
- Jun 15, 2025
- International Journal of Scientific Research in Computer Science, Engineering and Information Technology
Artificial Intelligence (AI) is rapidly transforming healthcare by improving diagnostic accuracy, optimizing workflows, and accelerating research. This review surveys key aspects of AI applications in diagnostic imaging, predictive modeling, clinical decision support, robotic surgery, and drug discovery. For example, convolutional neural networks (CNNs) have achieved over 89% accuracy in interpreting chest radiographs, while generative models like GENTRL have identified novel drug compounds in under two months. AI tools now rival or surpass human experts in early sepsis detection, skin cancer classification, and stroke risk prediction using electronic health records (EHRs). Natural language processing (NLP) also enables the extraction of actionable insights from unstructured clinical texts, aiding personalized care. Despite these advances, ethical and practical concerns persist. Issues such as algorithmic bias, lack of transparency, and data privacy risks challenge the safe integration of AI into clinical practice. Models trained on biased datasets may worsen health disparities, and the opaque nature of many AI systems limits clinician trust, underscoring the need for explainable AI (XAI). This review synthesizes current literature to assess AI's strengths, limitations, and future potential in healthcare. It calls for robust validation, interdisciplinary collaboration, inclusive data practices, and the creation of ethical frameworks to guide AI deployment. When responsibly implemented, AI has the potential to enhance clinical decision-making, reduce diagnostic errors, and improve health outcomes globally.
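One simple instance of the explainable-AI (XAI) techniques this review calls for is a gradient-based saliency map, sketched below with an untrained stand-in model; nothing here reproduces a system from the cited work, and the "abnormal class" index is an assumption.

```python
# Illustrative sketch only: a plain input-gradient saliency map, one basic XAI
# technique for image classifiers; the model is an untrained stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.randn(1, 1, 224, 224, requires_grad=True)  # stand-in radiograph
score = model(image)[0, 1]      # logit of the assumed "abnormal" class
score.backward()                # gradient of the class score w.r.t. input pixels

saliency = image.grad.abs().squeeze()   # high values = pixels that most affect the score
print(saliency.shape)                   # torch.Size([224, 224])
```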
- Peer Review Report
- 10.7554/elife.83662.sa1
- Dec 12, 2022
Decision letter: Development and evaluation of a live birth prediction model for evaluating human blastocysts from a retrospective study
- Peer Review Report
- 10.7554/elife.83662.sa0
- Dec 12, 2022
Editor's evaluation: Development and evaluation of a live birth prediction model for evaluating human blastocysts from a retrospective study
- Research Article
- 10.1007/s00535-022-01849-9
- Jan 1, 2022
- Journal of Gastroenterology
Background: Ultrasonography (US) is widely used for the diagnosis of liver tumors. However, the accuracy of the diagnosis largely depends on the visual perception of humans. Hence, we aimed to construct artificial intelligence (AI) models for the diagnosis of liver tumors in US. Methods: We constructed three AI models based on still B-mode images: model-1 using 24,675 images, model-2 using 57,145 images, and model-3 using 70,950 images. A convolutional neural network was trained on the US images. The four-class discrimination of liver tumors by AI, namely, cysts, hemangiomas, hepatocellular carcinoma, and metastatic tumors, was examined. The accuracy of the AI diagnosis was evaluated using tenfold cross-validation. The diagnostic performances of the AI models and human experts were also compared using an independent test cohort of video images. Results: The diagnostic accuracies of model-1, model-2, and model-3 for the four tumor types are 86.8%, 91.0%, and 91.1%, whereas those for malignant tumors are 91.3%, 94.3%, and 94.3%, respectively. In the independent comparison of the AIs and physicians, the percentages of correct diagnoses (accuracies) by the AIs are 80.0%, 81.8%, and 89.1% for model-1, model-2, and model-3, respectively. Meanwhile, the median percentages of correct diagnoses are 67.3% (range 63.6%-69.1%) and 47.3% (45.5%-47.3%) for human experts and non-experts, respectively. Conclusion: The performance of the AI models surpassed that of human experts in both the four-class discrimination and the benign-malignant discrimination of liver tumors. Thus, the AI models can help prevent human errors in US diagnosis.
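The tenfold cross-validation used to estimate accuracy in this study can be illustrated with a minimal sketch; the study trained CNNs on B-mode ultrasound images, whereas the example below substitutes synthetic tabular features and a logistic-regression stand-in so that it stays self-contained and runnable.

```python
# Hedged sketch (synthetic features, not the ultrasound data): tenfold
# cross-validated accuracy for a four-class tumor classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic 4-class problem standing in for cyst / hemangioma / HCC / metastasis.
X, y = make_classification(
    n_samples=2000, n_features=64, n_informative=20,
    n_classes=4, random_state=0,
)
clf = LogisticRegression(max_iter=2000)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"tenfold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```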
- Research Article
- 10.1136/flgastro-2021-101994
- Jan 17, 2022
- Frontline Gastroenterology
Background and aims: With the potential integration of artificial intelligence (AI) into clinical practice, it is essential to understand end users' perception of this novel technology. The aim of this study,...