Introduction. Artificial intelligence (AI) is an effective tool for automating routine tasks in radiology. The diagnostic accuracy of AI in detecting various pathologies on medical images has generated considerable interest in the scientific community: the number of studies and meta-analyses has been growing steadily. The abundance of published evidence and the diversity of reported outcomes make it necessary to systematize the available publications. The aim of this paper is to conduct an umbrella systematic review of contemporary meta-analyses on the use of AI in radiology.

Materials and methods. PubMed was searched for studies published in English. Thirty-eight systematic reviews with meta-analyses published between 2021 and 2023 were selected for full-text analysis. The extracted data included the study goal and design, imaging modality, sample size, quality assessment of the included studies, AI diagnostic accuracy estimates, reference method parameters, and clinical efficacy metrics of AI implementation. The methodological quality of the included systematic reviews was assessed using the AMSTAR-2 tool.

Results. Nearly half (47%) of the included meta-analyses focused on the diagnosis, staging, and segmentation of malignancies. Four meta-analyses addressed the detection of maxillofacial structures in dentistry, and another four addressed the diagnosis of brain lesions. The diagnosis of COVID-19 and the diagnosis of bone fractures were each covered in three meta-analyses. One meta-analysis was reviewed for each of the following topics: colorectal polyps, pneumothorax, pulmonary embolism, osteoporosis, aneurysms, multiple sclerosis, acute cerebrovascular accident, intracranial hemorrhage, burns, and the risk of intrauterine growth restriction. Thirty-five (92%) meta-analyses assessed the risk of bias, and twenty-eight (80%) of them used the QUADAS-2 tool for this assessment.
Of these 28 papers, 14 (50%) reported a low risk of bias, 4 (14%) a moderate risk, and 10 (36%) a high risk. The major risks were associated with samples that were unbalanced in size and composition, insufficient detail about the methods, a low number of prospective studies, and a lack of external validation of the outcomes. The overall results indicate that the diagnostic accuracy of AI is comparable to, or even greater than, that of radiologists. The mean sensitivity, specificity, and area under the ROC curve were 85.2%, 89.5%, and 93.5% for AI versus 84.4%, 90.0%, and 92.8% for radiologists. However, many studies that compared the diagnostic accuracy of AI and radiologists lacked data on the number and experience of the latter. Only one paper presented results of implementing AI in routine clinical diagnosis.

Discussion. AI is capable of reducing the turnaround time for non-urgent examinations. When used to verify the primary interpretation, AI was effective in detecting false-negative results from radiologists; however, its efficacy in detecting false-positive results was inadequate. Our assessment of the quality of the systematic reviews with AMSTAR-2 shows that the methods of searching, selecting, and analyzing the literature must be improved and brought to a common standard. A specialized tool for assessing the quality of systematic reviews on AI implementation is also needed. Owing to its high diagnostic accuracy, AI is currently considered a promising tool for optimizing turnaround time. However, more evidence is needed on AI outcomes in routine clinical practice, and the quality of research methodology must be further standardized and improved.
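As a brief illustration of the accuracy metrics discussed above, the following sketch shows how sensitivity and specificity are computed from a confusion matrix. The counts are hypothetical, chosen only to mirror the pooled AI estimates; they are not data from any of the included meta-analyses.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of diseased cases correctly detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of healthy cases correctly cleared."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts (not review data),
# picked to reproduce the pooled AI estimates of 85.2% / 89.5%.
tp, fn = 852, 148   # diseased cases: detected / missed
tn, fp = 895, 105   # healthy cases: cleared / falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 85.2%
print(f"specificity = {specificity(tn, fp):.1%}")  # 89.5%
```

The area under the ROC curve, by contrast, cannot be derived from a single confusion matrix: it summarizes sensitivity and specificity across all decision thresholds of a continuous model output.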