The Current Landscape of Artificial Intelligence in Imaging for Transcatheter Aortic Valve Replacement

Abstract

Purpose: This review explores the current landscape of AI applications in imaging for TAVR, emphasizing the potential and limitations of these tools for (1) automating the image analysis and reporting process, (2) improving procedural planning, and (3) offering additional insight into post-TAVR outcomes. Finally, the direction of future research necessary to move these tools toward clinical integration is discussed.

Recent Findings: Transcatheter aortic valve replacement (TAVR) has become a pivotal treatment option for select patients with severe aortic stenosis, and its indications continue to broaden. Noninvasive imaging techniques such as CTA and MRA have become routine for patient selection, preprocedural planning, and predicting the risk of complications. Because the current methods for pre-TAVR image analysis are labor-intensive and subject to significant inter-operator variability, experts are looking to artificial intelligence (AI) as a potential solution.

Summary: AI has the potential to significantly enhance the planning, execution, and post-procedural follow-up of TAVR. While AI tools are promising, the irreplaceable value of nuanced clinical judgment by skilled physician teams must not be overlooked. With continued research, collaboration, and careful implementation, AI can become an integral part of imaging for TAVR, ultimately improving patient care and outcomes.
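To make the automation claim above concrete, the sketch below illustrates one step such a pipeline might automate: deriving annulus sizing values from a segmentation mask on the reformatted annular plane of a pre-TAVR CT. This is a minimal illustration, not the tooling described in this review; the segmentation itself (typically a trained deep-learning model) is assumed to have already produced the binary mask, and the synthetic circular mask is used purely for demonstration.

```python
# Minimal sketch (not the reviewed tooling): the measurement step an AI pre-TAVR
# pipeline might automate once a segmentation model has produced a binary mask of
# the aortic annulus on the reformatted annular plane. Only numpy is used; the
# synthetic circular mask below stands in for real model output.
import numpy as np

def annulus_metrics(mask_slice: np.ndarray, pixel_spacing_mm: float) -> dict:
    """Derive annulus sizing values from a 2D boolean mask at the annular plane."""
    area_mm2 = float(mask_slice.sum()) * pixel_spacing_mm ** 2
    # Area-derived diameter, one of the standard prosthesis-sizing parameters.
    area_derived_diameter_mm = 2.0 * float(np.sqrt(area_mm2 / np.pi))
    return {"annulus_area_mm2": area_mm2,
            "area_derived_diameter_mm": area_derived_diameter_mm}

# Demonstration with a synthetic circular "annulus" of radius 12 mm on a 0.5 mm grid.
spacing_mm = 0.5
yy, xx = np.mgrid[-60:60, -60:60].astype(float) * spacing_mm
mask = (xx ** 2 + yy ** 2) <= 12.0 ** 2
print(annulus_metrics(mask, spacing_mm))   # ~452 mm^2 area, ~24 mm diameter
```

In a clinical pipeline such values would typically be written into a structured pre-TAVR report together with perimeter-derived diameter, coronary ostial heights, and calcium burden, which is the kind of reporting step the abstract describes as automatable.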

Similar Papers
  • Discussion
  • Cited by 6
  • 10.1016/j.ejmp.2021.05.008
Focus issue: Artificial intelligence in medical physics.
  • Mar 1, 2021
  • Physica Medica
  • F Zanca + 11 more

  • Research Article
  • Cited by 37
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI bring that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.).
Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity.

Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging.
In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w

  • Research Article
  • 10.1016/j.sleep.2025.108662
Artificial intelligence in imaging for obstructive sleep apnea: A comprehensive review.
  • Jan 1, 2026
  • Sleep medicine
  • Xiaoxuan Zhang + 3 more

  • Front Matter
  • Cited by 1
  • 10.1053/j.jvca.2022.05.017
The AVATAR Trial for Severe Asymptomatic Aortic Stenosis: Wait or Operate?
  • May 18, 2022
  • Journal of Cardiothoracic and Vascular Anesthesia
  • Peter J Neuburger + 2 more

  • Research Article
  • Cited by 30
  • 10.1016/j.ejmp.2021.03.015
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
  • Mar 1, 2021
  • Physica Medica
  • Nico Buls + 4 more

  • Research Article
  • Cited by 15
  • 10.1161/circulationaha.121.058598
Is Asymptomatic Severe Aortic Stenosis Still a Waiting Game?
  • Mar 22, 2022
  • Circulation
  • Graham S Hillis + 2 more

  • Front Matter
  • Cited by 11
  • 10.1016/j.jval.2021.12.009
The Value of Artificial Intelligence for Healthcare Decision Making—Lessons Learned
  • Jan 31, 2022
  • Value in Health
  • Danielle Whicher + 1 more

  • Research Article
  • 10.34190/ecie.19.1.2468
Exploring the potential of AI to increase productivity in small marketing teams
  • Sep 20, 2024
  • European Conference on Innovation and Entrepreneurship
  • Aniko Szenftner + 2 more

Marketing scientists as well as practitioners believe that artificial intelligence (AI) holds the promise of productivity gains for organizations. However, there has been little scientific research into these theories. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools. This effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Of the six marketing disciplines analyzed in depth, SEO/content and design in particular demonstrated significant productivity gains, both with generative AI (GAI) tools the team already subscribed to, such as ChatGPT 4 and Canva, and with new AI solutions. While an AI tool's level of integration showed only a weak positive productivity impact, future studies are suggested to further investigate this variable by comparing the effects of less advanced but more accessible tools like generative AI versus highly advanced, but less accessible, business AI. Having navigated the vast and dynamic landscape of AI tools, the insights further emphasize the importance of sharing AI experience and making informed decisions, which implies knowing one's own user rights and staying up to date on AI advancements. Zooming out from the process level, the literature review further highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and an AI mindset, and implementing AI routines and responsibilities. Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.

  • Research Article
  • Cited by 115
  • 10.1161/01.cir.0000015343.76143.13
Evaluation and Management of Patients With Aortic Stenosis
  • Apr 16, 2002
  • Circulation
  • Blase A Carabello

Case presentation: A 66-year-old man is referred to a cardiologist for the evaluation of a heart murmur. The patient claims to be entirely asymptomatic, although his wife notes that he has decreased his physical activity over the past two years because he is “getting old.” At physical examination, his blood pressure was 120/70 mm Hg; pulse, 80 bpm; respiration, 13 breaths per minute; and temperature, 99.0°F. Cardiovascular examination revealed normal central venous pressure. His carotid upstrokes were reduced in volume and delayed in upstroke. Cardiac examination revealed a forceful sustained apical impulse in its normal position. There was a 3/6 late-peaking systolic ejection murmur heard at the right upper sternal border radiating to the neck. The rest of the physical examination was unremarkable. Echo-Doppler evaluation revealed an ejection fraction of 0.60, a left ventricular free wall thickness of 1.3 cm, and a peak transaortic flow velocity of 4.5 m/s. How should this patient be managed? Should he undergo aortic valve replacement now? Should he undergo longitudinal follow-up to monitor progression of his aortic stenosis? Over the past 40 years, diagnostic techniques, substitute cardiac valves, and valve implantation surgery have undergone continued improvement, reducing the risk of the valve replacement and enhancing its benefits. Thus, the risk-benefit analysis of valve surgery has tilted in favor of increasingly early intervention for valve disease. The following is a summary incorporating this concept into the current strategy for managing patients with aortic stenosis such as the one described above. The patient with severe aortic stenosis who presents with symptoms represents the most straightforward management strategy for the disease. Survival is nearly normal until the classic symptoms of angina, syncope, or dyspnea develop.1 However, only 50% of patients who present with angina survive 5 years, whereas 50% survival is 3 years for patients who …
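For readers tracking the numbers in this vignette, the peak velocity reported on echo-Doppler is conventionally converted to a peak instantaneous transaortic gradient with the simplified Bernoulli equation; the worked example below uses the 4.5 m/s value from the case. The formula and the severity threshold cited are general echocardiographic conventions, not taken from this article.

```latex
% Simplified Bernoulli estimate (clinical convention: v in m/s, gradient in mmHg)
\Delta P_{\text{peak}} \;\approx\; 4\,v_{\max}^{2}
  \;=\; 4 \times (4.5\ \mathrm{m/s})^{2}
  \;=\; 81\ \mathrm{mmHg}
```

By this convention the patient's 4.5 m/s velocity already places him in the range commonly labeled severe aortic stenosis (v_max of at least 4 m/s), which frames the management question the case poses.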

  • Front Matter
  • Cited by 111
  • 10.1161/01.cir.0000074243.02378.80
Why angina in aortic stenosis with normal coronary arteriograms?
  • Jul 1, 2003
  • Circulation
  • K Lance Gould + 1 more

Hypertrophy is considered one of the major mechanisms of the myocardium for adapting to hemodynamic overload. More muscle mass provides more contractile elements for generating the extra work required by the overload. In pressure overload of aortic valve stenosis, concentric left ventricular hypertrophy (LVH) normalizes wall stress, a key determinant of ejection performance.1 Afterload is often expressed as wall stress (pressure×radius/thickness). As the pressure term in the numerator increases, it is offset by an increase in the thickness term of the denominator. In this way, the high systolic pressure required to drive blood through even a very stenotic aortic valve can be consistent with normal afterload and normal ejection fraction. See p 3170 Unfortunately, hypertrophy not only provides benefits but also has many pathological consequences. One of these is myocardial ischemia and the attendant angina reported by patients with aortic stenosis despite normal epicardial coronary arteries. The onset of angina greatly increases the risk of sudden death compared with the risk in asymptomatic patients with aortic valve stenosis.2,3 Angina occurs when myocardial oxygen demand exceeds supply. Demand is proportional to heart rate and wall stress, and the latter can be elevated in cases of aortic stenosis when hypertrophy is inadequate to normalize stress.1 After aortic valve replacement, there is marked regression of hypertrophy that may occur over the next several months to years,4 but angina is relieved immediately. Relief of angina immediately after surgery is probably due to the combination of sudden decreased oxygen demand after removal of pressure overload and increased oxygen supply of improved perfusion. However, there are remaining questions about the physiological mechanisms for reduced myocardial oxygen supply (coronary blood flow) in aortic stenosis and its improvement after relief of pressure overload. Specifically, what is it about critical aortic stenosis that is “critical” …
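The wall-stress argument above can be written compactly. The relation below is the Laplace-type expression the passage refers to, with constants and exact geometric factors omitted; it is included only to show why a matched increase in wall thickness can hold stress roughly constant despite a higher systolic pressure.

```latex
% Laplace-type wall-stress relation referenced above (geometric constants omitted)
\sigma \;\propto\; \frac{P \cdot r}{h}
\qquad\Rightarrow\qquad
\text{if } P \to 2P \text{ and } h \to 2h \text{ (concentric hypertrophy), then } \sigma \text{ is unchanged.}
```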

  • Research Article
  • 10.1136/bmjopen-2025-099921
The introduction and adoption of artificial intelligence in systematic literature reviews: a discrete choice experiment.
  • Oct 15, 2025
  • BMJ open
  • Seye Abogunrin + 6 more

Systematic literature reviews (SLRs) are essential for synthesising research evidence and guiding informed decision-making. However, SLRs require significant resources and substantial effort in terms of workload. The introduction of artificial intelligence (AI) tools can reduce this workload. This study aims to investigate preferences in SLR screening, focusing on trade-offs related to tool attributes. A discrete choice experiment (DCE) was performed in which participants completed 13 or 14 choice tasks featuring AI tools with varying attributes. Data were collected via an online survey, where participants provided background on their education and experience. Professionals who had published SLRs indexed in PubMed, or who were affiliated with a recent Health Economics and Outcomes Research conference, were included as participants. Participants considered the use of a hypothetical AI tool in SLRs with different attributes. Key attributes for AI tools were identified through a literature review and expert consultations. These attributes included the AI tool's role in screening, required user proficiency, sensitivity, workload reduction, and the investment needed for training. The primary outcome was participants' adoption of the AI tool, that is, the likelihood of preferring the AI tool in the choice experiment under different configurations of attribute levels, as captured through the DCE choice tasks. Statistical analysis was performed using a conditional multinomial logit model. An additional analysis was performed by including demographic characteristics (such as education, experience with SLR publication, and familiarity with AI) as interaction variables. The study received responses from 187 participants with diverse experience in performing SLRs and AI use. Familiarity with AI was generally low, with 55.6% of participants being (very) unfamiliar with AI. In contrast, intermediate proficiency in AI tools is positively associated with adoption (p=0.030). Similarly, workload reduction is also strongly linked to adoption (p<0.001). Interestingly, if expert proficiency is needed for the AI, authors with more scientific experience in their profession are less likely to adopt AI (p=0.009). However, more experience specifically with SLR publications increases AI adoption likelihood (p=0.001). The findings suggest that workload reduction is not the only consideration for SLR reviewers when using AI tools. The key to AI adoption in SLRs is creating reliable, workload-reducing tools that assist rather than replace human reviewers, with moderate proficiency requirements and high sensitivity.
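For readers unfamiliar with the analysis named above, the sketch below shows what fitting a conditional (McFadden) multinomial logit to DCE choice tasks involves. It is a generic illustration with simulated data, not the authors' model; the attribute count, task structure, and values are invented, and only numpy and scipy are assumed.

```python
# Minimal sketch (not the authors' analysis): a conditional (McFadden) multinomial
# logit fitted to simulated discrete-choice data. X holds the attribute levels of
# each alternative in each choice task; `chosen` records which alternative was
# picked. Attribute names, counts, and data are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, chosen):
    # Utility of every alternative in every task, then softmax over alternatives.
    utilities = X @ beta                                    # (tasks, alternatives)
    utilities -= utilities.max(axis=1, keepdims=True)       # numerical stability
    log_probs = utilities - np.log(np.exp(utilities).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(chosen)), chosen].sum()

rng = np.random.default_rng(0)
n_tasks, n_alts, n_attrs = 200, 2, 3   # e.g. sensitivity, workload reduction, training cost
X = rng.normal(size=(n_tasks, n_alts, n_attrs))
true_beta = np.array([1.0, 0.8, -0.5])                      # invented "part-worths"
chosen = (X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))).argmax(axis=1)

fit = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), args=(X, chosen), method="BFGS")
print("estimated part-worths:", np.round(fit.x, 2))         # should land near true_beta
```

In a DCE analysis of this kind, the sign and relative magnitude of the estimated coefficients are read as attribute preferences, which is how findings such as the positive association with workload reduction are obtained.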

  • Research Article
  • Cited by 6
  • 10.1108/lhtn-08-2024-0131
Artificial intelligence (AI) tools for academic research
  • Sep 17, 2024
  • Library Hi Tech News
  • Adetoun A Oyelude

Purpose: The purpose of the paper is to explore the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process. AI tools are transforming academic research, offering numerous benefits and challenges.
Design/methodology/approach: Academic research is undergoing a significant transformation with the emergence of AI tools. These tools have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. An overview of AI applications in literature review, data analysis, writing and proofreading is given, discussing their benefits and limitations. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used in various stages of the research process. AI was used in some of the searches for AI applications in use.
Findings: The analysis reveals that AI tools can enhance research efficiency, accuracy and quality, but also raise important ethical and methodological considerations. AI tools have the potential to significantly enhance academic research, but their adoption requires careful consideration of methodological and ethical implications. The integration of AI tools also raises questions about authorship, accountability and the role of human researchers. The authors conclude by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption.
Originality/value: As AI continues to evolve, it is essential for researchers, institutions and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the contribution of the paper to knowledge.

  • Research Article
  • 10.14444/8778
Artificial Intelligence: The Prevalent Coauthor Among Early-Career Surgeons.
  • Jul 14, 2025
  • International journal of spine surgery
  • Franziska C S Altorfer + 3 more

  • Research Article
  • 10.2196/76130
The Phases of Living Evidence Synthesis Using AI: Living Evidence Synthesis (Version 1)
  • Jan 27, 2026
  • Journal of Medical Internet Research
  • Xuping Song + 14 more

Background: Living evidence (LE) synthesis refers to the method of continuously updating systematic evidence reviews to incorporate new evidence. It has emerged to address the limitations of the traditional systematic review process, particularly the absence of or delays in publication updates. The emergence of COVID-19 accelerated progress in the field of LE synthesis, and currently, the applications of artificial intelligence (AI) in LE synthesis are expanding rapidly. However, in which phases of LE synthesis AI should be used remains an unanswered question.
Objective: This study aims to (1) document the phases of LE synthesis where AI is used and (2) investigate whether AI improves the efficiency, accuracy, or utility of LE synthesis.
Methods: We searched Web of Science, PubMed, the Cochrane Library, Epistemonikos, the Campbell Library, IEEE Xplore, medRxiv, COVID-19 Evidence Network to support Decision-making, and McMaster Health Forum. We used Covidence to facilitate the monthly screening and extraction processes to maintain the LE synthesis process. Studies that used or developed AI or semiautomated tools in the phases of LE synthesis were included.
Results: A total of 24 studies were included, including 17 on LE syntheses, with 4 involving tool development, and 7 on living meta-analyses, with 3 involving tool development. First, a total of 34 AI or semiautomated tools were involved, comprising 12 AI tools and 22 semiautomated tools. The most frequently used AI or semiautomated tools were machine learning classifiers (n=5) and the Living Interactive Evidence synthesis platform (n=3). Second, 20 AI or semiautomated tools were used for the data extraction or collection and risk of bias assessment phase, and only 1 AI tool was used for the publication update phase. Third, 3 studies demonstrated the improvement in efficiency achieved based on time, workload, and conflict rate metrics. Nine studies applied AI or semiautomated tools in LE synthesis, obtaining a mean recall rate of 96.24%, and 6 studies achieved a mean F1-score of 92.17%. Additionally, 8 studies reported precision values ranging from 0.2% to 100%.
Conclusions: AI and semiautomated tools primarily facilitate data extraction or collection and risk of bias assessment. The use of AI or semiautomated tools in LE synthesis improves efficiency, leading to high accuracy, recall, and F1-scores, while precision varies across tools.
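The recall, precision, and F1 figures quoted above are the standard screening metrics; their definitions are reproduced below for reference (general formulas, not specific to this study).

```latex
% Standard screening metrics (general definitions, not specific to this study)
\mathrm{recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{precision} = \frac{TP}{TP + FP}, \qquad
F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```

In screening workflows recall is usually prioritised, since a missed relevant record is harder to recover than an extra irrelevant one is to discard, which helps explain why reported precision can range so widely while recall remains high.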

  • Supplementary Content
  • Cited by 4
  • 10.3390/jpm15070302
Artificial Intelligence in Risk Stratification and Outcome Prediction for Transcatheter Aortic Valve Replacement: A Systematic Review and Meta-Analysis
  • Jul 11, 2025
  • Journal of Personalized Medicine
  • Shayan Shojaei + 13 more

Background/Objectives: Transcatheter aortic valve replacement (TAVR) has been introduced as an optimal treatment for patients with severe aortic stenosis, offering a minimally invasive alternative to surgical aortic valve replacement. Predicting these outcomes following TAVR is crucial. Artificial intelligence (AI) has emerged as a promising tool for improving post-TAVR outcome prediction. In this systematic review and meta-analysis, we aim to summarize the current evidence on utilizing AI in predicting post-TAVR outcomes. Methods: A comprehensive search was conducted to evaluate the studies focused on TAVR that applied AI methods for risk stratification. We assessed various ML algorithms, including random forests, neural networks, extreme gradient boosting, and support vector machines. Model performance metrics—recall, area under the curve (AUC), and accuracy—were collected with 95% confidence intervals (CIs). A random-effects meta-analysis was conducted to pool effect estimates. Results: We included 43 studies evaluating 366,269 patients (mean age 80 ± 8.25; 52.9% men) following TAVR. Meta-analyses for AI model performances demonstrated the following results: all-cause mortality (AUC = 0.78 (0.74–0.82), accuracy = 0.81 (0.69–0.89), and recall = 0.90 (0.70–0.97); permanent pacemaker implantation or new left bundle branch block (AUC = 0.75 (0.68–0.82), accuracy = 0.73 (0.59–0.84), and recall = 0.87 (0.50–0.98)); valve-related dysfunction (AUC = 0.73 (0.62–0.84), accuracy = 0.79 (0.57–0.91), and recall = 0.54 (0.26–0.80)); and major adverse cardiovascular events (AUC = 0.79 (0.67–0.92)). Subgroup analyses based on the model development approaches indicated that models incorporating baseline clinical data, imaging, and biomarker information enhanced predictive performance. Conclusions: AI-based risk prediction for TAVR complications has demonstrated promising performance. However, it is necessary to evaluate the efficiency of the aforementioned models in external validation datasets.
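As a reference for the pooling step described above, the sketch below implements a DerSimonian-Laird random-effects meta-analysis of per-study AUCs. It is a generic illustration, not the authors' pipeline; the example AUCs and standard errors are invented, and real pooling of AUCs is often performed on a transformed (e.g. logit) scale.

```python
# Minimal sketch (not the authors' pipeline): DerSimonian-Laird random-effects
# pooling of the kind used to combine per-study effect estimates such as AUCs.
# The AUC and standard-error values below are made up for illustration.
import numpy as np

def random_effects_pool(effects, variances):
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                  # fixed-effect weights
    fixed_mean = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_mean) ** 2)          # Cochran's Q
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (variances + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

aucs = [0.74, 0.81, 0.78, 0.70, 0.83]      # hypothetical per-study AUCs
ses  = [0.03, 0.04, 0.02, 0.05, 0.03]      # hypothetical standard errors
pooled, ci, tau2 = random_effects_pool(aucs, np.square(ses))
print(f"pooled AUC {pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, tau^2 {tau2:.4f}")
```

The between-study heterogeneity term tau^2 is what distinguishes the random-effects weights from simple inverse-variance pooling and is usually reported alongside the pooled estimate.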
