Safety and Trust in Artificial Intelligence with Abstract Interpretation

Abstract

Deep neural networks (DNNs) now dominate the AI landscape and have shown impressive performance in diverse application domains, including vision, natural language processing (NLP), and healthcare. However, both public and private entities have increasingly expressed significant concern about the potential of state-of-the-art AI models to cause societal and financial harm. This lack of trust arises from their black-box construction and their vulnerability to natural and adversarial noise. As a result, researchers have devoted considerable effort to developing automated methods for building safe and trustworthy DNNs. Among the various approaches, abstract interpretation has emerged as the most popular framework for efficiently analyzing realistic DNNs. However, due to fundamental differences in the computational structure of DNNs compared to traditional programs (e.g., high nonlinearity), developing efficient DNN analyzers has required tackling research challenges significantly different from those encountered for programs. In this monograph, we describe state-of-the-art approaches based on abstract interpretation for analyzing DNNs. These approaches include the design of new abstract domains, the synthesis of novel abstract transformers, abstraction refinement, and incremental analysis. We discuss how the analysis results can be used to: (i) formally check whether a trained DNN satisfies desired output and gradient-based safety properties, (ii) guide model updates during training towards satisfying safety properties, and (iii) reliably explain and interpret the black-box workings of DNNs.
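
To make the core technique concrete, the sketch below propagates interval bounds through a tiny ReLU network, the simplest instance of abstract interpretation for DNNs (the interval abstract domain). The network weights, the input region, and the checked property are illustrative assumptions for this sketch, not examples taken from the monograph.

```python
# A minimal sketch of abstract interpretation on a tiny ReLU network using the
# interval domain. Each neuron's value is over-approximated by [lower, upper]
# bounds, which are propagated layer by layer (interval bound propagation).
# The weights, input region, and property below are illustrative assumptions.
import numpy as np

def interval_affine(lb, ub, W, b):
    """Abstract transformer for an affine layer y = Wx + b on interval inputs."""
    center = (lb + ub) / 2.0
    radius = (ub - lb) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread of the input box
    return new_center - new_radius, new_center + new_radius

def interval_relu(lb, ub):
    """Abstract transformer for ReLU: clamp both bounds at zero."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Toy 2-2-2 network (weights chosen arbitrarily for illustration).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0], [-1.0, 1.0]]), np.array([0.0, 0.0])

# L_inf ball of radius 0.05 around an input point x0 (a local robustness region).
x0, eps = np.array([0.3, 0.7]), 0.05
lb, ub = x0 - eps, x0 + eps

lb, ub = interval_relu(*interval_affine(lb, ub, W1, b1))
lb, ub = interval_affine(lb, ub, W2, b2)

# The property "class 0 is always predicted" is certified if the lower bound of
# logit 0 exceeds the upper bound of logit 1 over the entire input region.
print("output bounds:", list(zip(lb, ub)))
print("certified class 0:", lb[0] > ub[1])
```

Richer abstract domains in this line of work (e.g., zonotopes or polyhedra) follow the same recipe but track correlations between neurons, trading analysis cost for precision.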

Similar Papers
  • Research Article
  • Citations: 52
  • 10.3390/bs12050127
Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort
  • Apr 27, 2022
  • Behavioral Sciences
  • Liangru Yu + 1 more

The purpose of this paper is to investigate how Artificial Intelligence (AI) decision-making transparency affects humans’ trust in AI. Previous studies have shown inconsistent conclusions about the relationship between AI transparency and humans’ trust in AI (i.e., a positive correlation, non-correlation, or an inverted U-shaped relationship). Based on the stimulus-organism-response (SOR) model, algorithmic reductionism, and social identity theory, this paper explores the impact of AI decision-making transparency on humans’ trust in AI from cognitive and emotional perspectives. A total of 235 participants with previous work experience were recruited online to complete the experimental vignette. The results showed that employees’ perceived transparency, employees’ perceived effectiveness of AI, and employees’ discomfort with AI played mediating roles in the relationship between AI decision-making transparency and employees’ trust in AI. Specifically, AI decision-making transparency (vs. non-transparency) led to higher perceived transparency, which in turn increased both effectiveness (which promoted trust) and discomfort (which inhibited trust). This parallel multiple mediating effect can partly explain the inconsistent findings in previous studies on the relationship between AI transparency and humans’ trust in AI. This research has practical significance because it puts forward suggestions for enterprises to improve employees’ trust in AI, so that employees can better collaborate with AI.

  • Research Article
  • Citations: 16
  • 10.2144/fsoa-2022-0010
Artificial intelligence in interdisciplinary life science and drug discovery research.
  • Mar 8, 2022
  • Future science OA
  • Jürgen Bajorath

  • Research Article
  • Citations: 7
  • 10.1017/pen.2022.5
Trust toward humans and trust toward artificial intelligence are not associated: Initial insights from self-report and neurostructural brain imaging.
  • Jan 1, 2023
  • Personality neuroscience
  • Christian Montag + 8 more

The present study examines whether self-reported trust in humans and self-reported trust in [(different) products with built-in] artificial intelligence (AI) are associated with one another and with brain structure. We sampled 90 healthy participants who provided self-reported trust in humans and AI and underwent brain structural magnetic resonance imaging assessment. We found that trust in humans, as measured by the trust facet of the personality inventory NEO-PI-R, and trust in AI products, as measured by items assessing attitudes toward AI and by a composite score based on items assessing trust toward products with in-built AI, were not significantly correlated. We also used a concomitant dimensional neuroimaging approach employing a data-driven source-based morphometry (SBM) analysis of gray-matter-density to investigate neurostructural associations with each trust domain. We found that trust in humans was negatively (and significantly) correlated with an SBM component encompassing striato-thalamic and prefrontal regions. We did not observe significant brain structural association with trust in AI. The present findings provide evidence that trust in humans and trust in AI seem to be dissociable constructs. While the personal disposition to trust in humans might be "hardwired" to the brain's neurostructural architecture (at least from an individual differences perspective), a corresponding significant link for the disposition to trust AI was not observed. These findings represent an initial step toward elucidating how different forms of trust might be processed on the behavioral and brain level.

  • Research Article
  • 10.1155/hbe2/4084384
Measuring Social Trust in AI: How Institutions Shape the Usage Intention of AI‐Based Technologies
  • Jan 1, 2025
  • Human Behavior and Emerging Technologies
  • Sulfikar Amir + 4 more

What drives people to have trust in using artificial intelligence (AI)? How does the institutional environment shape social trust in AI? This study addresses these questions to explain the role of institutions in allowing AI‐based technologies to be socially accepted. In this study, social trust in AI is situated in three institutional entities, namely, the government, tech companies, and the scientific community. It is posited that the level of social trust in AI is correlated with the level of trust in these institutions. The stronger the trust in the institutions, the deeper the social trust in the use of AI. To test this hypothesis, we conducted a cross‐country survey involving a total of 4037 respondents in Singapore, Taiwan, Japan, and the Republic of Korea (ROK). The results show convincing evidence of how institutions shape social trust in AI and its acceptance. Our empirical findings reveal that trust in institutions is positively associated with trust in AI technologies. Trust in institutions is based on perceived competence, benevolence, and integrity. It can directly affect people’s trust in AI technologies. Also, our empirical findings confirm that trust in AI technologies is positively associated with the intention to use these technologies. This means that a higher level of trust in AI technologies leads to a higher level of intention to use these technologies. In conclusion, institutions greatly matter in the construction and production of social trust in AI‐based technologies. Trust in AI is not a direct affair between the user and the product; it is mediated by the whole institutional setting. This has profound implications for the governance of AI in society. By taking institutional factors into account in the planning and implementation of AI regulations, we can ensure that social trust in AI is sufficiently well founded.

  • Research Article
  • Citations: 23
  • 10.3390/en14071942
Employees’ Trust in Artificial Intelligence in Companies: The Case of Energy and Chemical Industries in Poland
  • Apr 1, 2021
  • Energies
  • Justyna Łapińska + 4 more

The use of artificial intelligence (AI) in companies is advancing rapidly. Consequently, multidisciplinary research on AI in business has developed dramatically during the last decade, moving from the focus on technological objectives towards an interest in human users’ perspective. In this article, we investigate the notion of employees’ trust in AI at the workplace (in the company), following a human-centered approach that considers AI integration in business from the employees’ perspective, taking into account the elements that facilitate human trust in AI. While employees’ trust in AI at the workplace seems critical, so far, few studies have systematically investigated its determinants. Therefore, this study is an attempt to fill the existing research gap. The research objective of the article is to examine links between employees’ trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). A quantitative study conducted on a sample of 428 employees from companies of the energy and chemical industries in Poland allowed the hypotheses to be verified. The hypotheses were tested using structural equation modeling (SEM). The results indicate the existence of a positive relationship between general trust in technology and employees’ trust in AI in the company as well as between intra-organizational trust and employees’ trust in AI in the company in the surveyed firms.

  • Research Article
  • Citations: 3
  • 10.2196/71236
Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop
  • Jun 2, 2025
  • Journal of Medical Internet Research
  • Melanie Goisauf + 10 more

Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. Trustworthiness was shown to be not a feature but rather a relational process, involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated based on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the importance of developing AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI lifecycle: conception, technical development, clinical validation, and real-world deployment.

  • Conference Article
  • Citations: 77
  • 10.1145/3531146.3533202
How Explainability Contributes to Trust in AI
  • Jun 20, 2022
  • Andrea Ferrario + 1 more

We provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, providing a case for expressions, such as “explainability fosters trust in AI,” that commonly appear in the literature. This explanation relates the justification of the trustworthiness of an AI with the need to monitor it during its use. We discuss the latter by referencing an account of trust, called “trust as anti-monitoring,” that different authors have contributed to developing. We focus our analysis on the case of medical AI systems, noting that our proposal is compatible with internalist and externalist justifications of the trustworthiness of medical AI and with recent accounts of warranted contractual trust. We propose that “explainability fosters trust in AI” if and only if it fosters justified and warranted paradigmatic trust in AI, i.e., trust in the presence of the justified belief that the AI is trustworthy, which, in turn, causally contributes to reliance on the AI in the absence of monitoring. We argue that our proposed approach can intercept the complexity of the interactions between physicians and medical AI systems in clinical practice, as it can distinguish between cases where humans hold different beliefs on the trustworthiness of the medical AI and exercise varying degrees of monitoring on them. Finally, we apply our account to users’ trust in AI, where, we argue, explainability does not contribute to trust. By contrast, when considering public trust in AI as used by a human, we argue, it is possible for explainability to contribute to trust. Our account can explain the apparent paradox that in order to trust AI, we must trust AI users not to trust AI completely. Summing up, we can explain how explainability contributes to justified trust in AI without leaving a reliabilist framework, but only by redefining the trusted entity as an AI-user dyad.

  • Research Article
  • Citations: 21
  • 10.1038/s41598-021-92904-7
An empirical investigation of trust in AI in a Chinese petrochemical enterprise based on institutional theory
  • Jun 30, 2021
  • Scientific Reports
  • Jia Li + 3 more

Despite its considerable potential in the manufacturing industry, the application of artificial intelligence (AI) in the industry still faces the challenge of insufficient trust. Since AI is a black box with operations that ordinary users have difficulty understanding, users in organizations rely on institutional cues to make decisions about their trust in AI. Therefore, this study investigates trust in AI in the manufacturing industry from an institutional perspective. We identify three institutional dimensions from institutional theory and conceptualize them as management commitment (regulative dimension at the organizational level), authoritarian leadership (normative dimension at the group level), and trust in the AI promoter (cognitive dimension at the individual level). We hypothesize that all three institutional dimensions have positive effects on trust in AI. In addition, we propose hypotheses regarding the moderating effects of AI self-efficacy on these three institutional dimensions. A survey was conducted in a large petrochemical enterprise in eastern China just after the company had launched an AI-based diagnostics system for fault detection and isolation in process equipment service. The results indicate that management commitment, authoritarian leadership, and trust in the AI promoter are all positively related to trust in AI. Moreover, the effect of management commitment and trust in the AI promoter are strengthened when users have high AI self-efficacy. The findings of this study provide suggestions for academics and managers with respect to promoting users’ trust in AI in the manufacturing industry.

  • Research Article
  • Citations: 4
  • 10.1037/xge0001696
When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot.
  • Feb 1, 2025
  • Journal of experimental psychology. General
  • Fanny Lalot + 1 more

The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to those of interpersonal trust (i.e., in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization on trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust, and so are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation.

  • Research Article
  • Citations: 68
  • 10.1016/j.ijhcs.2022.102792
The effects of domain knowledge on trust in explainable AI and task performance: A case of peer-to-peer lending
  • Feb 12, 2022
  • International Journal of Human-Computer Studies
  • Murat Dikmen + 1 more

Increasingly, artificial intelligence (AI) is being used to assist complex decision-making such as financial investing. However, there are concerns regarding the black-box nature of AI algorithms. The field of explainable AI (XAI) has emerged to address these concerns. XAI techniques can reveal how an AI decision is formed and can be used to understand and appropriately trust an AI system. However, XAI techniques still may not be human-centred and may not support human decision-making adequately. In this work, we explored how domain knowledge, identified by expert decision makers, can be used to achieve a more human-centred approach to AI. We measured the effect of domain knowledge on trust in AI, reliance on AI, and task performance in an AI-assisted complex decision-making environment. In a peer-to-peer lending simulator, non-expert participants made financial investments using an AI assistant. The presence or absence of domain knowledge was manipulated. The results showed that participants who had access to domain knowledge relied less on the AI assistant when it was incorrect and indicated less trust in the AI assistant. However, overall investing performance was not affected. These results suggest that providing domain knowledge can influence how non-expert users use AI and could be a powerful tool to help these users develop appropriate levels of trust and reliance.

  • Research Article
  • Citations: 5
  • 10.1145/3686963
"Something Fast and Cheap" or "A Core Element of Building Trust"? - AI Auditing Professionals' Perspectives on Trust in AI
  • Nov 7, 2024
  • Proceedings of the ACM on Human-Computer Interaction
  • Tina B Lassiter + 1 more

Artificial Intelligence (AI) auditing is a relatively new area of work. Currently, there is a lack of uniform standards and regulation. As a result, the AI auditing ecosystem is very diverse, and AI auditing professionals use a variety of different auditing methods. So far, little is known about how AI auditors approach the concept of trust in AI through AI audits, in particular regarding the trust of users. This paper reports findings from interviews with 19 AI auditing stakeholders to understand how AI auditing professionals seek to create calibrated trust in AI tools and AI audits. Themes identified included the AI auditing ecosystem, participants' experiences with AI auditing, and trust in AI audits and AI. The paper extends existing research on trust in AI and AI trustworthiness by adding the perspectives of key stakeholders on users' trust in AI audits, an essential and currently underexplored part of trust-in-AI research. This paper shows how information asymmetry with respect to AI audits can decrease the value of audits for users and, consequently, their trust in AI systems. Study participants suggest key elements for rebuilding trust and offer recommendations for the AI auditing industry, such as monitoring of auditors and effective communication about AI audits.

  • Research Article
  • 10.1111/nicc.70157
When Machines Decide: Exploring How Trust in AI Shapes the Relationship Between Clinical Decision Support Systems and Nurses' Decision Regret: A Cross-Sectional Study.
  • Aug 26, 2025
  • Nursing in critical care
  • Nadia Hassan Ali Awad + 5 more

Artificial intelligence (AI)-based Clinical Decision Support Systems (AI-CDSS) are increasingly implemented in intensive care settings to support nurses in complex, time-sensitive decisions, aiming to improve accuracy, efficiency and patient outcomes. However, their use raises concerns about emotional consequences, particularly decision regret, which may arise when clinical judgement or outcomes are unfavourable. Trust in AI may play a key role in shaping nurses' responses to AI-guided decisions. This study aimed to examine the relationship between nurses' reliance on AI-CDSS, decision regret and trust in AI, with a focus on the moderating role of trust in the association between AI-CDSS reliance and decision regret. A cross-sectional correlational design was used. A convenience sample of 250 intensive care unit (ICU) nurses completed validated instruments: the Healthcare Systems Usability Scale (HSUS) for AI-CDSS reliance, the Decision Regret Scale (DRS) and the Trust in AI Scale. Descriptive statistics, Pearson's correlations, multiple linear regression and moderation analysis were conducted. A total of 250 ICU nurses participated in the study out of 400 approached, yielding a response rate of 62.5%. Nurses reported moderate levels of AI-CDSS reliance (M = 78.6, SD = 12.4), decision regret (M = 38.5, SD = 14.8) and trust in AI (M = 13.9, SD = 3.2). AI-CDSS reliance was negatively correlated with decision regret (r = -0.42, p < 0.01) and positively with trust in AI (r = 0.51, p < 0.01). Regression analysis showed that both AI-CDSS reliance (β = -0.36) and trust in AI (β = -0.24) significantly predicted reduced regret (R2 = 0.27, p < 0.001). Trust moderated the relationship, strengthening the negative association between reliance and regret. Greater reliance on AI-CDSS is associated with lower decision regret among ICU nurses, especially when trust in AI is high. Trust enhances emotional acceptance and supports effective AI integration. Building trust in AI-CDSS among nurses is essential for minimising emotional burden and optimising decision-making in critical care.
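
For readers unfamiliar with moderation analysis, the sketch below shows the standard way such a moderating effect is tested: an ordinary least-squares regression with a mean-centered reliance × trust interaction term. The data are simulated to mimic the reported means and standard deviations; the variable names and effect sizes are assumptions for illustration, not the study's data.

```python
# A minimal sketch of moderation analysis via an interaction term in OLS.
# Data are simulated to resemble the reported scale means/SDs; coefficients
# are illustrative assumptions, not the study's actual estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 250
reliance = rng.normal(78.6, 12.4, n)   # AI-CDSS reliance (HSUS-like scale)
trust = rng.normal(13.9, 3.2, n)       # Trust in AI Scale (assumed range)

# Simulated regret: reliance lowers regret, and more so when trust is high
# (a negative interaction), mirroring the moderation the paper reports.
regret = (90 - 0.3 * reliance - 0.8 * trust
          - 0.02 * (reliance - reliance.mean()) * (trust - trust.mean())
          + rng.normal(0, 10, n))

# Mean-center predictors before forming the interaction (standard practice).
rc, tc = reliance - reliance.mean(), trust - trust.mean()
X = sm.add_constant(np.column_stack([rc, tc, rc * tc]))
model = sm.OLS(regret, X).fit()
print(model.summary(xname=["const", "reliance", "trust", "reliance_x_trust"]))
# A significant negative interaction coefficient indicates that higher trust
# strengthens the negative reliance-regret association.
```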

  • Conference Article
  • Citations: 1
  • 10.1117/12.2305226
Understanding adversarial attack and defense towards deep compressed neural networks
  • May 3, 2018
  • Qi Liu + 2 more

Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovation and computing hardware advancement. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are imperceptible to human eyes, namely “adversarial examples”, raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. Then we survey existing mitigation approaches such as gradient distillation, which was originally tailored to software-based DNN systems. Inspired by gradient distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracies across small and large DNN models on image classification benchmarks such as MNIST and CIFAR10.
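
As a minimal illustration of how the "carefully crafted input perturbations" mentioned above are produced, the sketch below implements the fast gradient sign method (FGSM), a standard attack from this literature, against a toy logistic classifier in plain NumPy. The model, data, and perturbation budget are assumptions for illustration; the paper's own experiments target compressed DNNs on benchmarks like MNIST and CIFAR10.

```python
# A minimal NumPy sketch of the fast gradient sign method (FGSM), one standard
# way to craft adversarial examples. The linear model and data here are
# illustrative assumptions, not taken from the paper itself.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0      # toy logistic classifier
x, y = rng.normal(size=16), 1.0      # clean input with true label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss w.r.t. the input of a logistic model:
# dL/dx = (p - y) * w, where p is the predicted probability.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1                            # L_inf perturbation budget
x_adv = x + eps * np.sign(grad_x)    # FGSM: one signed-gradient step that
                                     # maximally increases the loss per unit budget

print("clean prediction:", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

Defenses like the gradient distillation and gradient silence methods discussed above aim to suppress exactly this input-gradient signal that the attack exploits.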

  • Research Article
  • Citations: 9
  • 10.2196/56306
Finding Consensus on Trust in AI in Health Care: Recommendations From a Panel of International Experts.
  • Feb 19, 2025
  • Journal of medical Internet research
  • Georg Starke + 17 more

The integration of artificial intelligence (AI) into health care has become a crucial element in the digital transformation of health systems worldwide. Despite the potential benefits across diverse medical domains, a significant barrier to the successful adoption of AI systems in health care applications remains the prevailing low user trust in these technologies. Crucially, this challenge is exacerbated by the lack of consensus among experts from different disciplines on the definition of trust in AI within the health care sector. We aimed to provide the first consensus-based analysis of trust in AI in health care based on an interdisciplinary panel of experts from different domains. Our findings can be used to address the problem of defining trust in AI in health care applications, fostering the discussion of concrete real-world health care scenarios in which humans interact with AI systems explicitly. We used a combination of framework analysis and a 3-step consensus process involving 18 international experts from the fields of computer science, medicine, philosophy of technology, ethics, and social sciences. Our process consisted of a synchronous phase during an expert workshop where we discussed the notion of trust in AI in health care applications, defined an initial framework of important elements of trust to guide our analysis, and agreed on 5 case studies. This was followed by a 2-step iterative, asynchronous process in which the authors further developed, discussed, and refined notions of trust with respect to these specific cases. Our consensus process identified key contextual factors of trust, namely, an AI system's environment, the actors involved, and framing factors, and analyzed causes and effects of trust in AI in health care. Our findings revealed that certain factors were applicable across all discussed cases yet also pointed to the need for a fine-grained, multidisciplinary analysis bridging human-centered and technology-centered approaches. While regulatory boundaries and technological design features are critical to successful AI implementation in health care, ultimately, communication and positive lived experiences with AI systems will be at the forefront of user trust. Our expert consensus allowed us to formulate concrete recommendations for future research on trust in AI in health care applications. This paper advocates for a more refined and nuanced conceptual understanding of trust in the context of AI in health care. By synthesizing insights into commonalities and differences among specific case studies, this paper establishes a foundational basis for future debates and discussions on trusting AI in health care.
