Behavioral Dynamics of AI Trust and Health Care Delays Among Adults: Integrated Cross-Sectional Survey and Agent-Based Modeling Study.

Abstract

While artificial intelligence (AI) holds significant promise for health care, excessive trust in these tools may unintentionally delay patients from seeking professional care, particularly among patients with chronic illnesses. However, the behavioral dynamics underlying this phenomenon remain poorly understood. This study aims to quantify the influence of AI trust on health care delays through integrated survey-based mediation analysis and real-world research, and to simulate intervention efficacy using agent-based modeling (ABM). A cross-sectional online survey was conducted in China from December 2024 to May 2025. Participants were recruited via convenience sampling on social media (WeChat and QQ) and hospital portals. The survey included a 21-item questionnaire measuring AI trust (5-point Likert scale), AI usage frequency (6-point scale), chronic disease status (physician-diagnosed, binary), and self-reported health care delay (binary). Responses with completion time <90 seconds, logical inconsistencies, missing values, or duplicates were excluded. Analyses included descriptive statistics, multivariable logistic regression (α=.05), mediation analysis with nonparametric bootstrapping (500 iterations), and moderation testing. Subsequently, an ABM simulated 2460 agents within a small-world network over 14 days to model behavioral feedback and test 3 interventions: broadcast messaging, behavioral reward, and network rewiring. The final sample included 2460 adults (mean age 34.46, SD 11.62 years; n=1345, 54.7% female). Higher AI trust was associated with increased odds of delays (odds ratio [OR] 1.09, 95% CI 1.00-1.18; P=.04), with usage frequency partially mediating this relationship (indirect OR 1.24, 95% CI 1.20-1.29; P<.001). Chronic disease status amplified the delay odds (OR 1.42, 95% CI 1.09-1.86; P=.01). 
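The mediation step reported above (usage frequency partially mediating the trust-delay association, tested with a 500-iteration nonparametric bootstrap) can be sketched roughly as follows. Everything here is a synthetic stand-in: the data are simulated, the coefficients are invented, and both paths are fitted by least squares as a linear-probability simplification of the logistic models the study actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the survey variables (hypothetical, not the study data):
# trust on a 5-point scale, usage on a 6-point scale, delay binary.
n = 2460
trust = rng.integers(1, 6, n).astype(float)
usage = np.clip(np.round(0.6 * trust + rng.normal(0, 1, n)), 1, 6)
p = 1 / (1 + np.exp(-(-3.0 + 0.1 * trust + 0.5 * usage)))
delay = rng.binomial(1, p).astype(float)

def indirect_effect(t, u, d):
    """Product-of-coefficients a*b, both paths fitted by least squares
    (a linear-probability simplification of the logistic outcome model)."""
    Xa = np.column_stack([np.ones_like(t), t])
    a = np.linalg.lstsq(Xa, u, rcond=None)[0][1]   # a-path: usage ~ trust
    Xb = np.column_stack([np.ones_like(t), t, u])
    b = np.linalg.lstsq(Xb, d, rcond=None)[0][2]   # b-path: delay ~ usage, given trust
    return a * b

# Nonparametric bootstrap: resample respondents with replacement, 500 iterations.
boot = np.empty(500)
for i in range(500):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(trust[idx], usage[idx], delay[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {indirect_effect(trust, usage, delay):.4f}, "
      f"95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```

The percentile interval over the 500 resampled estimates mirrors the bootstrap CI the abstract reports for the indirect effect; the study itself would fit logistic models and express the indirect effect on the odds-ratio scale.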
The ABM demonstrated a bidirectional trust erosion loop, with population delay rates declining from 10.6% to 9.5% as mean AI trust decreased from 1.91 to 1.52. Simulation of the interventions found broadcast messaging most effective in reducing delay odds (OR 0.94, 95% CI 0.94-0.95; P<.001), whereas network rewiring increased odds (OR 1.04, 95% CI 1.04-1.05; P<.001), suggesting a "trust polarization" effect. This study reveals a nuanced relationship between AI trust and delayed care-seeking. While trust in AI enhances engagement, it can also lead to delayed care, particularly among patients with chronic conditions and frequent AI users. Integrating survey data with ABM highlights how AI trust and delay behaviors can reinforce one another over time. Our findings indicate that AI health tools should prioritize calibrated decision support rather than full automation to balance autonomy, risk, and decision quality in digital health. Unlike previous studies, which focus solely on static associations, this research emphasizes the dynamic interaction between AI trust and delay behaviors.
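The simulation design described above (2460 agents on a small-world network over 14 days, with a trust-delay feedback loop) can be sketched in the same spirit. All parameters below are hypothetical placeholders, not the paper's calibrated values; only the population size, the 14-day horizon, and the direction of the feedback follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def watts_strogatz(n, k, beta):
    """Adjacency sets for a small-world ring lattice (k neighbors per side),
    each edge rewired with probability beta."""
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if rng.random() < beta:                # rewire this edge
                t = int(rng.integers(0, n))
                while t == i or t in nbrs[i]:
                    t = int(rng.integers(0, n))
            nbrs[i].add(t)
            nbrs[t].add(i)
    return nbrs

# Hypothetical parameters, not the paper's calibrated values.
n, days = 2460, 14
net = watts_strogatz(n, k=3, beta=0.1)
trust = np.clip(rng.normal(1.9, 0.6, n), 1.0, 5.0)  # near the reported mean of 1.91

delay_rate = []
for _ in range(days):
    p_delay = 0.02 + 0.045 * trust                 # higher AI trust -> more delay
    delayed = rng.random(n) < p_delay
    # Feedback: agents drift toward their neighbors' trust, and a delay
    # episode erodes the agent's own trust (the bidirectional erosion loop).
    social = np.array([np.mean([trust[j] for j in net[i]]) for i in range(n)])
    trust = np.clip(trust + 0.1 * (social - trust) - 0.03 * delayed, 1.0, 5.0)
    delay_rate.append(float(delayed.mean()))

print(f"delay rate day 1: {delay_rate[0]:.3f}, day {days}: {delay_rate[-1]:.3f}; "
      f"final mean trust {trust.mean():.2f}")
```

The three interventions could then be layered onto this loop by perturbing `trust` (broadcast messaging), `p_delay` (behavioral reward), or `net` (network rewiring) and comparing delay rates across runs.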

Similar Papers
  • Research Article
  • 10.3390/healthcare14040506
Determinants of Trust in Artificial Intelligence (AI) for Health-Related Decision-Making Among Adults in Saudi Arabia: A Cross-Sectional Study.
  • Feb 16, 2026
  • Healthcare (Basel, Switzerland)
  • Bandar S Alharbi + 2 more

Artificial intelligence (AI) is increasingly integrated into healthcare decision-making. Public trust in AI remains a critical determinant of its acceptance and effective use. Evidence on the factors shaping trust in AI within Middle Eastern contexts, particularly Saudi Arabia, remains limited. Therefore, we aimed to identify the determinants of trust in AI for health-related decision-making and to examine a theory-informed mediation pathway in which patient satisfaction mediates the association between patient-doctor relationships and trust in AI. We conducted a cross-sectional, facility-based survey of adults in Saudi Arabia, using an electronic questionnaire distributed in four primary healthcare centers. We performed multiple linear regression to assess the association of trust in AI for health-related decision-making with patient satisfaction, patient-doctor relationships, sociodemographic characteristics, and healthcare-related factors. A mediation analysis was also employed to evaluate the indirect and direct association linking patient-doctor relationships, patient satisfaction, and trust in AI. Our findings showed that patient satisfaction was positively associated with trust in AI (β = 0.54, 95% CI: 0.18-0.90), while patient-doctor relationships showed an inverse association (β = -0.34, 95% CI: -0.48 to -0.20), possibly reflecting a greater reliance on physicians' clinical judgment and a reduced perceived need for AI-supported decision-making. Trust in AI varied across age groups, with a lower trust observed in older age categories compared with younger adults. No strong associations were observed for sex, education, body mass index, or healthcare-related factors. Patient-doctor relationship quality was indirectly associated with trust in AI via patient satisfaction (ACME = 0.138, 95% CI: 0.043-0.246), alongside a direct association with trust in AI (ADE = -0.313, 95% CI: -0.456 to -0.160). 
This means that patient-doctor relationships influenced trust in AI both directly and indirectly through patient satisfaction, suggesting that, while interpersonal care may reduce the reliance on AI (direct effect), enhancing patient satisfaction can partially offset this effect and promote trust in AI (indirect effect). These findings highlight that fostering patient-centered care and satisfaction may be crucial for promoting public trust in AI, which has important implications for AI governance, ethical deployment, and the design of AI-supported healthcare systems.

  • Research Article
  • Cited 60 times
  • 10.2196/53207
How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
  • Oct 30, 2024
  • JMIR AI
  • Rikard Rosenbacke + 3 more

Artificial intelligence (AI) has significant potential in clinical practice. However, its "black box" nature can lead clinicians to question its value. The challenge is to create sufficient trust for clinicians to feel comfortable using AI, but not so much that they defer to it even when it produces results that conflict with their clinical judgment in ways that lead to incorrect decisions. Explainable AI (XAI) aims to address this by providing explanations of how AI algorithms reach their conclusions. However, it remains unclear whether such explanations foster an appropriate degree of trust to ensure the optimal use of AI in clinical practice. This study aims to systematically review and synthesize empirical evidence on the impact of XAI on clinicians' trust in AI-driven clinical decision-making. A systematic review was conducted in accordance with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, searching PubMed and Web of Science databases. Studies were included if they empirically measured the impact of XAI on clinicians' trust using cognition- or affect-based measures. Out of 778 articles screened, 10 met the inclusion criteria. We assessed the risk of bias using standard tools appropriate to the methodology of each paper. The risk of bias in all papers was moderate or moderate to high. All included studies operationalized trust primarily through cognitive-based definitions, with 2 also incorporating affect-based measures. Out of these, 5 studies reported that XAI increased clinicians' trust compared with standard AI, particularly when the explanations were clear, concise, and relevant to clinical practice. In addition, 3 studies found no significant effect of XAI on trust, and the presence of explanations does not automatically improve trust. Notably, 2 studies highlighted that XAI could either enhance or diminish trust, depending on the complexity and coherence of the provided explanations. 
The majority of studies suggest that XAI has the potential to enhance clinicians' trust in recommendations generated by AI. However, complex or contradictory explanations can undermine this trust. More critically, trust in AI is not inherently beneficial, as AI recommendations are not infallible. These findings underscore the nuanced role of explanation quality and suggest that trust can be modulated through the careful design of XAI systems. Excessive trust in incorrect advice generated by AI can adversely impact clinical accuracy, just as can happen when correct advice is distrusted. Future research should focus on refining both cognitive and affect-based measures of trust and on developing strategies to achieve an appropriate balance in terms of trust, preventing both blind trust and undue skepticism. Optimizing trust in AI systems is essential for their effective integration into clinical practice.

  • Research Article
  • Cited 59 times
  • 10.3390/bs12050127
Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort
  • Apr 27, 2022
  • Behavioral Sciences
  • Liangru Yu + 1 more

The purpose of this paper is to investigate how Artificial Intelligence (AI) decision-making transparency affects humans’ trust in AI. Previous studies have shown inconsistent conclusions about the relationship between AI transparency and humans’ trust in AI (i.e., a positive correlation, non-correlation, or an inverted U-shaped relationship). Based on the stimulus-organism-response (SOR) model, algorithmic reductionism, and social identity theory, this paper explores the impact of AI decision-making transparency on humans’ trust in AI from cognitive and emotional perspectives. A total of 235 participants with previous work experience were recruited online to complete the experimental vignette. The results showed that employees’ perceived transparency, employees’ perceived effectiveness of AI, and employees’ discomfort with AI played mediating roles in the relationship between AI decision-making transparency and employees’ trust in AI. Specifically, AI decision-making transparency (vs. non-transparency) led to higher perceived transparency, which in turn increased both effectiveness (which promoted trust) and discomfort (which inhibited trust). This parallel multiple mediating effect can partly explain the inconsistent findings in previous studies on the relationship between AI transparency and humans’ trust in AI. This research has practical significance because it puts forward suggestions for enterprises to improve employees’ trust in AI, so that employees can better collaborate with AI.

  • Research Article
  • Cited 1 time
  • 10.1155/hbe2/4084384
Measuring Social Trust in AI: How Institutions Shape the Usage Intention of AI‐Based Technologies
  • Jan 1, 2025
  • Human Behavior and Emerging Technologies
  • Sulfikar Amir + 4 more

What drives people to have trust in using artificial intelligence (AI)? How does the institutional environment shape social trust in AI? This study addresses these questions to explain the role of institutions in allowing AI‐based technologies to be socially accepted. In this study, social trust in AI is situated in three institutional entities, namely, the government, tech companies, and the scientific community. It is posited that the level of social trust in AI is correlated to the level of trust in these institutions. The stronger the trust in the institutions, the deeper the social trust in the use of AI. To test this hypothesis, we conducted a cross‐country survey involving a total of 4037 respondents in Singapore, Taiwan, Japan, and the Republic of Korea (ROK). The results show convincing evidence of how institutions shape social trust in AI and its acceptance. Our empirical findings reveal that trust in institutions is positively associated with trust in AI technologies. Trust in institutions is based on perceived competence, benevolence, and integrity. It can directly affect people’s trust in AI technologies. Also, our empirical findings confirm that trust in AI technologies is positively associated with the intention to use these technologies. This means that a higher level of trust in AI technologies leads to a higher level of intention to use these technologies. In conclusion, institutions greatly matter in the construction and production of social trust in AI‐based technologies. Trust in AI is not a direct affair between the user and the product, but it is mediated by the whole institutional setting. This has profound implications on the governance of AI in society. By taking into account institutional factors in the planning and implementation of AI regulations, we can be assured that social trust in AI is sufficiently founded.

  • Research Article
  • Cited 8 times
  • 10.1017/pen.2022.5
Trust toward humans and trust toward artificial intelligence are not associated: Initial insights from self-report and neurostructural brain imaging.
  • Jan 1, 2023
  • Personality neuroscience
  • Christian Montag + 8 more

The present study examines whether self-reported trust in humans and self-reported trust in [(different) products with built-in] artificial intelligence (AI) are associated with one another and with brain structure. We sampled 90 healthy participants who provided self-reported trust in humans and AI and underwent brain structural magnetic resonance imaging assessment. We found that trust in humans, as measured by the trust facet of the personality inventory NEO-PI-R, and trust in AI products, as measured by items assessing attitudes toward AI and by a composite score based on items assessing trust toward products with in-built AI, were not significantly correlated. We also used a concomitant dimensional neuroimaging approach employing a data-driven source-based morphometry (SBM) analysis of gray-matter-density to investigate neurostructural associations with each trust domain. We found that trust in humans was negatively (and significantly) correlated with an SBM component encompassing striato-thalamic and prefrontal regions. We did not observe significant brain structural association with trust in AI. The present findings provide evidence that trust in humans and trust in AI seem to be dissociable constructs. While the personal disposition to trust in humans might be "hardwired" to the brain's neurostructural architecture (at least from an individual differences perspective), a corresponding significant link for the disposition to trust AI was not observed. These findings represent an initial step toward elucidating how different forms of trust might be processed on the behavioral and brain level.

  • Research Article
  • Cited 7 times
  • 10.2196/71236
Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop
  • Jun 2, 2025
  • Journal of Medical Internet Research
  • Melanie Goisauf + 10 more

Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. 
Trustworthiness was shown to be not a feature but rather a relational process, involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated based on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the development of AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI lifecycle—from conception, technical development, clinical validation, and real-world deployment.

  • Conference Article
  • Cited 85 times
  • 10.1145/3531146.3533202
How Explainability Contributes to Trust in AI
  • Jun 20, 2022
  • Andrea Ferrario + 1 more

We provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, providing a case for expressions, such as “explainability fosters trust in AI,” that commonly appear in the literature. This explanation relates the justification of the trustworthiness of an AI with the need to monitor it during its use. We discuss the latter by referencing an account of trust, called “trust as anti-monitoring,” that different authors contributed developing. We focus our analysis on the case of medical AI systems, noting that our proposal is compatible with internalist and externalist justifications of trustworthiness of medical AI and recent accounts of warranted contractual trust. We propose that “explainability fosters trust in AI” if and only if it fosters justified and warranted paradigmatic trust in AI, i.e., trust in the presence of the justified belief that the AI is trustworthy, which, in turn, causally contributes to rely on the AI in the absence of monitoring. We argue that our proposed approach can intercept the complexity of the interactions between physicians and medical AI systems in clinical practice, as it can distinguish between cases where humans hold different beliefs on the trustworthiness of the medical AI and exercise varying degrees of monitoring on them. Finally, we apply our account to user’s trust in AI, where, we argue, explainability does not contribute to trust. By contrast, when considering public trust in AI as used by a human, we argue, it is possible for explainability to contribute to trust. Our account can explain the apparent paradox that in order to trust AI, we must trust AI users not to trust AI completely. Summing up, we can explain how explainability contributes to justified trust in AI, without leaving a reliabilist framework, but only by redefining the trusted entity as an AI-user dyad.

  • Research Article
  • Cited 25 times
  • 10.3390/en14071942
Employees’ Trust in Artificial Intelligence in Companies: The Case of Energy and Chemical Industries in Poland
  • Apr 1, 2021
  • Energies
  • Justyna Łapińska + 4 more

The use of artificial intelligence (AI) in companies is advancing rapidly. Consequently, multidisciplinary research on AI in business has developed dramatically during the last decade, moving from the focus on technological objectives towards an interest in human users’ perspective. In this article, we investigate the notion of employees’ trust in AI at the workplace (in the company), following a human-centered approach that considers AI integration in business from the employees’ perspective, taking into account the elements that facilitate human trust in AI. While employees’ trust in AI at the workplace seems critical, so far, few studies have systematically investigated its determinants. Therefore, this study is an attempt to fill the existing research gap. The research objective of the article is to examine links between employees’ trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). A quantitative study conducted on a sample of 428 employees from companies of the energy and chemical industries in Poland allowed the hypotheses to be verified. The hypotheses were tested using structural equation modeling (SEM). The results indicate the existence of a positive relationship between general trust in technology and employees’ trust in AI in the company as well as between intra-organizational trust and employees’ trust in AI in the company in the surveyed firms.

  • Research Article
  • Cited 22 times
  • 10.1038/s41598-021-92904-7
An empirical investigation of trust in AI in a Chinese petrochemical enterprise based on institutional theory
  • Jun 30, 2021
  • Scientific Reports
  • Jia Li + 3 more

Despite its considerable potential in the manufacturing industry, the application of artificial intelligence (AI) in the industry still faces the challenge of insufficient trust. Since AI is a black box with operations that ordinary users have difficulty understanding, users in organizations rely on institutional cues to make decisions about their trust in AI. Therefore, this study investigates trust in AI in the manufacturing industry from an institutional perspective. We identify three institutional dimensions from institutional theory and conceptualize them as management commitment (regulative dimension at the organizational level), authoritarian leadership (normative dimension at the group level), and trust in the AI promoter (cognitive dimension at the individual level). We hypothesize that all three institutional dimensions have positive effects on trust in AI. In addition, we propose hypotheses regarding the moderating effects of AI self-efficacy on these three institutional dimensions. A survey was conducted in a large petrochemical enterprise in eastern China just after the company had launched an AI-based diagnostics system for fault detection and isolation in process equipment service. The results indicate that management commitment, authoritarian leadership, and trust in the AI promoter are all positively related to trust in AI. Moreover, the effect of management commitment and trust in the AI promoter are strengthened when users have high AI self-efficacy. The findings of this study provide suggestions for academics and managers with respect to promoting users’ trust in AI in the manufacturing industry.

  • Research Article
  • Cited 11 times
  • 10.1037/xge0001696
When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot.
  • Feb 1, 2025
  • Journal of experimental psychology. General
  • Fanny Lalot + 1 more

The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to that of interpersonal trust (i.e., in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization on trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust and so are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).

  • Research Article
  • Cited 69 times
  • 10.1016/j.ijhcs.2022.102792
The effects of domain knowledge on trust in explainable AI and task performance: A case of peer-to-peer lending
  • Feb 12, 2022
  • International Journal of Human-Computer Studies
  • Murat Dikmen + 1 more

Increasingly, artificial intelligence (AI) is being used to assist complex decision-making such as financial investing. However, there are concerns regarding the black-box nature of AI algorithms. The field of explainable AI (XAI) has emerged to address these concerns. XAI techniques can reveal how an AI decision is formed and can be used to understand and appropriately trust an AI system. However, XAI techniques still may not be human-centred and may not support human decision-making adequately. In this work, we explored how domain knowledge, identified by expert decision makers, can be used to achieve a more human-centred approach to AI. We measured the effect of domain knowledge on trust in AI, reliance on AI, and task performance in an AI-assisted complex decision-making environment. In a peer-to-peer lending simulator, non-expert participants made financial investments using an AI assistant. The presence or absence of domain knowledge was manipulated. The results showed that participants who had access to domain knowledge relied less on the AI assistant when the AI assistant was incorrect and indicated less trust in AI assistant. However, overall investing performance was not affected. These results suggest that providing domain knowledge can influence how non-expert users use AI and could be a powerful tool to help these users develop appropriate levels of trust and reliance.

  • Research Article
  • Cited 5 times
  • 10.1145/3686963
"Something Fast and Cheap" or "A Core Element of Building Trust"? - AI Auditing Professionals' Perspectives on Trust in AI
  • Nov 7, 2024
  • Proceedings of the ACM on Human-Computer Interaction
  • Tina B Lassiter + 1 more

Artificial Intelligence (AI) auditing is a relatively new area of work. Currently, there is a lack of uniform standards and regulation. As a result, the AI auditing ecosystem is very diverse, and AI auditing professionals use a variety of different auditing methods. So far, little is known about how AI auditors approach the concept of trust in AI through AI audits, in particular regarding the trust of users. This paper reports findings from interviews with 19 AI auditing stakeholders to understand how AI auditing professionals seek to create calibrated trust in AI tools and AI audits. Themes identified included the AI auditing ecosystem, participants' experiences with AI auditing, and trust in AI audits and AI. The paper adds to the existing research on trust in AI and trustworthiness in AI by adding perspectives of key stakeholders regarding trust in AI Audits by users as an essential and currently less explored part of the trust in AI research. This paper shows how information asymmetry in respect to AI audits can decrease the value of audits for users and consequently their trust in AI systems. Study participants suggest key elements for rebuilding trust and suggest recommendations for the AI auditing industry, such as monitoring of auditors and effective communication about AI audits.

  • Research Article
  • Cited 2 times
  • 10.1111/nicc.70157
When Machines Decide: Exploring How Trust in AI Shapes the Relationship Between Clinical Decision Support Systems and Nurses' Decision Regret: A Cross-Sectional Study.
  • Aug 26, 2025
  • Nursing in critical care
  • Nadia Hassan Ali Awad + 5 more

Artificial intelligence (AI)-based Clinical Decision Support Systems (AI-CDSS) are increasingly implemented in intensive care settings to support nurses in complex, time-sensitive decisions, aiming to improve accuracy, efficiency and patient outcomes. However, their use raises concerns about emotional consequences, particularly decision regret, which may arise when clinical judgement or outcomes are unfavourable. Trust in AI may play a key role in shaping nurses' responses to AI-guided decisions. To examine the relationship between nurses' reliance on AI-CDSS, decision regret and trust in AI, with a focus on the moderating role of trust in the association between AI-CDSS reliance and decision regret. A cross-sectional correlational design was used. A convenience sample of 250 intensive care unit (ICU) nurses completed validated instruments: the Healthcare Systems Usability Scale (HSUS) for AI-CDSS reliance, the Decision Regret Scale (DRS) and the Trust in AI Scale. Descriptive statistics, Pearson's correlations, multiple linear regression and moderation analysis were conducted. A total of 250 ICU nurses participated in the study out of 400 approached, yielding a response rate of 62.5%. Nurses reported moderate levels of AI-CDSS reliance (M = 78.6, SD = 12.4), decision regret (M = 38.5, SD = 14.8) and trust in AI (M = 13.9, SD = 3.2). AI-CDSS reliance was negatively correlated with decision regret (r = -0.42, p < 0.01) and positively with trust in AI (r = 0.51, p < 0.01). Regression analysis showed that both AI-CDSS reliance (β = -0.36) and trust in AI (β = -0.24) significantly predicted reduced regret (R2 = 0.27, p < 0.001). Trust moderated the relationship, strengthening the negative association between reliance and regret. Greater reliance on AI-CDSS is associated with lower decision regret among ICU nurses, especially when trust in AI is high. Trust enhances emotional acceptance and supports effective AI integration. 
Building trust in AI-CDSS among nurses is essential for minimising emotional burden and optimising decision-making in critical care.
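The moderation analysis this abstract describes (trust strengthening the negative reliance-regret association) amounts to an interaction term in a regression. A minimal sketch, with synthetic data scaled to the reported instrument means and SDs and an invented interaction coefficient, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins scaled like the reported instruments:
# reliance ~ HSUS (M=78.6, SD=12.4), trust ~ Trust in AI Scale (M=13.9, SD=3.2).
n = 250
reliance = rng.normal(78.6, 12.4, n)
trust = rng.normal(13.9, 3.2, n)
rc, tc = reliance - reliance.mean(), trust - trust.mean()
# Built-in negative interaction: higher trust strengthens the negative
# reliance -> regret association, mirroring the reported moderation.
regret = 38.5 - 0.35 * rc - 0.8 * tc - 0.10 * rc * tc + rng.normal(0, 10, n)

# Moderation test: OLS with mean-centred predictors and their product term.
X = np.column_stack([np.ones(n), rc, tc, rc * tc])
beta, *_ = np.linalg.lstsq(X, regret, rcond=None)
print(f"reliance {beta[1]:.3f}, trust {beta[2]:.3f}, interaction {beta[3]:.3f}")
```

A negative interaction coefficient here means the reliance-regret slope becomes more negative as trust rises, which is the moderation pattern the study reports.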

  • Research Article
  • Cited 135 times
  • 10.1016/j.techfore.2022.121763
To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts
  • May 28, 2022
  • Technological Forecasting and Social Change
  • Nessrine Omrani + 4 more
