Trust Formation, Error Impact, and Repair in Human-AI Financial Advisory: A Dynamic Behavioral Analysis
Understanding how trust in artificial intelligence evolves is crucial for predicting human behavior in AI-enabled environments. While existing research focuses on initial acceptance factors, the temporal dynamics of AI trust remain poorly understood. This study develops a temporal trust dynamics framework proposing three phases: formation through accuracy cues, single-error shock, and post-error repair through explanations. Two experiments in financial advisory contexts tested this framework. Study 1 (N = 189) compared human and algorithmic advisors, while Study 2 (N = 294) traced trust trajectories across three rounds, manipulating accuracy and post-error explanations. Results demonstrate three temporal patterns. First, participants initially favored algorithmic advisors, supporting "algorithmic appreciation." Second, a single advisory error produced a substantial trust decline (η² = 0.141), demonstrating acute sensitivity to performance failures. Third, post-error explanations significantly facilitated trust recovery, with evidence of enhancement beyond baseline. Financial literacy moderated these patterns: higher-expertise users showed a sharper decline after errors and stronger recovery following explanations. These findings reveal that AI trust follows predictable temporal patterns distinct from interpersonal trust, exhibiting heightened error sensitivity yet remaining amenable to repair through well-designed explanatory interventions. They offer a theoretical integration of appreciation and aversion phenomena and practical guidance for designing inclusive AI systems.
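To make the three-phase pattern concrete, here is a minimal simulation sketch of the trajectory the abstract describes. The function name, functional form, and all parameter values are hypothetical illustrations, not the authors' model; only the qualitative shape (formation, single-error shock, explanation-aided repair, and literacy moderation) is taken from the abstract.

```python
# Hypothetical sketch of the three-phase trust trajectory: formation,
# single-error shock, explanation-aided repair, with financial literacy
# as a moderator. Parameter values are invented for illustration.

def trust_trajectory(explanation: bool, literacy: float) -> list[float]:
    """Trust levels (0-1) across three advisory rounds.

    literacy ranges from 0 (low) to 1 (high); higher literacy is assumed
    to amplify both the post-error decline and the explanation-aided
    recovery, mirroring the reported moderation pattern.
    """
    trust = [0.70]                            # Round 1: formation via accuracy cues
    shock = 0.25 * (1.0 + 0.5 * literacy)     # Round 2: single-error decline
    trust.append(max(0.0, trust[-1] - shock))
    if explanation:                           # Round 3: repair via explanation
        recovery = 0.30 * (1.0 + 0.5 * literacy)
        trust.append(min(1.0, trust[-1] + recovery))  # may exceed baseline
    else:
        trust.append(trust[-1] + 0.05)        # weak spontaneous recovery only
    return [round(t, 3) for t in trust]

for lit in (0.2, 0.8):
    print(f"literacy={lit}: with explanation {trust_trajectory(True, lit)}, "
          f"without {trust_trajectory(False, lit)}")
```

Under these toy parameters, the high-literacy trajectory drops further after the error yet ends above its starting point when an explanation follows, matching the moderation and beyond-baseline recovery the abstract reports.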
- 10.1257/jel.52.1.5 · May 4, 2013 · Journal of Economic Literature · cited by 3004
- 10.1518/hfes.46.1.50_30392 · Jan 1, 2004 · Human Factors: The Journal of the Human Factors and Ergonomics Society · cited by 2125
- 10.1016/j.chbr.2025.100667 · May 1, 2025 · Computers in Human Behavior Reports
- 10.1016/j.artint.2018.07.007 · Oct 27, 2018 · Artificial Intelligence · cited by 3186
- 10.1177/1077699015606057 · Oct 5, 2015 · Journalism & Mass Communication Quarterly · cited by 426
- 10.1518/001872006777724408 · Jun 1, 2006 · Human Factors: The Journal of the Human Factors and Ergonomics Society · cited by 232
- 10.1016/j.obhdp.2018.12.005 · Feb 5, 2019 · Organizational Behavior and Human Decision Processes · cited by 939
- 10.4324/9781315095080 · Jul 5, 2017 · cited by 10
- 10.1080/14639220500535301 · Jul 1, 2007 · Theoretical Issues in Ergonomics Science · cited by 188
- Research Article · 10.1037/xge0001696 · Feb 1, 2025 · Journal of Experimental Psychology: General
The concept of trust in artificial intelligence (AI) has been gaining relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, it is disputed whether the processes underlying trust in AI resemble those of interpersonal trust (i.e., trust in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research, encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization in trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, specifically its ability and integrity dimensions, is a significant antecedent of trust, as are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation.
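The sketch below renders the path structure this abstract describes (antecedents feeding trust, trust feeding usage intentions and disclosure) as a simulation. It is not the authors' statistical model: every coefficient and noise level is a placeholder, and the least-squares step only checks that the simulated paths are recoverable.

```python
# Simulated path structure (antecedents -> trust -> consequences) under
# hypothetical weights; not the authors' model or their estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 1_001  # same order as the main study's sample

ability = rng.normal(size=n)            # trustworthiness: ability
integrity = rng.normal(size=n)          # trustworthiness: integrity
propensity = rng.normal(size=n)         # propensity to trust smart technology
anthropomorphism = rng.normal(size=n)

# Trust as a weighted sum of its antecedents (weights are placeholders).
trust = (0.40 * ability + 0.30 * integrity + 0.20 * propensity
         + 0.15 * anthropomorphism + rng.normal(scale=0.5, size=n))

# Consequences of trust (weights are placeholders).
use_intention = 0.50 * trust + rng.normal(scale=0.5, size=n)
disclosure = 0.35 * trust + rng.normal(scale=0.5, size=n)

# Sanity check: recover the antecedent weights by least squares.
X = np.column_stack([ability, integrity, propensity, anthropomorphism])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)
print("recovered antecedent weights:", coef.round(2))
```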
- Research Article · 10.5465/ambpp.2022.15302symposium · Aug 1, 2022 · Academy of Management Proceedings
Artificial Intelligence (AI) is transforming the way we work. The autonomous quality of AI and its ability to perform tasks that previously required human intelligence present a new set of trust challenges to organizations and employees and are reconfiguring the relationship between humans and technology. This symposium responds to a growing consensus on the need to understand employee trust in the context of AI at work. It showcases global research using diverse methodologies to advance novel empirical insights and conceptual developments on 1) the nature and determinants of employee trust and acceptance of AI systems at work, 2) how leaders and employees can manage and navigate AI technology adoption in a way that is enabling, human-centric, and supportive of trust, and 3) how AI integration is affecting trust relationships at work. Presentations:
  - Understanding Employee Trust in AI-Enabled HR Processes: A Multinational Survey. Presenters: Steve Lockey, Nicole Gillespie, Caitlin Curtis (U. of Queensland)
  - Trust of Algorithmic Evaluations: On the Importance of Voice Opportunities and Humble Leadership. Presenters: Jack McGuire, Devesh Narayanan (National U. of Singapore); David De Cremer (NUS Business School)
  - Full speed ahead? Exploring the double-edged impact of smart workplace technology on employees. Presenters: Simon Daniel Schafheitle (U. of Twente); Antoinette Weibel (U. of St. Gallen); Christophe Schank (U. of Vechta)
  - Maintaining Employee Trust in Adopting Artificial Intelligence to Augment Team Knowledge Work. Presenters: Kirsimarja Blomqvist, Paula Strann, Dominik Siemon (LUT U.)
  - The Dynamics of Trust in AI and Interpersonal Trust in Organizations. Presenters: Brian Park, C. Ashley Fulmer (Georgia State U.); David Lehman (U. of Virginia)
- Research Article · 10.33693/2223-0092-2023-13-1-74-79 · Feb 15, 2023 · Sociopolitical Sciences
This article discusses the sociological analysis of the role of the media in the formation of trust in artificial intelligence. The authors studied the fields of application of artificial intelligence, its relationship with the media, and the concept of natural language processing. Natural language processing is one of the newest applications of artificial intelligence and can be integrated by mass media into the process of writing reports and texts. An important place in this work is given to the authors' own research, conducted in August 2022. The purpose of that study was to identify the relationship between how people relate to the media and to artificial intelligence, and how they evaluate headlines created by journalists or by AI. The results revealed similarities between texts composed by artificial intelligence and texts written by journalists. In the final part of the article, the authors formulate the main recommendations for increasing confidence in artificial intelligence in the context of media influence. The key conclusion is that, with automatic text generation, consumers' perception of news quality plays an important role in establishing the relationship between people and artificial intelligence.
- Research Article · 10.1016/j.techfore.2022.121763 · May 28, 2022 · Technological Forecasting and Social Change · cited by 109
To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts
- Research Article · 10.1108/intr-07-2021-0446 · Feb 2, 2023 · Internet Research · cited by 49
Purpose: The deployment of artificial intelligence (AI) technologies in travel and tourism has received much attention in the wake of the pandemic. While societal adoption of AI has accelerated, it also raises trust challenges. Literature on trust in AI is scant, especially regarding the vulnerabilities faced by different stakeholders to inform policy and practice. This work proposes a framework for understanding the use of AI technologies from the perspectives of the institution and the self, to explain the formation of trust in the mandated use of AI-based technologies among travelers. Design/methodology/approach: An empirical investigation using partial least squares structural equation modeling was employed on responses from 209 users. The paper considered factors related to the self (perceptions of self-threat, privacy empowerment, trust propensity) and the institution (regulatory protection, corporate privacy responsibility) to understand the formation of trust in AI use among travelers. Findings: Results showed that self-threat, trust propensity, and regulatory protection influence users' trust in AI use; privacy empowerment and corporate responsibility do not. Originality/value: Insights from past studies on AI in travel and tourism are limited. This study advances the literature on affordance and reactance theories to provide a better understanding of what makes travelers trust the mandated use of AI technologies. It also demonstrates the paradoxical effects of self and institution on technologies and their relationship to trust. For practice, the study offers insights for enhancing adoption by developing trust.
- Research Article · 10.1016/j.ins.2024.120759 · May 21, 2024 · Information Sciences · cited by 16
VIRTSI: A novel trust dynamics model enhancing Artificial Intelligence collaboration with human users – Insights from a ChatGPT evaluation study
- Research Article · 10.1177/1071181322661098 · Sep 1, 2022 · Proceedings of the Human Factors and Ergonomics Society Annual Meeting · cited by 2
Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with the increasing amount of information available to them. Trust is a complex, dynamic phenomenon, which drives adoption (or disuse) of technology. We conducted a naturalistic study with intelligence professionals (planners, collectors, analysts, etc.) to understand trust dynamics with AI systems. We found that on a long-enough time scale, trust in AI self-repaired after incidents where trust was lost, usually based merely on the assumption that AI had improved since participants last interacted with it. Similarly, we found that trust in AI increased over time after incidents where trust was gained in the AI. We termed this general trend “buoyant trust in AI,” where trust in AI tends to increase over time, regardless of previous interactions with the system. Key findings are discussed, along with possible directions for future research.
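As a toy rendering of that "buoyant trust" pattern, the sketch below lets trust drift upward over time after an incident, independent of any new evidence. The function and its drift and drop parameters are invented for illustration, not taken from the study.

```python
# Toy model of "buoyant trust": after an incident, trust self-repairs
# through time-based upward drift alone. All parameters are invented.
def buoyant_trust(initial: float, incident_drop: float, steps: int,
                  drift: float = 0.03, ceiling: float = 1.0) -> list[float]:
    level = max(0.0, initial - incident_drop)   # trust lost in the incident
    path = [round(level, 3)]
    for _ in range(steps):
        # Upward drift stands in for "the AI has probably improved since".
        level = min(ceiling, level + drift)
        path.append(round(level, 3))
    return path

print(buoyant_trust(initial=0.8, incident_drop=0.4, steps=10))
# Trust climbs back toward its ceiling without any new interaction.
```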
- Research Article · 10.3389/fpsyg.2024.1382693 · Apr 17, 2024 · Frontiers in Psychology · cited by 20
The rapid advancement of artificial intelligence (AI) has affected society in many respects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimensional framework (the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.
- Book Chapter · 10.1007/978-981-16-9492-9_212 · Jan 1, 2022 · cited by 1
Human trust in artificial intelligence is one of the core issues affecting the development of human-machine cooperation. At present, research on trust in human-computer cooperation comes mainly from computer science, focusing on how to build, realize, and optimize the computing and processing power of artificial intelligence systems for specific tasks. Research on what factors affect people's trust in artificial intelligence systems, and on how to measure that trust accurately, is still in its infancy, and it still lacks empirical studies involving human users or rigorous behavioral-science experimental methods. This paper reviews research approaches to interpersonal trust in behavioral science, discusses the trust relationship between humans and artificial intelligence systems, and analyzes the factors influencing individuals' trust attitudes toward artificial intelligence systems. The paper provides a theoretical basis for establishing trust computation models in human-machine cooperation. Keywords: human-machine trust; behavioral science; ergonomics
- Research Article · 10.55041/ijsrem28468 · Feb 8, 2024 · International Journal of Scientific Research in Engineering and Management
With artificial intelligence (AI) continuing to pervade many aspects of society, it is critical to comprehend the dynamics of trust in AI decision-making and human-AI interaction. This study explores the many facets of trust and examines how they influence user attitudes, actions, and the general effectiveness of AI systems. To understand the complex interactions between intelligent machines and people, the research incorporates multidisciplinary viewpoints from psychology, human-computer interaction, and ethics. The first area of inquiry is what influences the formation of initial trust in AI. Through empirical study, we investigate how users' willingness to trust AI-driven technology is shaped by system transparency, explainability, and user experience. Insights gained at this stage inform the creation of design concepts intended to build a foundation of trust in AI systems. The second aspect of the study focuses on how trust changes over time in extended encounters between humans and artificial intelligence. By monitoring user experiences and system performance, we study the dynamics of trust-building and erosion, clarifying the critical points and factors that shape trust's trajectory. This long-term viewpoint aids the creation of adaptable artificial intelligence systems that can respond to changing user demands and address issues of trust. The third line of investigation concerns the function of trust in AI-influenced decision-making processes. Using experimental scenarios and real-world case studies, we evaluate the extent to which users depend on AI-generated insights and the influence of trust on decision outcomes. This stage clarifies the fine balance needed to maximise collaboration between AI and humans and emphasises the significance of matching AI suggestions with user values. The research concludes with an examination of the consequences of trust in AI for wider societal contexts, with a focus on ethical issues: accountability frameworks, the potential fallout from blind trust, and the moral obligations of AI engineers in creating reliable systems. This investigation of the role of trust in human-AI interaction and decision-making ultimately aims to provide actionable insights for the design, implementation, and governance of AI technologies, fostering a symbiotic relationship between humans and intelligent systems in a world increasingly driven by AI.
- Research Article · 10.1108/ajim-11-2024-0898 · May 27, 2025 · Aslib Journal of Information Management
Purpose: This paper provides a comprehensive bibliometric analysis of AI research focusing on trust, credibility, and related issues in automated systems across diverse fields. The study offers a brief systematic literature review identifying key themes and trends within the literature, emphasizing the critical role of trust in AI systems such as autonomous robotics, software engineering, and human-agent interaction. Design/methodology/approach: Using a bibliometric approach, data were collected from Scopus spanning the years 1987-2024. The study systematically analyzes publication types, collaborative patterns, subject areas, and citation impact. It also identifies key thematic areas related to trust and credibility in AI applications, such as mobile ad hoc networks (MANETs), peer-to-peer networks, and decision-making under uncertainty. Findings: A total of 111 papers were published between 1987 and 2024, an average of 2.92 publications annually, 60.36% of which appeared in journals. Collaboration has significantly increased, with an average of 3.48 authors per paper. The period from 2020 to 2024 witnessed a surge in both publications (41 in 2024) and authorship (139 authors). Key contributors, such as Yang from the University of Michigan and high-impact authors from Purdue University, highlight the global scope of AI research. The systematic review identifies central themes such as "trust dynamics," "credibility assessment," and "human-robot interaction" as crucial areas within the literature. Research limitations/implications: The study emphasizes the need for better visibility for conference proceedings and emerging research topics, such as reinforcement learning. The analysis also reveals the growing significance of trust and credibility in AI-driven systems, especially as AI becomes more integrated into decision-making processes, providing a roadmap for future research. Practical implications: Publishing in high-impact journals such as the Journal of Management Information Systems significantly enhances research visibility, while other journals may require strategies to improve their citation potential. As AI applications continue to expand, themes like trust and credibility assessment are essential for fostering effective human-AI collaboration across interdisciplinary fields. Originality/value: This study delivers a unique combination of bibliometric analysis and systematic literature review, shedding light on key research trends in AI, particularly in the context of trust and credibility. It provides valuable insights into collaborative research patterns, institutional contributions, and the evolution of trust-related themes, positioning it as a key reference for future exploration of AI and trust dynamics.
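A quick arithmetic sketch (assuming the 1987-2024 span is counted inclusively) shows how the reported annual average follows from the counts above.

```python
# Reproduce the reported bibliometric averages from the raw counts.
papers = 111
span_years = 2024 - 1987 + 1          # 38 years, counted inclusively
print(round(papers / span_years, 2))  # 2.92 publications per year
print(round(0.6036 * papers))         # ~67 of the papers appeared in journals
```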
- Research Article · 10.1111/1475-6773.01070 · Oct 1, 2002 · Health Services Research · cited by 445
To develop and test a multi-item measure for general trust in physicians, in contrast with trust in a specific physician. Random national telephone survey of 502 adult subjects with a regular physician and source of payment. Based on a multidimensional conceptual model, a large pool of candidate items was generated, tested, and revised using focus groups, expert reviewers, and pilot testing. The scale was analyzed for its factor structure, internal consistency, construct validity, and other psychometric properties. The resulting 11-item scale measuring trust in physicians generally is consistent with most aspects of the conceptual model except that it does not include the dimension of confidentiality. This scale has a single-factor structure, good internal consistency (alpha = .89), and good response variability (range = 11-54; mean = 33.5; SD = 6.9). This scale is related to satisfaction with care, trust in one's physician, following doctors' recommendations, having no prior disputes with physicians, not having sought second opinions, and not having changed doctors. No association was found with race/ethnicity. While general trust and interpersonal trust are qualitatively similar, they are only moderately correlated with each other and general trust is substantially lower. Emerging research on patients' trust has focused on interpersonal trust in a specific, known physician. Trust in physicians in general is also important and differs significantly from interpersonal physician trust. General physician trust potentially has a strong influence on important behaviors and attitudes, and on the formation of interpersonal physician trust.
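Since the scale evaluation above reports internal consistency as Cronbach's alpha (.89), here is the standard alpha computation from an items matrix. The simulated responses are placeholders, not the survey data; only the shape mirrors the study (502 respondents, 11 items).

```python
# Standard Cronbach's alpha from an items matrix (rows = respondents,
# columns = items). The data below are simulated stand-ins.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(502, 1))                       # shared trust factor
items = latent + rng.normal(scale=0.8, size=(502, 11))   # 11 correlated items
print(round(cronbach_alpha(items), 2))                   # high alpha expected
```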
- Conference Article · 10.15405/epsbs.2021.09.02.125 · Sep 25, 2021
The article examines the development of schooling forms in the context of the trust that the subjects of the educational process place in educational institutions. The purpose of the study is to consider the development of homeschooling in the context of trust in the educational system. The research is based on theoretical approaches that consider trust both as an interpersonal reflection that ensures cooperation in social interactions at the interpersonal level, and as social capital that ensures the effectiveness of the educational system as an important social institution. The author examines parents' positions in choosing educational trajectories for their children in the context of trust in the schooling system; to identify these positions, secondary data were analyzed. The author presents parents' opinions about trust in school as an element of the educational system and the reasons for choosing homeschooling as an alternative to full-time education, distinguishing institutional from interpersonal trust. Subjective experience is shown to be a significant factor in the formation of trust, affecting both institutional trust in educational institutions in general and interpersonal trust within the educational communications of participants in the educational process. The author defines the problems of developing homeschooling in the context of trust in school educational institutions.
- Conference Article · 10.1145/3442188.3445923 · Mar 1, 2021 · cited by 327
Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, and how can we promote them, or assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, interpersonal trust (i.e., trust between people) as defined by sociologists. This model rests on two key properties: the vulnerability of the user; and the ability to anticipate the impact of the AI model's decisions. We incorporate a formalization of 'contractual trust', such that trust between a user and an AI model is trust that some implicit or explicit contract will hold, and a formalization of 'trustworthiness' (that detaches from the notion of trustworthiness in sociology), and with it concepts of 'warranted' and 'unwarranted' trust. We present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between trust and XAI using our formalization.
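A small sketch of how the "contractual trust" idea could be rendered in code: trust is trust that a specific contract will hold, and it is warranted only when the contract actually holds. The class and field names are illustrative inventions, not the paper's formalism.

```python
# Illustrative encoding of contractual trust: warranted trust is trust
# that tracks whether the contract actually holds. Names are invented.
from dataclasses import dataclass

@dataclass
class Contract:
    description: str   # the implicit or explicit expectation of the AI
    holds: bool        # whether the AI in fact satisfies it

@dataclass
class TrustJudgment:
    user_trusts: bool  # the user anticipates the contract will hold
    contract: Contract

    @property
    def warranted(self) -> bool:
        # Trust placed in a contract that does not hold is unwarranted.
        return self.user_trusts and self.contract.holds

judgment = TrustJudgment(
    user_trusts=True,
    contract=Contract("flags uncertain advice before the user acts", holds=False),
)
print("warranted:", judgment.warranted)  # False, i.e., unwarranted trust
```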
- Research Article · 10.64252/9thtr391 · Sep 27, 2025 · International Journal of Environmental Sciences
This study explores the impact of artificial intelligence (AI) and machine learning (ML) on the investment strategies of engineering faculty members in the Bangalore region. As financial markets become more complex, these technologies present significant opportunities to enhance portfolio management. The primary goal of this research is to examine how faculty members are incorporating AI and ML tools into their personal investment practices to improve portfolio performance, manage risks, and make informed decisions. Additionally, the study highlights gaps in knowledge, accessibility, and application of these technologies, offering important insights into areas needing further development. By conducting surveys and interviews, data will be gathered on faculty members' awareness, usage patterns, and trust in AI and ML-driven investment approaches. The research will evaluate their understanding of the advantages of using AI, such as real-time data analysis, predictive modeling, and automated decision-making, as well as the obstacles they encounter, including a lack of technical expertise or skepticism regarding the reliability of these tools. The findings from this study will help identify current challenges and provide actionable recommendations for improving the adoption of AI and ML in portfolio management among engineering faculty. These insights will also be useful for educational institutions, financial advisors, and technology developers in customizing AI solutions to better align with the investment management needs of academic professionals. This research will contribute to both the academic and investment sectors by bridging the gap between traditional investment methods and technology-driven solutions, ultimately enabling faculty members to make more informed investment decisions.