Abstract

Algorithms increasingly play a pivotal role in organizations' day-to-day operations; however, a general distrust of artificial intelligence-based algorithms and automated processes persists. This aversion to algorithms raises questions about the drivers that lead managers to trust or reject their use. This conceptual paper provides an integrated review of how users experience their encounters with AI-based algorithms over time. This is important for two reasons: first, an algorithm's functional activities change over time through machine learning; and second, users' trust develops with their level of knowledge of a particular algorithm. Based on our review, we propose an integrative framework to explain how users' perceptions of trust change over time. This framework extends current understandings of trust in AI-based algorithms in three ways: First, it distinguishes between the formation of initial trust in AI-based algorithms and trust over time, and specifies the determinants of trust in each phase. Second, it links the transition between initial trust and trust over time to representations of the technology as either human-like or system-like. Third, it identifies the additional determinants that intervene during this transition phase.
