Abstract

The use of algorithm recommendation systems (ARS), which collect and analyze users' personal data to generate personalized recommended items, has become widespread in the era of mobile internet. While the related literature has focused on privacy issues and trust toward ARS or recommended items, societal aspects such as social trust (trust in the people and organizations responsible for an algorithm) and algorithmic equity have been overlooked. Drawing on the theory of social trust, we investigate the psychological mechanism linking social trust to users' intention to interact with ARS (e.g., to click recommended items and to continue using the system). Based on a survey of young mobile internet users in China, we show that, first, social trust is positively associated with perceived benefits and perceived algorithmic equity, which are in turn linked to the intention to click recommended items and the intention to continue using ARS (i.e., continuance intention). Second, social trust is negatively related to perceived risks, which in turn reduce the intention to click. Further, algorithm aversion weakens the negative association between social trust and perceived risks. Our study contributes to the ARS literature from a societal perspective by validating the importance of social trust, demonstrating the key role of perceived algorithmic equity, and examining the trait of algorithm aversion as a moderator in the formation of algorithm-related perceptions. We also offer practical suggestions for app developers and policymakers concerning algorithmic equity, transparency, and privacy-risk management and regulation of ARS.
