Abstract

In the algorithmic society, personal privacy is exposed to ever-growing risks because platforms require huge volumes of data for algorithm training. Around the world, ordinary users, faced with powerful platforms and black-boxed algorithms, often feel powerless against elusive privacy invasions and therefore turn to third-party proxy institutions such as governments and legislatures to rebalance the algorithmic privacy-security framework. Against this backdrop, the present study examines what triggers users' support for third-party proxy control, estimating a moderated serial mediation model on a Chinese cross-sectional sample (N = 661). The results suggest that users' algorithm awareness and their presumed algorithmic privacy risk to themselves and to others (elders and minors) significantly predict this support, and that the serial mediating effects of presumed algorithmic privacy risk are more pronounced at higher levels of perceived effectiveness of platform policy. These findings highlight the crucial role of algorithm awareness, which equips users to navigate risk and behave as responsible digital citizens, and extend the influence of presumed influence model and control agency theory to algorithmic contexts, making contributions to both theory and practice.
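The abstract does not report the estimation details, so the sketch below is only a rough illustration of how a moderated serial mediation model of this kind could be estimated with ordinary least squares and a bootstrapped conditional indirect effect. All variable names (aware, risk_self, risk_others, policy, support), the simulated data, and the assumption that the moderator operates on the second-stage path are placeholders for illustration, not the authors' specification.

```python
# Minimal sketch (not the authors' specification) of a moderated serial
# mediation analysis: X -> M1 -> M2 -> Y, with W assumed to moderate the
# M2 -> Y path. All variable names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 661  # sample size reported in the abstract

# Simulated data standing in for the survey measures.
X = rng.normal(size=n)                      # algorithm awareness
M1 = 0.4 * X + rng.normal(size=n)           # presumed privacy risk to self
M2 = 0.4 * M1 + rng.normal(size=n)          # presumed privacy risk to others
W = rng.normal(size=n)                      # perceived effectiveness of platform policy
Y = 0.3 * M2 + 0.2 * M2 * W + rng.normal(size=n)  # support for proxy control
df = pd.DataFrame(dict(aware=X, risk_self=M1, risk_others=M2, policy=W, support=Y))

def indirect_effect(data, w_value):
    """Conditional serial indirect effect a1 * d21 * (b2 + b3 * W) at a given W."""
    a1 = smf.ols("risk_self ~ aware", data).fit().params["aware"]
    d21 = smf.ols("risk_others ~ risk_self + aware", data).fit().params["risk_self"]
    m_y = smf.ols("support ~ aware + risk_self + risk_others * policy", data).fit()
    b2 = m_y.params["risk_others"]
    b3 = m_y.params["risk_others:policy"]
    return a1 * d21 * (b2 + b3 * w_value)

# Percentile bootstrap of the conditional indirect effect at low/high moderator levels.
lo_w = df["policy"].mean() - df["policy"].std()
hi_w = df["policy"].mean() + df["policy"].std()
boot = np.array([
    [indirect_effect(df.sample(frac=1, replace=True, random_state=i), w)
     for w in (lo_w, hi_w)]
    for i in range(1000)
])
for label, col in zip(("low policy effectiveness", "high policy effectiveness"), boot.T):
    print(f"{label}: indirect = {col.mean():.3f}, "
          f"95% CI = [{np.percentile(col, 2.5):.3f}, {np.percentile(col, 97.5):.3f}]")
```

Under this layout, a stronger indirect effect at the high moderator level would correspond to the pattern described in the abstract; the actual study may have used a different model configuration (e.g., a PROCESS-style macro with other conditioned paths).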
