Abstract

Human social learning is increasingly occurring on online social platforms, such as Twitter, Facebook, and TikTok. On these platforms, algorithms exploit existing social-learning biases (i.e., towards prestigious, ingroup, moral, and emotional information, or 'PRIME' information) to sustain users' attention and maximize engagement. Here, we synthesize emerging insights into 'algorithm-mediated social learning' and propose a framework that examines its consequences in terms of functional misalignment. We suggest that, when social-learning biases are exploited by algorithms, PRIME information becomes amplified via human-algorithm interactions in the digital social environment in ways that cause social misperceptions and conflict, and spread misinformation. We discuss solutions for reducing functional misalignment, including algorithms promoting bounded diversification and increasing transparency of algorithmic amplification.
