Abstract

Algorithms increasingly curate the information we see online, prioritizing attention and engagement. By catering to personal preferences, they confirm existing opinions and reinforce cognitive biases. For polarizing topics such as climate change or abortion rights, the combination of algorithmic information curation and cognitive biases can easily skew our perception, undermining our critical thinking abilities and creating a breeding ground for misinformation. Curbing the spread of misinformation requires a research agenda around the interplay between cognitive biases, computing systems, and online platform design. In this article, we synthesize insights from a workshop series, propose such a research agenda, and sketch a blueprint for technologies that support critical thinking, viewed through the lens of human–computer interaction and design. We discuss the affordances of online media and how platforms could prioritize teaching users to better spot misinformation and conduct themselves in online environments.
