Abstract

Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke (Philosophy and Technology, 35, 7, 2022) argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism (what I shall call ‘implied political labeling’) that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications (thus causing ‘looping effects’). Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.