Abstract

Our lives are increasingly mediated, regulated, and produced by algorithmically driven software that is often invisible to the people whose lives it affects. Online, much of the content we consume is delivered to us through algorithmic recommender systems ("recommenders"). Although the techniques of such recommenders and the specific algorithms that underlie them differ, they share one basic assumption: that individuals are "users" whose preferences can be predicted from their past actions and behaviors. While this set of assumptions may be largely unconscious and even uncontroversial, we draw upon Andrew Feenberg's work to demonstrate that recommenders embody a "formal bias" that has social implications. We argue that this bias stems from the "technical code" of recommenders, which we identify as a form of behaviorism. Studying the assumptions and worldviews that recommenders put forth tells us something about how human beings are understood in a time when algorithmic systems are ubiquitous. Behaviorism, we argue, forms the episteme that grounds the development of recommenders. What we refer to as the "behavioral code" of recommenders promotes an impoverished view of what it means to be human. Leaving this technical code unchallenged prevents us from exploring alternative, perhaps more inclusive and expansive, pathways for understanding individuals and their desires. Furthermore, by problematizing formations that have successfully rooted themselves in technical codes, this chapter extends Feenberg's critical theory of technology into a domain that is both ubiquitous and undertheorized.

Keywords: Technical code · Behaviorism · Recommender systems · Formal bias · Andrew Feenberg · B.F. Skinner · Algorithms · Data
