Abstract

As surveillance capitalism continues to collect data and automate decisions, those on the receiving end of such actions are often rendered speechless. It is not that we are left in awe of the great force of these technologies; rather, we lack the language that could properly define and defend against their onslaught. Often, we feel powerless as our data is extracted and then aimed at us to manipulate our own choices, and yet the conversation around these practices seems stultified. Algorithms have shifted the technological landscape by co‐opting human decision‐making and, in the process, have co‐opted the very tool that we use to respond to such changes: our ability to create language through individual action and collective reasoning.

Humans deal with shifting technological landscapes by evolving their language. Language development starts with individual decisions that are then justified and presented to the group for judgment. As a result, moral frameworks develop, and over time legal language shifts as common law courts and legislatures adopt new moral structures into legal code. Algorithms threaten to displace human decision‐making. Algorithms act, but unlike humans, they do not offer reasons. They halt the development of language by removing human decision‐making and cutting short the supply of decision‐rationales that fuels the conversation. The conversation can be reinvigorated by iterating and upgrading our language to apply to new situations and technological contexts.

We apply this language theory to decisional privacy. Decisional privacy dictates that some realms of human decision, like sex and reproduction, so deeply implicate private life that the government is restrained by constitutional principles from intruding upon them. These cases protect people’s capacity to make decisions and, by extension, their ability to provide and judge rationalizations for actions. Decided in the context of twentieth‐century totalitarianism, decisional privacy cases sought to prevent government interference in private life, but now the greatest threat to private life is corporate, not governmental. The context has shifted, and so too should our protections for private decisions. We offer the discussion of decisional privacy as a starting point in the conversation about moral constructions. That line of cases represents an acknowledgement of essential human values: people have the capability to make decisions regarding their home, family, health, and children. By preserving the essential ethic of these cases, we can preserve the decision‐rationale‐judgment process that fuels our conversations and develops moral frameworks. Iterating the language of decisional privacy against instrumentarian corporate actors allows us to keep the conversation about algorithmic determinism open as we develop new ways of describing and defending against invasions of our private life.
