Abstract

This article makes two major contributions. First, it develops a methodology to investigate techno-social engineering of human beings. Many claim that technology dehumanizes, but this article is the first to develop a systematic approach to identifying when technologies dehumanize. The methodology depends on a fundamental and radical repurposing of the Turing test. The article develops an initial series of human-focused tests to examine different aspects of intelligence and distinguish humans from machines: (a) mathematical computation, (b) random number generation, (c) common sense, and (d) rationality. All four are plausible reverse Turing tests that could generally be used to distinguish humans from machines. Yet the first two do not implicate fundamental notions of what it means to be a human; the third and fourth do. When these latter two tests are passed, we have good reason to question and evaluate the humans and the techno-social environment within which they are situated.

Second, this article applies insights from the common sense and rationality tests to evaluate the ongoing behavioral law and economics project of nudging us to become rational humans. Based on decades of findings from cognitive psychologists and behavioral economists, this project has influenced academics across many disciplines and public policies around the world. There are a variety of institutional means for implementing nudges to improve human decision making in contexts where humans tend to act irrationally or contrary to their own welfare. Cass Sunstein defines nudges more narrowly and carefully as low-cost, choice-preserving, behaviorally informed approaches to regulatory problems, including disclosure requirements, default rules, and simplification. These approaches tend to be transparent and more palatable. But there are other approaches as well, such as covert nudges like subliminal advertising. The underlying logic of nudging is to construct or modify the choice architecture, the environment within which humans make decisions. Yet as Lawrence Lessig made clear long ago, architecture regulates powerfully but subtly, and it can easily run roughshod over values that don't matter to the architects. Techno-social engineering through (choice) architecture is rampant and will grow in scale and scope in the near future, and it demands close attention because of its subtle influence on both what people do and what people believe to be possible. Accordingly, this article evaluates nudging as a systematic agenda in which institutional decisions about particular nudges aggregate and set a path that entails techno-social engineering of humans and society.

The article concludes with two true stories that bring these two contributions together. Neither is quite a story of dehumanization in which humans become indistinguishable from machines. Rather, each is an example of an incremental step in that direction. The first concerns techno-social engineering of children's preferences. It is the story of a simple nudge, implemented through a wearable technology distributed in an elementary school to encourage fitness. The second concerns techno-social engineering of human emotions: the Facebook Emotional Contagion Experiment. It is not (yet) a conventional nudge, but it relies on the underlying logic of nudging. Both can be seen as steps along the same path.
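To make the idea of a reverse Turing test concrete, the following is a minimal, hypothetical sketch (not drawn from the article itself) of how one of the first two tests, random number generation, might be operationalized. The function name and thresholds are illustrative assumptions; the underlying observation is simply that humans asked to produce "random" digits tend to avoid immediate repeats and over-use certain digits, whereas a machine's pseudo-random output is statistically much closer to uniform.

```python
# Hypothetical sketch of a random-number-generation reverse Turing test.
# Flags a digit sequence as machine-like if its digit frequencies are near
# uniform AND immediate repeats occur at roughly the chance rate (~10%),
# which human-generated "random" sequences typically suppress.

from collections import Counter
import random


def looks_machine_generated(digits: str,
                            max_freq_deviation: float = 0.03,
                            min_repeat_rate: float = 0.07) -> bool:
    """Crude heuristic; thresholds are illustrative assumptions only."""
    n = len(digits)
    if n < 100:
        raise ValueError("Need a longer sequence for a meaningful test")

    # Largest deviation of any digit's observed frequency from the uniform 10%.
    counts = Counter(digits)
    max_dev = max(abs(counts.get(str(d), 0) / n - 0.10) for d in range(10))

    # Fraction of adjacent positions where a digit immediately repeats.
    repeat_rate = sum(a == b for a, b in zip(digits, digits[1:])) / (n - 1)

    return max_dev <= max_freq_deviation and repeat_rate >= min_repeat_rate


# A pseudo-random sequence will usually be flagged as machine-like,
# while a human-typed "random" sequence usually will not.
machine_seq = "".join(str(random.randint(0, 9)) for _ in range(1000))
print(looks_machine_generated(machine_seq))
```

As the abstract notes, passing or failing a test like this one says little about what it means to be human; it is the common sense and rationality tests that carry that weight.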
