Abstract

This paper explores and rehabilitates the value of decisional privacy as a conceptual tool, complementary to informational privacy, for critiquing personalized choice architectures employed by self-tracking technologies. Self-tracking technologies are promoted and used as a means to self-improvement. Based on large aggregates of personal data and the data of other users, self-tracking technologies offer personalized feedback that nudges the user into behavioral change. The real-time personalization of choice architectures requires continuous surveillance and is a very powerful technology, recently coined as “hypernudging.” While users celebrate the increased personalization of their coaching devices, “hypernudging” technologies raise concerns about manipulation. This paper addresses that intuition by claiming that decisional privacy is at stake. It thus counters the trend to solely focus on informational privacy when evaluating information and communication technologies. It proposes that decisional privacy and informational privacy are often part of a mutually reinforcing dynamic. Hypernudging is used as a key example to illustrate that the two dimensions should not be treated separately. Hypernudging self-tracking technologies compromise autonomy because they violate informational and decisional privacy. In order to effectively judge whether technologies that use hypernudges empower users, we need both privacy dimensions as conceptual tools.

Highlights

  • New technologies that use our data in order to steer our behavior are often accompanied by worries about manipulation

  • This paper explores and rehabilitates the value of decisional privacy as a conceptual tool, complementary to informational privacy, for critiquing personalized choice architectures employed by self-tracking technologies

  • Personalized feedback offered by self-tracking technologies could be interpreted as harmless “nudges,” as ways to scaffold a user’s autonomy by offering “a form of choice architecture that changes the behaviour of people in a predictable way without forbidding any other options or changing their economic incentives”

Summary

Introduction

New technologies that use our data in order to steer our behavior are often accompanied by worries about (mass) manipulation. Fuelled by real-time data, algorithms create personalized online choice architectures that aim to nudge individual users to effectively change their behavior. I conclude that self-tracking technologies that use hypernudging compromise a user’s autonomy, because they violate both informational and decisional privacy. Most self-tracking technologies are still at an early stage of development. Their potential with regard to behavioral change and steering choices is growing along with the rapid progress being made in real-time data processing, predictive analytics, and Big Data-driven (automated or guided) decision-making processes. The potential of behavioral change through self-tracking lies in highly personalized online choice architectures enabled by smart algorithms that learn from and adapt to the behavior of the user (Michie et al. 2017). For the purpose of this paper, I criticize self-tracking technologies that use Big Data-driven decision-making processes and are hosted by corporations and governmental institutions.

Features of Nudging
Nudges Versus Hypernudges
Two Complementary Dimensions
Privacy and Autonomy
Dynamic Dimensions
Controlling Access to Information about Decisions
Controlling Interference with Decisions Based on Information
Three Objections
Conclusion