The Danger of Contextual Integrity
Contextual integrity has now become a (the?) dominant academic theory of privacy. It identifies privacy as both complex and social, two alluring attributes that other leading theories reject. Scholars who engage contextual integrity mostly do so only to convey their confidence in it as their working framework. Even passingly critical notes are rare. This article offers a legal realist critique: Were contextual integrity adopted as a legal standard, it would undermine the very values it was intended to protect, systematically favoring data-hungry corporations at the expense of an already shrinking zone of protected individual privacy. Contextual integrity is dangerous precisely because of the complexity and sociality that draw so many scholars to it. In an adversarial courtroom that pits corporate data interests against aggrieved individuals, these theoretical virtues favor the more sophisticated, well-funded, repeat player.
- Research Article
- Citations: 10
- DOI: 10.3233/shti190166
- Jan 1, 2019
- Studies in Health Technology and Informatics
My Health Record (MyHR) is Australia's national personally controlled electronic health record. Initially established in 2012, it moved from an opt-in to an opt-out system in 2018. This paper considers the privacy aspects of the MyHR shared health summary. Drawing on Nissenbaum's theory of privacy as contextual integrity, we argue that the shift away from event-specific information sharing leads MyHR to breach contextual integrity. Following Nissenbaum's decision heuristic for contextual integrity, we evaluate this breach by reflecting on the changing nature of health care, including patient empowerment and the greater complexity of care. It is evident that more needs to be known about the benefits of shared health summaries, as well as the actual use of MyHR by clinicians and patients. Though we focus on MyHR, this evaluation has broader applicability to other national electronic health records and electronic shared health summaries.
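For readers unfamiliar with the decision heuristic invoked above: contextual integrity describes an information flow with five parameters (data subject, sender, recipient, information type, and transmission principle) and treats a novel flow that departs from the entrenched norms of its context as a prima facie violation, to be weighed against the context's ends and values. The sketch below is a minimal illustration of that first step only; the dataclass, the single example norm, and the MyHR-style flow are our own assumptions, not anything specified in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, described by contextual integrity's five parameters."""
    subject: str     # whom the information is about
    sender: str      # who transmits the information
    recipient: str   # who receives it
    info_type: str   # the attribute transmitted
    principle: str   # transmission principle (e.g., consent, confidentiality)

# A single hypothetical entrenched norm for the health-care context; a real
# analysis would enumerate the norms actually operative in that context.
ENTRENCHED_NORMS = {
    Flow("patient", "patient", "treating clinician",
         "shared health summary", "consent for a specific episode of care"),
}

def prima_facie_violation(flow: Flow) -> bool:
    """First step of the heuristic: a flow matching no entrenched norm is a
    prima facie breach, pending evaluation against the context's ends and values."""
    return flow not in ENTRENCHED_NORMS

# An opt-out MyHR-style flow as characterised in the abstract: sharing is no
# longer tied to a specific care event (parameter values are illustrative).
myhr_flow = Flow("patient", "system operator", "any registered provider",
                 "shared health summary", "opt-out, not event-specific")
print(prima_facie_violation(myhr_flow))  # True: departs from the entrenched norm
```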
- Research Article
- Citations: 38
- DOI: 10.1016/j.ins.2015.07.013
- Jul 11, 2015
- Information Sciences
Implicit Contextual Integrity in Online Social Networks
- Research Article
- DOI: 10.1111/josp.12504
- Dec 21, 2022
- Journal of Social Philosophy
Social pathologies of informational privacy
- Book Chapter
- Citations: 1
- DOI: 10.1007/978-981-13-1165-9_7
- Sep 29, 2018
Snapchat, by its ephemeral nature, has always portrayed itself as a service through which users can securely send messages that vanish after viewing. The research examined Snapchat's recent updates in light of Helen Nissenbaum's theory of privacy as “contextual integrity.” User profiling, the replay feature, third-party app tracking, and the privacy policy seriously violate the service's information and distribution norms, a breach of contextual integrity. Many questions surround the allegedly false sense of privacy Snapchat publicizes, since users can hold only a very low expectation of privacy in any electronic messaging. Snapchat has been accused of denying its users even the most basic privacy protection by failing to provide an adequate level of encryption (end-to-end) as a default. The privacy issues identified could be addressed if Snapchat made better architectural design decisions.
- Research Article
- Citations: 3
- DOI: 10.1016/j.clsr.2021.105565
- Jun 18, 2021
- Computer Law & Security Review
Analysis of the attributes of rights to inferred information and China's choice of legal regulation
- Research Article
- Citations: 7
- DOI: 10.3233/ip-2011-0257
- Dec 24, 2011
- Information Polity
This article presents a critical review of Helen Nissenbaum's 'Privacy in Context'. Nissenbaum's book is set to become a seminal work both for privacy scholars around the world and for many other researchers who increasingly deal with privacy-related topics. Indeed, this is a much-awaited book, advancing the theory of privacy as 'Contextual Integrity' presented in an often-quoted article back in 2004. Moreover, the idea of Contextual Integrity proves generally very seductive for its claim to offer a break from academic debates that have reached a 'dead end' and its ambition to return to the everyday problems of an 'information age society', promising a way out, or at least a decisional matrix. All these elements, and many more that will be discussed below, justify the acclaimed status of Nissenbaum's work. Nevertheless, when the book is read from a different point of view, both in terms of research interests and disciplinary posture, some elements of the 'Contextual Integrity theory' become less innovative and more debatable. For this reason, this review tries to present a concise but fair description of the theoretical framework developed by Nissenbaum, pinpointing the most interesting choices and insights. It then advances three remarks for the attention of researchers dealing with privacy (and data protection) in Europe. Finally, it attempts to raise a more substantial critical objection to the framework itself.
- Research Article
- Citations: 2
- DOI: 10.1007/s44206-023-00085-9
- Dec 1, 2023
- Digital Society
We aim to bring both digital pathology in general and computational pathology in particular within the scope of Helen Nissenbaum's theory of appropriate information transfer as contextual integrity. In Section 1, the main lines of the theory of contextual integrity are introduced, and reasons are given why it is not, properly speaking, a theory of privacy, but rather a theory of morally permissible information transfer in general. The theory is then applied to uses of digitised pathology images for (a) patient-by-patient analysis (Section 2); and (b) computational pathology (Sections 3 and 4). Although big data exercises involving personal data are sometimes seen by Nissenbaum and colleagues as particular threats to existing data-sharing norms and other social norms, we claim that patient-by-patient digital pathology is riskier, at least in the forms it has taken during the pandemic. Finally, we consider some risks in computational pathology that arise from the interaction between health institutions, particularly in the public sector, and commercial algorithm developers.
- Book Chapter
- Citations: 1
- DOI: 10.1007/978-3-030-82786-1_5
- Jan 1, 2022
This chapter addresses how we develop, revisit, and negotiate norms around privacy when confronted with new technologies. The chapter first examines Nissenbaum's (Washington Law Review 79(1):119–157, 2004) theory of privacy as contextual integrity, a framework that helps unpack how context-relevant norms of appropriateness and transmission can be challenged by new technologies. It then reviews how social norms develop as we build mental models of how a technology works during its diffusion process. The chapter concludes with suggestions for designers about approaches for thinking through the implications when a design may challenge a preexisting social norm, or where there is no socially agreed-upon norm. This includes careful reflection on whom challenges to current social norms may benefit and whom they may hurt.
- Research Article
- Citations: 4
- DOI: 10.1177/14614448231213267
- Nov 28, 2023
- New Media & Society
This study analyzes the meanings and technical mechanisms of privacy that leading advertising technology (adtech) companies are deploying under the banner of “privacy-preserving” adtech. We analyze this discourse by examining documents wherein Meta, Google, and Apple each propose to provide advertising attribution services—which aim to measure and optimize advertising effectiveness—while “solving” some of the privacy problems associated with online ad attribution. We find that these solutions define privacy primarily as anonymity, as limiting access to individuals’ information, and as the prevention of third-party tracking. We critique these proposals by drawing on the theory of privacy as contextual integrity. Overall, we argue that these attribution solutions not only fail to achieve meaningful privacy but also leverage privacy rhetoric to advance commercial interests.
- Research Article
- Citations: 55
- DOI: 10.5210/fm.v25i11.11095
- Oct 6, 2020
- First Monday
The COVID-19 global pandemic led governments, health agencies, and technology companies to work on solutions to minimize the spread of the disease. One such solution concerns contact-tracing apps whose utility is tied to widespread adoption. Using survey data collected a few weeks into lockdown measures in the United States, we explore Americans’ willingness to install a COVID-19 tracking app. Specifically, we evaluate how the distributor of such an app (e.g., government, health-protection agency, technology company) affects people’s willingness to adopt the tool. While we find that 67 percent of respondents are willing to install an app from at least one of the eight providers included, the factors that predict one’s willingness to adopt differ. Using Nissenbaum’s theory of privacy as contextual integrity, we explore differences in responses across distributors and discuss why some distributors may be viewed as less appropriate than others in the context of providing health-related apps during a global pandemic. We conclude the paper by providing policy recommendations for wide-scale data collection that minimizes the likelihood that such tools violate the norms of appropriate information flows.
- Conference Article
- DOI: 10.54941/ahfe1007067
- Jan 1, 2026
Large language models (LLMs) introduce new opportunities in residential care, including the potential to assist with care documentation. However, if introduced without careful reflection, such technologies pose challenges and potential harms to privacy and personal integrity. In this paper, we present a framework for the automated filtering of privacy-sensitive content from LLM-supported care documentation. Our framework is based on Nissenbaum's theory of privacy as contextual integrity. As an initial step, we present the generation of a synthetic dataset derived from real-world privacy-sensitive interactions between care workers and care recipients. We analyze the conversations by privacy category and show that both care recipients and care workers are affected. Our contributions include a methodology for generating privacy-preserving synthetic datasets and insights into the content requirements of a dataset for fine-tuning an LLM to detect privacy-sensitive segments. In addition, we show that value-sensitive design can lead to innovative approaches to creating technology that is safe, meaningful, and protective of important human values.
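The paper's filtering pipeline is not reproduced here, but the idea of detecting privacy-sensitive segments before a note is stored can be sketched as a simple rule-based pre-filter that a fine-tuned LLM would replace or augment. Everything below, the category names, the patterns, and the sample note, is a hypothetical illustration under that assumption, not the paper's own categories or code.

```python
import re

# Hypothetical privacy categories for care documentation; the paper derives
# its own categories from contextual integrity, which are not reproduced here.
SENSITIVE_PATTERNS = {
    "health_detail": re.compile(r"\b(diagnos\w+|medicat\w+|wound\w*)\b", re.I),
    "third_party":   re.compile(r"\b(daughter|son|neighbour|visitor)\b", re.I),
    "care_worker":   re.compile(r"\b(shift|colleague|staff member)\b", re.I),
}

def flag_sensitive_segments(text: str) -> list[tuple[str, str]]:
    """Return (sentence, category) pairs that a downstream fine-tuned model
    or a human reviewer should inspect before the note is stored."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for category, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(sentence):
                flagged.append((sentence, category))
                break  # one category per sentence is enough to flag it
    return flagged

note = ("Resident was calm today. Her daughter visited after lunch. "
        "New medication was administered at 14:00.")
for sentence, category in flag_sensitive_segments(note):
    print(f"[{category}] {sentence}")
```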
- Research Article
- DOI: 10.1111/bjet.13377
- Aug 29, 2023
- British Journal of Educational Technology
Postsecondary institutions have a legal responsibility to ensure that students have access to a safe learning environment. While institutions adopt policies and hire administrators to protect students from harm, many are underprepared to support students when these harmful incidents happen online. This is of increased concern now that online aggression is pervasive across universities worldwide. While faculty, administrators and students agree that online aggression is a significant issue and that institutions ought to provide prevention and response services, there is concern that these efforts might violate privacy norms. We used the theory of privacy as contextual integrity (CI) to explore the tensions that postsecondary students and staff perceive regarding student privacy when responding to incidents of online aggression. To do so, we conducted focus groups with undergraduate students and student affairs administrators from a Historically Black College and University (HBCU) in the Mid-Atlantic USA. Our analysis surfaced three considerations that inform students' and staff's decision to report an incident of online aggression: their closeness to the person making the post, their perception of the online post content as a real threat and their knowledge of an authority figure who could help resolve the situation. We used CI theory to explain how these considerations can inform institutional policy, practice and future research.

Practitioner notes

What is already known about this topic:
- Online aggression is a pervasive issue at postsecondary institutions worldwide that can contribute to psychological, academic and developmental issues.
- Postsecondary students and staff are unsure of how to respond to incidents of online aggression.
- There is a gap in policies and procedures for responding to online aggression at postsecondary institutions.

What this paper adds:
- A novel use of Nissenbaum's (2010) theory of contextual integrity to understand students' and staff's perceptions of privacy.
- Students' and staff's decisions to intervene or report an online aggression incident are determined by their relationship to the perpetrator, the severity of the social media post and their knowledge of who to tell on campus.
- Students and staff are reluctant to inform the police out of fear of violence against the perpetrator.

Implications for practice and/or policy:
- Raise awareness about responding to online aggression incidents.
- Implement online bystander intervention training programs to increase awareness and self-efficacy to intervene in unclear situations.
- Develop clear policies regarding online aggression, as well as a trustworthy procedure for how to respond.
- Book Chapter
- Citations: 1
- DOI: 10.1201/9781003278290-18
- Mar 16, 2022
Recent media revelations have demonstrated the extent of third-party tracking and monitoring online, much of it spurred by data aggregation, profiling, and selective targeting. The year 2010 was a big one for online privacy: reports of privacy gaffes, such as those associated with Google Buzz and Facebook's fickle privacy policies, graced the front pages of prominent news media. The chapter explores these present-day concerns about online privacy, but in order to understand and explain on-the-ground activities and the anxieties they stir, it identifies the principles, forces, and values behind them. It then lays out an alternative approach to addressing the problem of privacy online, rooted in the theory of privacy as contextual integrity.
- Research Article
- DOI: 10.1177/13548565251334483
- Apr 17, 2025
- Convergence: The International Journal of Research into New Media Technologies
This essay uses the theory of privacy as contextual integrity together with critical research on the role of media in democracies to critique platform surveillance and a related process that I call direct marketization. It focuses on the case of advertising attribution, a paradigm of audience and marketing measurement that attempts to determine advertising effectiveness by observing people as both media audiences and marketplace consumers. Advertising platforms have recently promised to implement ‘privacy-preserving’ methods of attribution measurement. The paper argues that these efforts to legitimize attribution make implicit claims about the values and purposes of media systems. It then introduces the concept of direct marketization to explain how these claims relate to shifts in the social and institutional norms of ad-supported media. The analysis exposes direct marketization and advertising attribution, which both fuel and depend on platform surveillance, as contradictory to the contextual integrity of democratic media.
- Research Article
- Citations: 3
- DOI: 10.1386/qsmpc.2.1.29_1
- Mar 1, 2017
- Queer Studies in Media & Popular Culture
In the 2010 book The Facebook Effect by David Kirkpatrick, Facebook founder Mark Zuckerberg made the claim that 'users have one identity'. Through a critical theoretical analysis of a series of case studies of people being outed by Facebook, this article argues that 'one identity' and Facebook's use of algorithms to drive profits are fundamentally incongruous with prevailing intersectional scholarship. The case studies articulate a theoretical framework that ties intersectional conceptions of gender and sexuality to social media and privacy. By aligning intersectional and privacy theories, the article argues that 'one identity' constitutes a violation of privacy norms as conceptualized by Helen Nissenbaum's framework of contextual integrity. The article concludes that Facebook is anathema to the privacy and real-life experiences of its users, which cannot fit into static categories and which change over time, undermining the potential for the performance of fluid and intersectional identities.