Abstract

Social Networking Sites (SNSs) are commonly used to communicate and connect with people across physical boundaries all over the world. Beyond that, SNSs have become public platforms that are increasingly used for extensive self-disclosure. Disclosing the self on the Internet can entail beneficial outcomes such as appreciation or social support. Strikingly, self-disclosure on SNSs can also cause privacy risks and negative outcomes that affect different dimensions of people’s privacy. From an interdisciplinary perspective, this work addresses the threat to people’s privacy in the ubiquitous and heterogeneous online world. Specifically, this dissertation examines possibilities of empowering users with regard to online privacy protection through technical privacy interventions that communicate current privacy risks to users. In doing so, users’ privacy needs, personal characteristics, and situational motivations to disclose or withdraw the self are taken into account. Taken together, this work investigated the impact of technical privacy interventions on users’ actual privacy behavior, considering intrapersonal factors, by means of four empirical studies. Study 1 qualitatively explored users’ self-disclosure and self-withdrawal behavior on SNSs as well as their individual needs and requirements for technical privacy interventions. It revealed a paradoxical relation between users’ desire for privacy and their mistrust in technical privacy interventions. In sum, Study 1 served as a fruitful basis for the following studies, which further investigated the qualitative findings regarding users’ privacy behavior and needs. Study 2 quantitatively assessed users’ attitudes toward an opting-out measure (super-logoff, i.e., self-withdrawal), concrete motives for opting out (which were revealed to be avoidance of pressure, protection from personal attacks, and avoidance of distraction), and the behavioral intention to opt out. The data demonstrated positive relations between the intention to opt out and, in each case, the corresponding attitudes, intentions, amount of self-disclosure, privacy concerns, and impression management motivation. In an experimental study (Study 3), users’ actual privacy behavior was investigated within a non-artificial SNS environment after exposure to persuasive privacy prompts in either a consensual or an authoritarian style of communication (with a varying degree of information provided within the prompts). The presence of persuasive privacy prompts was related to participants’ data parsimony. Persuasive interventions in a consensual style were more effective when less (compared to more) information was provided within the prompt, whereas the impact of interventions in an authoritarian style did not differ between high and low amounts of information. Study 4 provided further evidence for the findings of Study 3 by showing that an improved persuasive privacy intervention in a consensual style, with a moderate amount of information and dynamic adaptation to the current privacy level (i.e., a change in the color of the privacy intervention depending on the amount of disclosed information), was positively related to information withdrawal in an SNS environment. Study 4 further demonstrated that, for privacy-related decision-making (i.e., the privacy calculus), the anticipated severity of a negative consequence of disclosing the self is a more decisive factor than the likelihood of its occurrence and the anticipated benefits of self-disclosure.
In both Study 3 and Study 4, privacy behavior itself was influenced by specific intrapersonal factors, whereas the impact of the privacy intervention was not influenced by individual characteristics. Overall, the findings partly contradict prior research but provide valuable practical implications, indicating that technical privacy interventions for online environments should focus on risk communication by conveying basic information about potential consequences of self-disclosure in a consensual style of communication. This dissertation contributes to the research field of online privacy by providing actual behavioral data in response to technical privacy interventions that were designed along user requirements (derived from Study 1) and further investigated quantitatively with respect to intrapersonal factors. In addition, it offers insights into the black box of the privacy calculus (Culnan & Armstrong, 1999), stressing the relevance of the severity of negative outcomes related to self-disclosure. The findings of the four empirical studies are discussed by drawing on the theory of planned behavior (Ajzen, 1991), protection motivation theory (Rogers, 1975), and the privacy calculus (Culnan & Armstrong, 1999). In sum, this work reflects on the promising opportunities of utilizing technical measures to protect users’ individual online privacy, but also on the challenges with regard to maintaining users’ autonomy and self-determined, yet privacy-aware, behavior.
