Abstract

This chapter examines the phenomenon of internet users attempting to report and prevent online child sexual exploitation (CSE) and child sexual abuse material (CSAM) in the absence of adequate intervention by internet service providers, social media platforms, and government. The chapter discusses the history of online CSE, focusing on regulatory stances over time in which online risks to children have been cast as natural and inevitable by the hegemony of a “cyberlibertarian” ideology. We illustrate the success of this ideology, as well as its profound contradictions and ethical failures, by presenting key examples in which internet users have taken decisive action to prevent online CSE and promote the removal of CSAM. Rejecting simplistic characterizations of “vigilante justice,” we argue instead that the fact that internet users, often young people, report feeling forced to act against online CSE and CSAM undercuts libertarian claims that internet regulation is impossible, unworkable, and unwanted. Recent shifts toward a more progressive ethos of online harm minimization are promising; however, this ethos risks offering a new legitimizing ideology for online business models that will continue to put children at risk of abuse and exploitation. In conclusion, we suggest ways forward toward an internet built in the interests of children, rather than profit.

Keywords: Sexual abuse; Social media; Children; Sexual exploitation; Image-based abuse; Justice; Self-help

Citation: Salter, M. and Hanson, E. (2021), “‘I Need You All to Understand How Pervasive This Issue Is’: User Efforts to Regulate Child Sexual Offending on Social Media”, in Bailey, J., Flynn, A. and Henry, N. (Eds.), The Emerald International Handbook of Technology-Facilitated Violence and Abuse (Emerald Studies in Digital Crime, Technology and Social Harms), Emerald Publishing Limited, Bingley, pp. 729-748. https://doi.org/10.1108/978-1-83982-848-520211053

Publisher: Emerald Publishing Limited

Copyright © 2021 Michael Salter and Elly Hanson. Published by Emerald Publishing Limited.

License: This chapter is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of these chapters (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.

Introduction

The title of this chapter comes from a tweet made by Twitter user @AvriSapir (a pseudonym) on May 1, 2020, in which she describes her efforts to have videos of her own child sexual abuse removed from Twitter. In this chapter, we examine the phenomenon of internet users attempting to report and prevent online child sexual exploitation (CSE) and child sexual abuse material (CSAM) in the absence of adequate intervention by internet service providers, social media platforms, and government.
With reports of online CSAM to US authorities increasing by 50% per year for the past 20 years (Bursztein et al., 2019), it is now undeniable that the structure, administration, and regulation of online services and infrastructure have created a highly enabling environment for online CSE. We discuss the history of online CSE, focusing on regulatory stances over time in which online risks to children have been cast as natural and inevitable by the hegemony of a cyberlibertarian ideology that posits a factual and normative order in which it is not only impossible to regulate the internet but where such regulation is inherently authoritarian and unethical (Hanson, 2019). We illustrate the success of this ideology, as well as its profound contradictions and ethical failures, by presenting key examples in which internet users have taken decisive action to prevent online CSE and promote the removal of CSAM. Rejecting simplistic characterizations of “vigilante justice,” we argue instead that the fact that internet users, often young people, feel compelled to act against online CSE and CSAM (CBC News, 2018; Pauls & MacIntosh, 2020) undercuts libertarian claims that internet regulation is impossible, unworkable, and unwanted. The chapter argues that scholars of online abuse and policymakers need to pay closer attention to the ways in which exploitative modes of technological design and administration, together with government inaction, have been mystified by cyberlibertarianism and have contributed to the contemporary crisis of CSE and CSAM. Recent shifts toward a more progressive ethos of online harm minimization are promising; however, this ethos risks offering a new legitimizing ideology for online business models that will continue to put children at risk of abuse and exploitation. In conclusion, we suggest ways forward toward an internet built in the interests of children, rather than profit.

The History of Online Child Sexual Exploitation

While technology companies have been vocal in their commitment to child protection, the history of online CSE shows that industry has been largely unwilling to prioritize child safety over profits, a posture that has been accepted and, arguably, tacitly endorsed by governments. The authenticity of industry and government expressions of surprise at escalating reports of online CSE and CSAM is undermined by evidence that the use of the internet by pedophiles has been known at the highest levels since the early days of networked computing. In 1986, the US Attorney General noted that the trade in CSAM had shifted online: “recently a significant amount of the exchange has taken place by the use of computer networks through which users of child pornography let each other know about materials they desire or have available” (US Attorney General, 1986, p. 407).

Nonetheless, the approach of US legislators to internet regulation has been notoriously lax and oriented toward the growth and profitability of technology companies rather than child protection. This approach is exemplified in the passage of the Communications Decency Act (CDA) in 1996, a pivotal moment in the development of the modern internet. Section 230 of the CDA effectively immunized online service providers against legal liability for the content uploaded or provided by their users, paving the way for an internet whose consumer appeal and business model were based on the frictionless circulation of users' preferred content.
The alignment of US legislators with the financial interests of the technology sector has created a powerful bloc that has dominated internet governance for a quarter of a century (Carr, 2015). A pervasive cyberlibertarianism played a major role in legitimizing an antiregulation ethos within industry and government despite recognition of the likely costs to children. Hanson (2019) defines libertarianism as a “distinct political stance and moral psychology whose guiding principle is the freedom of individuals, in particular from interference by the state” (p. 4), where concern for individual liberty from control and regulation is prioritized over altruistic moral values and responsibility to others.

Libertarianism and new technologies emerged from the sociopolitical ferment of the 1960s and 1970s as strange but intimate bedfellows and played a formative role in the culture and practices of Silicon Valley. From the 1970s, influential American counterculturalists came to believe that networked technology was the ideal instrument for personal liberation and alternative community building (Turner, 2010). They drew in particular on the libertarian rather than the socialist and collectivist strains of the counterculture in ways that framed the developing internet as a transgressive, anarchist space: a new frontier full of possibilities and free of legal regulation. This characterization has been amplified in influential fictional and futurist portrayals of the internet as a parallel disembodied universe or “cyberspace” (Salter, 2017). As the internet and technology industries have taken center stage as global corporate behemoths, their marketing has enthusiastically adopted a countercultural style and promoted the view that their products are conducive to personal and collective freedoms. This view and its encompassing anarchist mystique have been prominent in media coverage and academic analysis of new technologies, promoting an idealized view of the lawlessness of the internet as both antiauthoritarian and radically democratic, despite it being anything but (Dahlberg, 2010). As the following section makes clear, cyberlibertarianism has mystified the monopolistic capture of online technologies by select corporate giants whose platforms closely regulate and manipulate user behavior (Zuboff, 2019, p. 109).

By the late 1990s, it was evident that CSAM and CSE were expanding online at an exponential rate. The prevalence and severity of this material was such that even skeptics such as sociologist Philip Jenkins, whose prior research had argued that community concern over child sexual abuse was characterized by “moral panic” and overreaction, would declare CSAM an escalating and intolerable crisis (Jenkins, 2001). In 2002, as investigations into and prosecutions of CSAM in the United States underscored the seriousness of the problem, then US Attorney General John Ashcroft held a meeting with the major technology companies calling them to action. In response, they formed an industry body called the Technology Coalition with the stated mission of “eradicating online child sexual exploitation” (Farid, 2017, para. 9). For five years, the Technology Coalition did not develop or deploy any effective technological solutions against the spread of CSAM. It served instead to signal the concerns of technology companies to government and the public in the absence of measurable action or impact.
It was during this period of industry abeyance that major social media platforms were established and became the dominant players online. On social media, profit maximization depends on recruiting as many users as possible to circulate content and engage with each other as much as possible; whether this content and engagement is abusive or not does not impact the bottom line. Accordingly, social media platforms have a poor track record of addressing the specific needs and vulnerabilities of children or inhibiting sexual misconduct and coercion. Social media platforms have sought to elide their responsibilities to users by describing their platforms as neutral, apolitical “facilities,” comparable to the water system or electricity grid (Salter, 2017). In this model, the risk of online CSE and the availability of CSAM are positioned as natural artifacts beyond the control of any company or government.

It was only in 2008 that Microsoft partnered with Professor Hany Farid to develop PhotoDNA, a technology that enables the automatic matching of images against a database of known CSAM (Farid, 2017). This technology was then provided free of charge to appropriate organizations in order to identify and remove known CSAM. PhotoDNA is widely recognized as a major turning point in the fight against CSAM, and as a determining factor in the dramatic volume of CSAM currently being reported to US authorities. PhotoDNA made it possible, for the first time, to screen all images uploaded by users to ensure they were not sharing known CSAM. Nonetheless, it took large companies such as Google as long as five years before they were willing to implement PhotoDNA on their services (Farid, 2017). Furthermore, there has been a lack of significant industry investment in the further development and deployment of the technology. For example, PhotoDNA cannot detect new images of CSAM, nor can it scan video files, a significant drawback given that reports of video CSAM are now more common than reports of images (Dance & Keller, 2020). While a new tool, PhotoDNA for Video, was developed around 2018, the extent of its use across the sector is unclear. In 2018, both Google and Facebook launched technology designed to detect new CSAM images. This is a positive step forward, although flagged images still require screening by a human moderator, which is necessarily an expensive proposition for platforms and comes with significant risk of harm and trauma to content moderation teams (Gillespie, 2018).

The cost of underinvestment in human moderation is exemplified in the history of Tumblr, the social media platform and blogging site. In November 2018, the Tumblr app was removed from major online stores, effectively preventing new users from joining the platform. It subsequently emerged that the app had been removed due to the presence of CSAM on the platform (BBC News, 2018). While Tumblr used PhotoDNA to prevent the uploading of known CSAM to the site, an audit of Tumblr content identified that the platform was being used to circulate new CSAM that was undetectable by PhotoDNA (Silverstein, 2018). Such material can only be identified through human moderation. In December 2018, Tumblr announced that it was banning all pornographic content from the site, using an algorithm trained to automatically detect and delete photos containing nudity (Liao, 2018). Users complained that the algorithm was producing a high level of false positives, with cartoons, dinosaur images, and even pictures of food wrongly flagged as “sensitive” content (Leskin, 2019).
Within a year of the ban, Tumblr's unique monthly visitors decreased by more than 20%, and the site was sold in August 2019 for reportedly less than US$3 million, compared to its US$1.1 billion price tag in 2013 (Leskin, 2019).

Some nongovernment organizations have been able to integrate PhotoDNA into highly effective software platforms that proactively detect CSAM and request removal, and other technological developments will make it easier to identify offenders and victims from images. However, these efforts are typically driven by civil society rather than industry. Meanwhile, the public condemnations of online abuse by industry figures too often segue into calls for more parental responsibility and internet safety programs for children, which effectively devolve responsibility for child safety to parents, schools, and children. Necessarily, these strategies are most effective at reaching the least at-risk children, that is, children with engaged parents who regularly attend school. Research has consistently shown that the children who are most at risk of online abuse are those who are already vulnerable offline due to disadvantage and prior experiences of abuse and neglect (Jones, Mitchell, & Finkelhor, 2013). Furthermore, a significant proportion of CSAM is, in fact, created by parents and other family members, an inconvenient fact that has been consistently sidestepped by industry and government authorities for decades despite the cumulative evidence (Itzin, 2001; Salter, 2013; Seto, Buckman, Dwyer, & Quayle, 2018). The focus of industry and other voices on “educating” children and parents neglects the children who are most likely to be abused, while deflecting attention from those features of internet services that put children at risk and occluding corporate responsibility for the harms facilitated by their online products. Furthermore, this selective focus on education, with its frequent emphasis on the importance of children “keeping themselves safe online,” works to blame those who have been victimized (or go on to be) and reinforces the impact and messages of the abuse (Hamilton-Giachritsis, Hanson, Whittle, & Beech, 2017, p. 33). Nonetheless, these strategies remain consistent with the industry's preferred cyberlibertarian approach to CSE, with its focus on individual risk and responsibility, even where those individuals are children. This approach is part of a broader rhetorical campaign aimed at the responsibilization of targets of many other forms of technology-facilitated violence and abuse (see Henry and Witt, this volume; Marganski and Melander, this volume).

User Regulation of Child Sexual Exploitation on Social Media Platforms

The inevitable result of 30 years of deferring responsibility for online CSAM and CSE has been that, in 2018–2019, US authorities received 70 million reports of suspected CSAM (Dance & Keller, 2020). As of 2018, there was a backlog of millions of suspected CSAM images and videos in need of assessment, while police reported being overwhelmed by the increase in cases and by the increased volume and severity of CSAM in each case (ECPAT, 2018); given reported increases in online CSE activity during the pandemic (INTERPOL, 2020), that backlog may well have expanded. While tech and social media companies accumulate billions in profits, CSAM victims and survivors report an almost total lack of access to affordable, effective mental health care or practical assistance with the ongoing impacts of abuse (C3P, 2017; Salter, 2013).
As the scale of the crisis has become undeniable, governments are now shifting to a more interventionist posture (see also Henry and Witt, this volume). For example, the UK government has begun developing a legislative regime around “online harms” that aims to hold technology companies directly responsible for social and individual impacts (HM Government, 2019). This move initiated the drafting and endorsement of a global set of “voluntary principles” to prevent online CSE, intended for industry implementation as a precursor to formal government regulation. In 2018, the United States enacted the Fight Online Sex Trafficking Act (FOSTA, 2017), which removed internet companies' Section 230 protections from liability if they are found to knowingly facilitate sex trafficking, and there is now a further bipartisan proposal to remove these protections from those deemed to be failing to act on online CSE and CSAM (Keller, 2020). Meanwhile, governments such as Australia's have been encouraged to move away from a “coregulation” model with industry in recognition of industry failure to comply with internet safety principles (Briggs, 2018).

This shift in the tone and approach of governments toward the technology industry is also evident in academic scholarship. Over the last 10 years, celebratory academic accounts of the new possibilities of the internet and globalization have given way to more pessimistic assessments of the impact of the internet on inequality, cultural homogenization, and democratic legitimacy. This so-called “techlash” is now interrogating the monopoly power of the technology industries and their role in violating consumer privacy and circulating (and arguably promoting) malicious, deceptive, and illegal content (Hemphill, 2019). The “techlash” has come to encompass the issue of online CSE as an urgent priority.

We are at a critical juncture where the cyberlibertarian posture of industry is being challenged rather than endorsed by governments, some technology companies are themselves asking to be regulated (Bloomberg, 2020), and the prevalence of online CSE and CSAM is such that it has become visible to everyday social media users. No longer the province of secret subcultures, CSAM is prevalent on social media sites, file sharing networks, and free adult pornography “streaming” services (Solon, 2020). The fact that these same sites and services do not have adequate measures in place to prevent CSAM circulation (notwithstanding statements of zero tolerance for CSAM) has never been more evident, leading to increasing expressions of concern about lack of accountability and transparency (Pauls & MacIntosh, 2020) and to a rapidly changing policy environment at both industry and government levels.

To illustrate the hypocrisies and contradictions of this historical moment, this section describes the efforts of social media and internet users to police CSAM and CSE on their platforms. In doing so, this section reveals two key facts. First, there are amoral consequences arising from cyberlibertarianism, in which corporate and government responsibility for the prevention of CSE and CSAM has been deferred to the point where internet users themselves are performing this basic civic function. Perhaps the most shocking illustration of this regulatory vacuum is that self-identified CSAM survivors, some only teenagers, are themselves active in seeking out and reporting images and video of their abuse, despite the psychological and legal risks this may entail (C3P, 2020a, pp. 4–5; CBC, 2018; Pauls & MacIntosh, 2020). Second, the section undercuts claims by some in the technology industry, certain privacy advocates, and some in government that the proactive detection of CSAM is difficult or impossible from a practical standpoint. The fact that self-organizing networks of social media users and researchers have (sometimes accidentally) identified and interrupted the tactics of online abusers, as indicated in the examples below, suggests that the problem of CSE and CSAM regulation has been at least partially one of political and corporate will.

There are multiple examples in which the efforts of users, rather than platforms, have been efficacious in identifying and publicizing the ways in which CSE is taking place on various services. Frequently, these efforts expose not only the presence and activities of child sexual abusers online but also the shortcomings of platform design that facilitate CSE and make reporting difficult. For example, YouTube is a popular online platform on which users can make and upload their own video content. Despite stated policies against child abuse and exploitation, users have uploaded videos to YouTube of children in revealing clothing, and of children restrained with ropes, sometimes crying or in distress. Some videos have remained online for years and accumulated millions of views, being removed only after media reporting and public outcry (Warzel, 2017).

In February 2019, YouTube user Matt Watson (who later faced criticism for his tactics and for content that he himself had posted (Alexander, 2019)) uploaded a viral video to YouTube documenting the way in which the YouTube “recommend” system, the machine learning process that automatically suggests and curates videos for users, was linking together self-created videos of young children engaged in activities such as dancing, doing gymnastics, and swimming (Orphanides, 2019). Once YouTube detected a user preferentially seeking out and watching content featuring young children, the “recommend” system would then generate a playlist of similar content. In doing so, the algorithm was proactively curating videos of scantily clad children for those users who particularly enjoyed such content, that is, pedophiles (Kaiser & Rauchfleisch, 2019; Orphanides, 2019). In the same month, WIRED reported finding similar videos of children on YouTube with high numbers of views, accompanied by comments that appeared to show pedophiles using YouTube's “comment” function to provide time stamps for parts of the videos where a child may inadvertently expose body parts, to post links to other provocative YouTube videos of children, or to exchange contact details with one another (Orphanides, 2019). WIRED reported that these videos had been monetized by YouTube, including preroll and banner advertisements (Orphanides, 2019). After all, the videos themselves were not illegal content; rather, it was the recontextualization of those videos by the YouTube recommend system that generated what Matt Watson called a “soft-core pedophile ring” on the platform (Alexander, 2019). YouTube's initial response to the scandal was to delete the accounts and channels of those leaving disturbing comments, report illegal conduct to police, turn off the “comment” function on many videos depicting minors, and delete inappropriate comments, while employing an automated comment classifier to detect inappropriate comments with greater speed and accuracy (Wakabayashi & Maheshwari, 2019; see also YouTube, 2019).
In June 2019, The New York Times reported that three researchers from Harvard's Berkman Klein Center for Internet and Society had “stumbled upon” a similar issue while doing research about another topic on YouTube (Fisher & Taub, 2019). In all of these cases, action was seemingly only undertaken in the aftermath of significant reputational damage and the advertiser boycotts that followed the reports (Fisher & Taub, 2019). A critical point here is that this disturbing situation is a direct, though unintended, result of business models like YouTube's, which seek to maximize profit by keeping users consuming online content in order to sell ads (Maack, 2019). In order to keep people consuming content, YouTube's algorithmic curation of user preference “promotes, recommends, and disseminates videos in a manner that appears to constantly up the stakes” (Tufekci, 2018, para. 6). Some have characterized this as a “rabbit hole effect” that can sometimes “lead viewers to incrementally more extreme videos or topics which are thought to hook them in” (Fisher & Taub, 2019). YouTube's 2020 Transparency Report provides an update on its policies relating to child safety (YouTube, 2020).

TikTok is a hugely popular music-based social media platform with a particular focus on teenage users. While TikTok states that users under the age of 13 are not permitted to use the platform, the app's age verification system can be bypassed by entering a false birthdate (Common Sense Media, 2021). TikTok provides short clips of popular music for users to video themselves miming and dancing to. As a result, the platform features videos of children performing to sexually suggestive or explicit songs. TikTok's default privacy settings have been criticized for being low, with many child users not adjusting these settings to make their accounts private or to disallow contact from strangers (Broderick, 2019). As has been pointed out, the platform's inherent incentives discourage stricter privacy settings, which would reduce the number of views of and interactions with user content (BBC News, 2020). Journalists have identified an active community of TikTok users who appear to be soliciting nude images from children, while minor users have complained about repeated solicitation for sexualized images (BBC News, 2020; Cox, 2018). As of 2018, some TikTok user profiles reportedly included open statements of interest in nude images and the exchange of sexual videos, including invitations to trade material via other apps (Cox, 2018). The presence of sex offenders on the app is further evidenced by reported instances of sexual comments left by men on videos of children (BBC News, 2020; Broderick, 2019). In response, networks of young users have been collecting the usernames of those they accuse of sexual misconduct and sharing that information across social media platforms with the aim of shaming offenders and promoting safety on TikTok (Broderick, 2019). Concerned TikTok users have set up social media accounts that focus specifically on “creepy” TikTok interactions, and specific alleged offenders on TikTok have been widely discussed by users on various forums, alongside reports of lax or no response from TikTok to user reports and concerns (BBC News, 2020; Broderick, 2019).
The fact that TikTok users are resorting to the self-policing of pedophile activity on the platform raises significant concern about the level of proactive regulation and monitoring of users who seek to engage in the sexual exploitation of children; however, at present, social media platforms are not obliged to publicly report their child protection standards and processes. In 2021, following considerable public pressure, TikTok amended its policies relating to users under 18, including a change so that accounts created by users aged 13–15 are automatically set to “private” by default (Peters, 2021).

In 2020, TikTok removed an Australian account that was purporting to “hunt” pedophiles on social media by posing as children and luring men into meetings, which were then filmed (Taylor, 2020). This account was part of a broader pattern of online vigilantes who seek to entrap child sexual abusers by posing as children on social media. “Pedophile hunting” refers to the proactive luring and entrapment of suspected child abusers using social media (Hadjimatheou, 2019). This phenomenon has been met with a mixed reception. Australian police have urged people with concerns about online abuse to report them to law enforcement (Taylor, 2020), and so have UK police; nonetheless, research by the BBC found that over half of UK prosecutions for grooming in 2018 drew on evidence gathered by online vigilante groups (BBC, 2019). Previous dismissive accounts of online vigilantism have given way to more nuanced analyses of civilian efforts to prevent internet sexual offending, which situate so-called “pedophile hunting” within an increase in “citizen-led, digitally mediated security initiatives” operating alongside, and often with the cooperation, if not endorsement, of state police and the private sector (Hadjimatheou, 2019, p. 2).

In the cases described above, social media users are not seeking out sex offenders; they are not “vigilantes” in any meaningful sense. Instead, they are users who are reacting to the ubiquity of sexual misconduct and offending on popularly used platforms, and they are taking action in an attempt to improve their own and others' safety. Unlike “vigilante” groups, who frequently seek police inter…
