Abstract

Recent events, including the 2020 Presidential Election and the insurrection at the U.S. Capitol, have shown us that social media can be used for darker purposes. Hate speech, fake news, and content inciting violence have become the unfortunate norm when scrolling through one's newsfeed. Platforms have had to confront the problem of objectionable content. Should they leave it up? Should they take it down? How do they differentiate between what is acceptable and what is not? Are these decisions made consistently and accurately? The larger questions have become whether social media platforms are removing too little material or too much. This Article addresses the two major methods that social media platforms have used to moderate objectionable content and the flaws associated with each. External legal factors, including Section 230 and FOSTA-SESTA, are discussed as potential motivators for evolving social media moderation techniques. Additionally, this Article discusses the strengthening hold that app marketplaces run by Apple, Amazon, and Google have over social media platforms and how these relationships directly influence how platforms police content. Finally, alternative methods of moderation are proposed and discussed in relation to current moderation norms.
