Abstract

Over the past decade, social media has gained an ever more pervasive presence in our lives. But the social connection, real-time dissemination of information, and creative outlet it provides come at a cost that is invisible to most users: to increase site value, many platforms are designed to addict users and trap them in a cycle of dependence that can ravage their mental health and well-being. One of the most effective ways for platforms to maximize their value is through recommendation systems: sophisticated software that uses artificial intelligence to learn about the user and predict what kinds of content will keep them scrolling, clicking, and liking. This Comment sketches out a legal solution for a subset of the harms caused by these algorithms, arguing that product liability concepts should extend to social media platforms, which have achieved a nearly impenetrable form of immunity under § 230 of the Communications Decency Act (CDA). The first Part of this Comment provides an overview of relevant concepts, including the recommendation engines that drive social media, the mental health effects of social media use, and the origins and purpose of the CDA. Next, this Comment demonstrates that social media platforms qualify as products for the purposes of strict product liability and walks the reader through a hypothetical application of the risk-utility test. Finally, this Comment considers § 230 of the CDA and offers two avenues for circumventing its broad grant of immunity. Instead of joining the extensive body of literature lamenting the woes of the CDA, this Comment proposes an affirmative solution that enables victims to recover while holding the multi-billion-dollar social media industry to a higher standard and incentivizing greater caution in its development process.
