Abstract

We live in a world of artificial speakers with real impact. So-called “bots” foment political strife, skew online discourse, and manipulate the marketplace. Concerns over bot speech have led prominent figures in the world of technology to call for regulations in response to the unique threats bots pose. Recently, legislators have begun to heed these calls, drafting laws that would require online bots to clearly indicate that they are not human. This work is the first to consider how efforts to regulate bots might run afoul of the First Amendment. At first blush, requiring a bot to self-disclose raises little in the way of free speech concerns—it does not censor speech as such, nor does it unmask the identity of the person behind the automated account. A deeper analysis, however, reveals several areas of First Amendment tension. Bot disclosure laws fit poorly with the state’s stated goals, risk unmasking anonymous speakers in the enforcement process, and create a scaffolding for censorship by private actors and other governments. Ultimately, bots represent a diverse and emerging medium of speech. Their use for mischief should not overshadow their novel capacity to inform, entertain, and critique. We conclude by urging society to proceed with caution in regulating bots, lest we inadvertently curtail a new, unfolding form of expression.
