Abstract

Moderating terrorist live-streaming presents legal and technical trade-offs. The immediacy of live-streamed content requires a timely assessment of the content's compliance with community guidelines and even more expeditious restriction or removal where illegal content is portrayed. Social media companies have relied heavily on technology to moderate content. The tools most frequently used to screen and filter content and to ensure compliance with community standards and regulations include hashing technology, video fingerprinting, natural language processing, and metadata analysis. Terrorist content online presents nuances that make the task of removing it particularly challenging, especially when companies must rely on machine-learning content moderation. We identify three trade-offs in regulating the moderation of live-streamed content: technological neutrality v specificity, explainability v adversarial machine learning, and accuracy v time-efficiency. For each trade-off, we analyse the considerations lawmakers and regulators must weigh when balancing competing interests.
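To make the first of the screening tools named above concrete, the sketch below illustrates hash-based content matching in its simplest, exact-match form. It is an illustrative assumption rather than any platform's actual method: the hash database and segment names are hypothetical, and production systems typically use perceptual hashes (which tolerate re-encoding and cropping) rather than the SHA-256 digests over raw bytes shown here.

```python
# Minimal sketch of exact-match hashing, the simplest form of the
# "hashing technology" discussed in the abstract. Assumptions: a
# hypothetical in-memory set of digests for content already judged
# violating, and SHA-256 over raw bytes. Perceptual hashing, which real
# hash-sharing schemes tend to favour, is NOT implemented here.
import hashlib

# Hypothetical database of digests for previously flagged content.
KNOWN_VIOLATING_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_known_content(chunk: bytes) -> bool:
    """Return True if this byte chunk exactly matches previously flagged content."""
    digest = hashlib.sha256(chunk).hexdigest()
    return digest in KNOWN_VIOLATING_HASHES

# Example: screen successive segments of an incoming live stream.
for segment in (b"frame-bytes-1", b"frame-bytes-2"):
    if matches_known_content(segment):
        print("segment matches hash database: escalate for restriction or removal")
```

Exact-match hashing of this kind illustrates the accuracy v time-efficiency trade-off: it is fast and yields no false positives, but a single re-encoded byte defeats it, which is why live-stream moderation cannot rely on it alone.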
