Abstract

Intermediaries today do much more than passively distribute user content and facilitate user interactions. They now exercise near-total control over users' online experience and over content moderation. Even though these service providers benefit from the same liability exemption regime as technical intermediaries (E-Commerce Directive, Art. 14), they have unique characteristics that must be addressed. Consequently, there is ongoing debate over whether platforms should be regulated more strictly. Under notice-and-take-down procedures, which increasingly rely on automated processing, platforms are required to remove illegal content, and they are equally encouraged to take proactive, automated measures to detect and remove it. Algorithmic decision-making helps platforms cope with the massive scale of content moderation. It might therefore seem that algorithmic decision-making is the most effective route to perfect enforcement. This, however, is an illusion. A first difficulty arises in deciding what, precisely, is illegal. Because platforms manage the removal of illegal content automatically, it is particularly challenging to verify that the law is being respected. Automated decision-making systems are opaque, and many scholars have shown that the main problem is the chilling effect of over-removal. Moreover, content removal is a task that, in many circumstances, should not be automated, as it depends on an appreciation of both context and the rule of law. To address this multi-faceted issue, I offer solutions to improve algorithmic accountability and to increase transparency around automated decision-making. Improvements may be made specifically by providing platform users with new rights, which in turn will provide stronger guarantees of judicial and non-judicial redress in the event of over-removal.
