Abstract

Online service providers, and even governments, have increasingly relied on Artificial Intelligence ('AI') to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of our human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts.
