Abstract

Intermediaries enjoy a ‘safe harbour’ from civil or criminal liability when they host illegal and harmful third-party online content, subject to the requirements of national laws. The line between immunity and liability becomes hazy when intermediaries appear to assume the role of content creators and thus risk being characterised as publishers. This paper analyses whether such liability may be diminished if intermediaries adopt an artificial intelligence-based content moderation system. Through comparative case analyses of Mkini, Delfi, Bunt and Godfrey, the paper questions the relevance of the legal defence granted under the Communications and Multimedia Act 1998 and the Content Code. It examines the Federal Court’s finding of liability in Mkini and asks whether that decision signals the right way forward, or whether it risks disregarding the fundamental right to express opinions on matters of public interest. Despite the advances in artificial intelligence-based content moderation, it remains to be seen whether algorithms can reliably contain illegal and harmful content.