Abstract

It is a widely accepted notion around the world that online intermediaries are immune from civil or criminal liability for hosting illegal or harmful third-party content, provided they comply with legal obligations set out by law through measures such as takedown. With the advancement of the internet, however, their roles have expanded beyond mere intermediaries to encompass that of publishers. This invites confusion over an intermediary's true status and risks injustice through a ‘one size fits all’ inference. This was the scenario that the Malaysian Federal Court had to address in the MKini case in 2021, which established a new narrative on internet intermediary liability for Malaysia. The objective of this article is to investigate whether a publisher or content creator acting as an intermediary should be held liable when a third party posts illegal or harmful content on its platform. Through a comparative analysis of the MKini, Delfi and Bunt cases, we call into question the legal protection provided by the Malaysian Communications and Multimedia Act 1998 and the Content Code, which guarantee such immunity to their subjects. We propose the possibility of a publisher utilising artificial intelligence (AI) to mitigate its liability for illegal third-party publications on its platform.
