Abstract
Artificial intelligence technology is used to filter the visual content displayed on digital display platforms, enhancing their competitiveness and organizing their content. Yet it also carries risks, including the possibility of direct harm to users of online visual content display platforms. The use of artificial intelligence for content filtering on these platforms raises legal questions about the liability framework applicable when such filtering causes damage, because established legal rules governing artificial intelligence are either absent or still developing, while the user base of these platforms continues to expand. The study proposes a liability system that balances the interests of the owners, operators, or developers of these platforms against those of their users, under which the liability of the stronger party arises as soon as the damage occurs. This form of liability is better suited to the circumstances surrounding the use of artificial intelligence tools in filtering visual content on digital display platforms over the Internet.