Abstract

The increasing presence of AI-generated content in people's lives, and the importance of being able to navigate and distinguish such content effectively, are inherently questions of transparency – a notion our study examines by evaluating Art. 50 of the AI Act. This article is a call to action to take the interests of end users into account when specifying the AI Act's transparency requirements. It focuses on a specific use case – media organisations producing text with the help of generative AI. We argue that in its current form, Art. 50 leaves many uncertainties and risks doing too little to protect natural persons from manipulation or to empower them to take protective action. The article combines documentary and survey data analysis (based on a sample representative of the Dutch population) to propose concrete policy and regulatory recommendations on the operationalisation of the AI Act's transparency obligations. Its main objective is to answer the following question: how can the AI Act's transparency provisions applicable to digital news articles generated by AI be reconciled with news readers' perceptions of manipulation and empowerment?
