Abstract

The rapid progress of artificial intelligence (AI) has led to its integration into every stage of the media process, including information gathering, processing, and distribution. This integration raises the prospect of AI dominating the media industry, ushering in an era of “autonomous driving” within AI-driven media systems. Just as autonomous driving faces the ethical dilemma known as the “trolley problem” (TP), a comparable dilemma arises in AI-automated media. This study examines the emergence of this new TP in fully automated AI-driven media (FAAIM), recognizing the complexity of the problem given the nature of media content and its societal impact. To address this complexity, we propose adopting the theory of “meaningful human control,” originally developed to address responsibility in the governance of lethal autonomous weapons systems, as a framework for governing FAAIM. By ensuring that humans can be held responsible for the operations of FAAIM, this paper aims to proactively confront the ethical challenges arising from FAAIM, identify potential solutions, and guide future research in media ethics.
