Abstract

Generative Artificial Intelligence is widely used to create and innovate communication content, owing to its exceptional creative capability, operational efficiency, and applicability across diverse industries. However, its spread has also raised ethical concerns, such as unauthorized access to data, biased algorithmic decision-making, and the misuse of generated content for illegal purposes. To address the security risks associated with Generative Artificial Intelligence, we take ChatGPT as a case study and analyze it within the framework of Actor-Network Theory. We identify nine actors in total, spanning both human and non-human actors. We examine these actors and the processes of translation involved in the ethical controversies surrounding ChatGPT, and we analyze the key actors implicated in the emergence of these moral issues. The aim is to trace the origins of the ethical issues raised by Generative Artificial Intelligence and to offer a distinct perspective on its governance.
