This article examines the ethical issues raised by AI-generated content on sensitive topics such as school shootings. As AI technologies advance, there is a growing risk that such content may inadvertently reinforce harmful narratives, glorify acts of violence, or cause psychological harm to victims and their communities. The study addresses these concerns by evaluating existing ethical frameworks and identifying their limitations in handling such complex cases. A central goal of the research is to develop a refined set of ethical principles tailored to the risks associated with AI-generated content about school shootings. The paper reports experiments in which AI models, including ChatGPT, Claude, GigaChat, and YandexGPT, were used to generate and analyze content about school shootings. These experiments reveal significant challenges in ensuring that AI-generated texts do not reinforce harmful themes or cause distress. For example, while some models, such as GigaChat, declined to generate content on sensitive topics, others, such as ChatGPT, produced detailed texts that risked retraumatizing readers or glorifying perpetrators. The findings show that, although current frameworks address core principles such as transparency, accountability, and fairness, they often lack concrete guidance for handling sensitive subject matter. To close this gap, the proposed ethical framework incorporates specific content-generation criteria, stakeholder engagement, responsible dissemination practices, and ongoing research. The framework prioritizes the protection of vulnerable individuals and the prevention of psychological harm.