Abstract

Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are developed to detect and moderate disinformation online. Such systems do not escape from ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audit for very large online platforms’ recommender systems and content moderation. While with this proposal, the Commission focusses on the regulation of content considered as problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web that is based on advertising revenues, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate to moderate disinformation content online, and even to detect such content, they may be more appropriate to counter the manipulation of the digital ecosystem.

Highlights

  • Manipulation of truth is a recurring phenomenon throughout history.1 Damnatio memoriae, namely the attempted erasure of people from history, is an example of purposive distortion of reality that was already practiced in Ancient Egypt

  • We observe that while artificial intelligence (AI) systems are inappropriate to moderate disinformation content online, and even to detect such content, they may be more appropriate to counter the manipulation of the digital ecosystem

  • We outline that the proposal allows the Commission to initiate the drawing up of crisis protocols “for addressing crisis situations strictly limited to extraordinary circumstances affecting public security or public health,”67 in order to “coordinate a rapid, collective and cross-border response in the online environment.”68 It specifies that “[e]xtraordinary circumstances may entail any unforeseeable event, such as earthquakes, hurricanes, pandemics and other serious cross-border threats to public health, war and acts of terrorism, where, for example, online platforms may be misused for the rapid spread of illegal content or disinformation or where the need arises for rapid dissemination of reliable information.”69


Summary

Introduction

Manipulation of truth is a recurring phenomenon throughout history. Damnatio memoriae, namely the attempted erasure of people from history, is an example of purposive distortion of reality that was already practiced in Ancient Egypt. Whereas new digital technology and social media have amplified the creation and spread of both mis- and disinformation, only disinformation has been considered by the EU institutions as a threat that must be tackled by legislative and technical means.7 This choice of focus has to do with the manipulative character of disinformation, along with the importance of protecting fundamental rights and freedoms, especially freedom of expression and information. AI techniques are generating new opportunities to create or manipulate text, image, audio, or video content. It is possible for anyone willing to deceive or mislead individuals to manipulate the truth in two effective ways: fake content can be passed off as real, and authentic information can be passed off as fake.

AI techniques present on the web boost the dissemination of disinformation
Ethical implications
AI Techniques As a Way to Tackle Disinformation Online
The EU Regulation of Disinformation
The Commission’s omissions regarding important issues
A more positive approach by the EU Parliament and the Council of the EU
Findings
Conclusion
