Abstract

With the continuous development of artificial intelligence (AI), algorithmic discrimination and AI-generated discriminatory and misleading content (DMC) have produced many negative effects in cyberspace, such as racial and gender discrimination and misinformation. Growing societal concern over AI governance urgently calls for an effective mechanism to supervise and govern AI-generated DMC. In this article, discriminatory and misleading content in AIGC (artificial-intelligence-generated content) was extracted with a text classification model and then categorized using a naive Bayes classifier. The results showed that under the Global Digital Compact (GDC), countries differed in their degrees of discrimination related to race, gender, religion, and age. Racially discriminatory content accounted for the highest proportion in the United States, with a score of 0.15; Britain and France registered 0.06 and 0.07, respectively; and Germany merely 0.03. The proportions of racial discrimination (M1) and gender discrimination (M2) content in the science and technology industry were relatively low, at 0.05 and 0.08, respectively. Analyzing data within the GDC illuminates the disparities and trends in DMC generation across countries, cities, industries, and individual users, providing valuable references for subsequent research and problem-solving initiatives under the compact. Furthermore, the GDC plays a pivotal role in addressing issues related to AI-generated DMC, contributing significantly to the creation of a secure, reliable, and equitable cyberspace.
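The abstract does not specify the classifier's implementation details. As a rough illustration of the naive Bayes categorization step only, the sketch below trains a minimal multinomial naive Bayes classifier with Laplace smoothing; the class names, label set, and synthetic tokens are invented placeholders, not data or code from the study.

```python
import math
from collections import Counter


class NaiveBayesText:
    """Minimal multinomial naive Bayes for short texts (Laplace smoothing).

    Illustrative sketch only; not the authors' model.
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # smoothing strength

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.vocab = set()
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            tokens = text.lower().split()
            self.vocab.update(tokens)
            self.word_counts[label].update(tokens)
        self.totals = {c: sum(self.word_counts[c].values()) for c in self.classes}

    def predict(self, text):
        tokens = text.lower().split()
        v = len(self.vocab)
        n = sum(self.class_counts.values())
        best_label, best_lp = None, float("-inf")
        for c in self.classes:
            # log prior + sum of smoothed log likelihoods
            lp = math.log(self.class_counts[c] / n)
            for t in tokens:
                lp += math.log((self.word_counts[c][t] + self.alpha)
                               / (self.totals[c] + self.alpha * v))
            if lp > best_lp:
                best_label, best_lp = c, lp
        return best_label


# Hypothetical training data: "dmc" vs "neutral" over synthetic tokens.
clf = NaiveBayesText()
clf.fit(["dmc_a dmc_b", "dmc_a dmc_c", "ok_x ok_y"],
        ["dmc", "dmc", "neutral"])
print(clf.predict("dmc_b"))  # → dmc
```

In practice such per-class scores can be aggregated by country or industry to obtain proportion figures like those reported in the abstract.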
