Artificial intelligence-generated content (AIGC) technology has produced disruptive results, representing a new trend in research and application and ushering in a new era of AI. The potential benefits of this technology are profound and diverse. However, generative tools also bring significant challenges, the most critical being that they may pollute social media with AI-generated misinformation and mislead the public. Traditional network security models have shown their limitations against today's complex network threats, so ensuring that generated content published on social media accurately reflects the true intentions of its creators has become particularly important. This paper proposes a security framework called "secToken". The framework adopts multi-level security and privacy protection measures, combining deep learning with network security techniques to preserve the integrity and confidentiality of users' data while guaranteeing the credibility of the published content. In addition, the framework introduces the concept of zero-trust security, integrates the ideas of OAuth 2.0, and provides advanced identity authentication, fine-grained access control, continuous identity verification, and related functions, thereby comprehensively safeguarding the reliability of content published on social media. The paper examines the main issues in managing generative content on social media and offers feasible solutions: applying the proposed framework can effectively ensure the credibility of generated content published on social media and supports the detection and auditing of that content. At the operational level, the framework extracts key-information summaries from users' multimodal AI-generated content and binds them to user identity information as a new token that identifies user uniqueness, effectively associating the user's identity, the current network state, and the content to be published on the platform. This approach significantly enhances system security and effectively prevents information pollution caused by generative artificial intelligence on social media platforms, offering a practical solution to the associated social, ethical, and network security challenges.
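The following is a minimal sketch of the token-binding step described above, not the paper's actual implementation: the function names, the use of a SHA-256 digest as the content summary, and the HMAC-based signing key are all assumptions made for illustration; in secToken the summary would come from multimodal key-information extraction and the token would be managed within the zero-trust / OAuth 2.0 flow.

```python
import hashlib
import hmac
import json
import time

# Hypothetical platform-side signing key; a real deployment would manage this
# through proper key infrastructure rather than a constant.
SERVER_KEY = b"platform-secret-key"


def summarize_content(content: bytes) -> str:
    """Stand-in for multimodal key-information extraction: here we simply
    take a SHA-256 digest of the generated content."""
    return hashlib.sha256(content).hexdigest()


def issue_sectoken(user_id: str, session_context: dict, content: bytes) -> str:
    """Bind the content summary, user identity, and current network/session
    state into one signed token (illustrative only)."""
    payload = {
        "sub": user_id,                         # user identity
        "ctx": session_context,                 # e.g. IP address, device, auth level
        "content_digest": summarize_content(content),
        "iat": int(time.time()),                # issue time
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return body.hex() + "." + signature


def verify_sectoken(token: str, content: bytes) -> bool:
    """Re-check the signature and confirm the token still matches the content
    being published, supporting later detection and auditing."""
    body_hex, signature = token.split(".")
    body = bytes.fromhex(body_hex)
    expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    payload = json.loads(body)
    return payload["content_digest"] == summarize_content(content)


if __name__ == "__main__":
    post = b"example AI-generated post"
    token = issue_sectoken("user-42", {"ip": "203.0.113.5"}, post)
    print(verify_sectoken(token, post))              # True
    print(verify_sectoken(token, b"tampered post"))  # False
```

In this sketch, any change to the generated content after the token is issued invalidates verification, which is the property the framework relies on to tie published content back to an authenticated user and session.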