The rapid development of generative artificial intelligence (AI) has attracted global attention and posed challenges to existing data governance frameworks. The increased technical complexity and expanded scale of data usage not only make AI more difficult to regulate but also strain the current legal system. Taking ChatGPT's training data and working principles as a starting point, this article examines the specific privacy risks, data leakage risks, and personal data risks posed by generative AI. It also analyzes the latest practices in privacy and personal data protection in China. The article finds that although China's governance of privacy and personal data protection integrates macro- and micro-level measures and combines private and public law, shortcomings remain in the legal system. Given that the current personal data protection system, centered on individual control, is ill-suited to the data processing modes of generative AI, and that private law alone is insufficient to safeguard data privacy, urgent institutional innovation is needed to achieve the objective of "trustworthy AI."