Generative AI, with its stronger responsiveness and emergent abilities, has triggered a global boom while facing challenges such as data compliance risks during pretraining and the risk of generating false information, raising concerns among regulatory authorities worldwide. The European Union, the United States, the United Kingdom, and other countries and regions are gradually establishing risk-based, scenario-based, and outcome-based governance models for generative AI. China recently introduced new regulations for the management of generative AI, which adopt a governance model centered on generative AI service providers. This suggests that China is continuing the principle of primary responsibility in Internet governance, which encompasses legal responsibility, contractual obligations, and ethical responsibility. However, a governance model based on primary responsibility emphasizes the accountability of generative AI service providers while imposing relatively limited regulation on other important actors, such as users and large-scale dissemination platforms, which may hinder the achievement of China's regulatory goals for the AI industry. In comparison, the Meta-Regulation model could be an ideal alternative for China. As a classic theory explaining the public-private relationship, Meta-Regulation aligns with the governance requirements of generative AI. Based on Meta-Regulation theory, the governance of generative AI in China should move towards emphasizing safety, transparency, collaborative governance, and accountability. Accordingly, it is necessary to bring users and large-scale dissemination platforms within the regulatory scope and to establish overarching governance objectives that ensure a reasonable distribution of duties among stakeholders, with regulatory authorities assuming ultimate oversight responsibility and technical coordination. At the level of specific measures, governance can be integrated across the three stages of model development, usage, and content dissemination. In the model development stage, generative AI providers should bear specific transparency obligations; in the usage stage, a self-regulatory system centered on platform autonomy should be constructed; and in the content dissemination stage, the proactive notification obligations of dissemination platforms should be clearly defined. In addition, technical interoperability requirements should be enforced, thereby promoting the orderly development of generative AI applications.