On September 14, 2024, the Cyberspace Administration of China (CAC) released the Draft Regulations on Labeling AI-Generated Content (referred to as the Labeling Regulations) for public consultation, along with a related mandatory national standard titled Cybersecurity Technology—Labeling Method for Content Generated by Artificial Intelligence (referred to as the Labeling Standard). These drafts are designed to further regulate the labeling of AI-generated content in order to protect national security, public interests, and the legitimate rights of citizens, organizations, and other entities.

Risks and Challenges in Governing AI-Generated Content

The rapid development and widespread application of generative AI technologies have boosted content creation efficiency but introduced new risks and challenges. As AI-generated content becomes increasingly realistic, it can be difficult for the public to distinguish between real and fabricated content, raising concerns about the spread of misinformation, deepfakes, and malicious applications. For instance, AI can create harmful content that damages individuals’ reputations or be used in scams involving AI-driven face-swapping technology. These issues not only infringe on citizens’ rights and disrupt social order but could also threaten national security. Effective governance of AI-generated content has therefore become an urgent priority.

In recent years, China’s Regulations on the Management of Deep Synthesis of Internet Information Services (hereinafter referred to as the Deep Synthesis Regulations) have already required providers of deep synthesis services to label the generated content. At the same time, the Interim Measures for the Management of Generative AI Services emphasize the labeling obligations of generative AI service providers. The Labeling Regulations and Labeling Standard further refine these requirements, aiming to address regulatory gaps in this emerging area.

Key Provisions of the Labeling Regulations and Standards

The central focus of the Labeling Regulations and Labeling Standard is to clarify the obligations for labeling AI-generated content, covering types of labels, labeling methods, and the roles and responsibilities of various stakeholders.

  1. Explicit and implicit labeling types: The Labeling Regulations differentiate between explicit and implicit labeling. Explicit labeling involves adding easily visible labels, such as text, sound, or graphics, on the interface where AI-generated content appears. This approach aims to inform users directly that the content is not human-created, thus reducing potential confusion. Implicit labeling involves embedding labels in the data file itself, which is less noticeable to users but can be extracted through technical means to ensure traceability.
  2. Detailed labeling guidelines for various content types: The Labeling Standard provides detailed guidelines for labeling different types of content, including text, images, audio, video, and interactive interfaces. These guidelines cover aspects such as form, placement, size, and color of labels, with examples to aid practical implementation. For example, in addition to the five prominent labeling application scenarios mentioned in the Deep Synthesis Regulations, the Labeling Standard also covers emerging AI applications such as text-to-image and text-to-video generation.
  3. Establishment of a comprehensive responsibility system: The Labeling Regulations lay out a full chain of accountability for labeling AI-generated content, specifying the responsibilities of various parties. Service providers must label AI-generated content both explicitly on the interface and implicitly as metadata within the content file. Users are required to declare and label AI-generated content when uploading it. Online content platforms are responsible for verifying that AI-generated content includes appropriate implicit labels, displaying prominent labels around such content, and reminding users to declare any AI-generated content upon publishing. Platforms distributing internet applications, such as app stores, are responsible for verifying that service providers include labeling functionality in their applications.
  4. Balancing regulation with innovation: The Labeling Regulations focus on applying only necessary regulatory obligations to service providers. In other words, it is mandatory for service providers to apply explicit labels *only* in scenarios where there is a high risk of public confusion. The Labeling Regulations avoid “one-size-fits-all” restrictions, allowing service providers to offer AI-generated content without explicit labeling at a user’s request, provided there is a clear user agreement outlining labeling responsibilities and content usage.
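The implicit-labeling mechanism described in item 1 can be sketched in a few lines of code: a machine-readable label is embedded in a file's metadata rather than displayed on screen, and is later extracted by technical means to trace the content's origin. The Python sketch below is purely illustrative; the field names (`AIGC`, `ContentProducer`, `ContentID`) are hypothetical placeholders, not the actual metadata schema defined in the Labeling Standard.

```python
# Illustrative sketch of implicit labeling: the label lives in the file's
# metadata, invisible to ordinary viewers but recoverable on inspection.
# All field names here are hypothetical, not the schema in the Labeling Standard.

def build_implicit_label(provider, content_id):
    """Construct a machine-readable label identifying AI-generated content."""
    return {
        "AIGC": {
            "Label": "AI-generated",        # flags the content as synthetic
            "ContentProducer": provider,    # service provider that generated it
            "ContentID": content_id,        # identifier enabling traceability
        }
    }

def embed_label(file_metadata, label):
    """Merge the implicit label into a file's existing metadata block."""
    merged = dict(file_metadata)
    merged.update(label)
    return merged

def extract_label(file_metadata):
    """Recover the implicit label, or None if the file carries no label."""
    return file_metadata.get("AIGC")

meta = embed_label({"title": "sunset.png"},
                   build_implicit_label("ExampleAI", "img-0001"))
print(extract_label(meta)["Label"])  # prints "AI-generated"
```

In practice a provider would write such a payload into a format-specific metadata container (for example, an image's embedded metadata fields) rather than a plain dictionary, and a dissemination platform would run the extraction step before republishing the content.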

These new measures reflect China’s approach to addressing the governance challenges associated with AI-generated content. By establishing comprehensive requirements, the Labeling Regulations and Labeling Standard aim to support a transparent and secure digital environment while accommodating the growth and innovation of AI technologies.

The public comment period for the Labeling Regulations concluded on October 14, 2024, while that for the Labeling Standard ended on November 13, 2024. According to feedback collected by the European Union Chamber of Commerce from European companies, certain areas of the Labeling Standard could benefit from further clarification, particularly the scope of the term “AI-generated and synthetic content.” For example, how should content that is partially AI-generated and partially original be labeled? Additionally, in the attribute section for implicit metadata identification, it would be helpful to provide guidelines for synthetic content providers and content dissemination platforms on how to categorize content as definite, possible, or suspected.