On March 14, China released the Measures for the Labeling of Content Generated by Artificial Intelligence (hereinafter referred to as “the Measures”). The Measures were jointly developed and issued by the Cyberspace Administration of China (CAC) and the Ministry of Industry and Information Technology (MIIT) and will take effect on September 1, 2025.
Together with its Cybersecurity Law (2017), China's Provisions on the Administration of Deep Synthesis of Internet-based Information Services (2022) and Interim Measures for the Administration of Generative Artificial Intelligence Services (2023) established the country's governance framework for AI-generated content. A common feature of the three legal documents is their emphasis on a labeling system to help the public and network service providers distinguish between content generated by AI and by humans. The release of the Measures in 2025 responds to this call, offering detailed provisions that put labeling methods into real-life practice.
The Measures introduce a dual-labeling system of explicit and implicit labels for AI-generated content. Their primary targets are network information service providers whose products and services involve AI-generated content, and the Measures designate responsibilities for different types of providers to fulfill.
Generally, service providers must display visible marks on any AI-generated content and embed machine-readable metadata containing content attributes, provider identification, and unique content codes. Content distribution platforms bear specific verification responsibilities, including checking for proper labeling, adding appropriate warnings when redistributing content, and providing user tools for self-identification. The Measures also address online applications: application distribution platforms should verify whether an application offers AI-generated content or related services and examine whether it complies with the mandatory labeling requirements of a new national standard issued shortly before the Measures.
Shortly before the release of the Measures, on February 28, 2025, the mandatory national standard GB 45438-2025 Cybersecurity Technology – Labeling Method for Content Generated by Artificial Intelligence (hereinafter referred to as “the Labeling Method”) was published. The standard references GB 18030-2022 Information Technology – Chinese Coded Character Set, and it was proposed by, and falls under the jurisdiction of, the Central Cyberspace Affairs Commission Office.
The Labeling Method provides detailed definitions of explicit and implicit labels:
- Explicit labeling refers to identifiers added to AI-generated content or interactive interfaces, presented in forms such as text, audio, or graphics (including static graphics, videos, virtual and interactive scenes) that can be perceived by users. Its primary purpose is to notify the public that the content is generated or synthesized by AI.
- Implicit labeling refers to identifiers embedded within the file data of AI-generated content through technical measures, which are not easily noticeable to users. Its primary purpose is to record relevant information about the generated or synthesized content.
Moreover, the standard specifies the format, components, placement, color, and clarity requirements of explicit labels. It also explains how to embed implicit labels into a file's data and how to record security protection information, such as identifier integrity and content consistency. Visual examples are provided to help service providers understand the actual implementation of this new mandatory national standard.
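To illustrate the kind of machine-readable implicit label described above, the sketch below builds a metadata payload carrying a content attribute, provider identification, and a unique content code, plus a content hash for a consistency check. The field names and JSON format are illustrative assumptions for this article, not the actual schema defined in GB 45438-2025:

```python
import hashlib
import json

def build_implicit_label(content: bytes, provider_id: str, content_code: str) -> str:
    """Build a JSON implicit-label payload for a piece of AI-generated content.

    Field names are hypothetical; the real schema is set by GB 45438-2025.
    """
    label = {
        "AIGC": True,                    # content attribute: marks content as AI-generated
        "Provider": provider_id,         # service provider identification
        "ContentCode": content_code,     # unique code assigned to this content
        # Hash of the file data, allowing a later content-consistency check
        "ContentHash": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(label, separators=(",", ":"))

def verify_consistency(content: bytes, label_json: str) -> bool:
    """Check that the content still matches the hash recorded in its label."""
    label = json.loads(label_json)
    return label.get("ContentHash") == hashlib.sha256(content).hexdigest()
```

In practice such a payload would be embedded in the file's metadata (for example, an image metadata field) rather than stored separately; the verification step then lets a distribution platform detect content that was altered after labeling.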
To allow enterprises to fully understand and implement the labeling method, the Measures grant a six-month transition period. Once the Measures officially come into effect in September, reviewing labeling will be a key focus area for the relevant regulatory bodies. However, as China's first labeling regime of this kind, the Measures may evolve beyond merely differentiating content and may undergo further adjustments incorporating lessons learned during implementation. European stakeholders are advised to keep a close watch on these changes and provide feedback to SESEC during consultations.