In mid-July 2023, officials from the Cyberspace Administration of China (CAC) held a press conference to provide an interpretation of the Interim Measures for the Administration of Generative Artificial Intelligence Services (hereinafter referred to as the Measures). The Measures, jointly approved by the CAC and six other ministerial departments, took effect on 15 August 2023. They are the first departmental rules in China that apply to the supply and use of generative AI technology. The Measures consist of five chapters:

  • General rules
  • Technology development and governance
  • Stipulation on services
  • Supervision, inspection and legal liabilities
  • Supplementary provisions

To support enforcement of the Measures, the CAC addressed questions on their background, application scope, major principles, and key definitions. It also covered the approach to fostering the sound development of generative AI, the regulations applicable to providers and users of generative AI, AI services and governance, and the mechanisms for complaints and reporting. For example, regarding the background of the Measures, the CAC emphasized their role in addressing the challenges and risks posed by recent advances in the technology.

The Measures, in general, bear similarities to the Provisions on the Administration of Deep Synthesis of Internet-based Information Services (2022). Both documents establish provisions for content generated via AI or related technology. The main distinction between the two is that the Measures primarily aim to guarantee the authenticity, accuracy, and objectivity of content generated by generative AI, whereas the Provisions focus mainly on ensuring that synthetic content is appropriately marked and recorded as required. To a certain extent, the release of these two departmental rules indicates that China’s efforts to supervise AI and AI-related technology focus primarily on generated content. The specific goal of this supervision is to prevent the public from being misled by AI-generated content.

Currently, in terms of standardization, two standard projects supporting the Measures are being developed, although they have not yet been officially initiated:

  • Security specification for Generative AI Manual Labeling (supporting Article 8 of the Measures)
  • Data security specification for generative AI pre-training and optimized training (supporting Article 7)

During the Standardization Week of SAC/TC 260 Information Security, the draft texts of the two standard projects were discussed and evaluated. More recently, SAC/TC 260 solicited public comments on the Cybersecurity Standard Practice Guide: Method for Identification for AI Generated Contents, a normative standardization document that also supports the Measures.