On September 15, 2025, the National Technical Committee 260 on Cybersecurity (TC260) released the AI Safety Governance Framework 2.0 during the National Cybersecurity Week. Building on the 2024 edition (known as Framework 1.0), Framework 2.0 refines AI risk classification and introduces grading principles, further enhancing governance measures across the entire lifecycle of AI technologies.
- Most Significant Changes
Framework 2.0 retains the overall structure of Framework 1.0 (Governance Principles, Safety Risks, Technical Measures, and Safety Guidelines) but adds greater detail to each section. The most significant changes stem from two developments that reshape the related sections: a more refined safety risk classification and the introduction of graded governance principles for AI safety risks.
- Expanded Risk Classification
Framework 1.0 divides AI safety risks into inherent risks and application risks, covering issues such as algorithmic flaws, data security, and ethical or cognitive challenges arising from AI use. Building on this foundation, Framework 2.0 introduces a third category – derivative risks from AI applications – addressing the broader societal and environmental impacts of large-scale AI adoption, including job displacement, pressure on resources and energy, and ethical concerns such as bias amplification and threats to education and innovation.
- Introduction of AI Risk Graded Governance Principles
Framework 1.0 introduced only the concept of AI system grading. Framework 2.0 further introduces AI risk grading, evaluating AI risks based on application scenarios, level of intelligence, and application scale, thereby enabling more targeted safety measures. Five risk levels are identified: (1) Low risk, (2) Moderate risk, (3) Considerable risk, (4) Major risk, and (5) Extremely serious risk.
The introduction of graded governance principles is expected to propel the development of new national and sector standards in support of the new framework. TC260 is already inviting experts to participate in developing national standards such as Cybersecurity Technology – Classification and Grading Methods for Artificial Intelligence Application.
- Adopting Full Life-Cycle Approach
Another major change is the shift from stakeholder-oriented to lifecycle-based safety guidelines. Framework 1.0 provided safety guidelines tailored to different stakeholders, recommending the compliance considerations and corresponding measures each should take. Framework 2.0 revises the whole section, re-developing the safety guidelines from a full-lifecycle perspective of AI development and application and offering specific, detailed technical recommendations for each stage. The new approach embeds AI safety governance throughout the entire lifecycle of an AI system, minimizing governance gaps and aligning with the real-world process of AI product development and application.
Taken together, these updates demonstrate China’s intent to build a comprehensive AI safety governance ecosystem. Framework 2.0 deepens the regulatory architecture established in its predecessor.
Framework 2.0 is published as a bilingual Chinese-English document. Readers can click the link below to download the PDF, in which the English translation follows the Chinese text: CN-Bilingual-AI safety governance framework 2.0
Source of the announcement: https://www.cac.gov.cn/2025-09/15/c_1759653448369123.htm