On 2 April 2026, the Ministry of Science and Technology (MOST) and the Ministry of Industry and Information Technology (MIIT), together with eight other government departments (the full list of issuing authorities is annexed at the end of this article), jointly issued the Measures for the Administration of Review and Services of Artificial Intelligence Technology Ethics (Trial) (hereafter "the measures"). MOST leads the overall coordination of science and technology governance, while MIIT and the other authorities are responsible for the implementation and supervision of AI-related ethics governance.
The measures establish a dedicated ethics governance framework for all artificial intelligence (AI) activities, covering a broad spectrum of stakeholders. The framework adopts a human-centric, lifecycle-based approach and adheres to the principles of fairness and non-discrimination, openness and transparency, trustworthiness and controllability, and privacy and data protection. It also highlights the need to further develop a supporting standards system.
Review Scope and Responsible Entities
The measures apply to AI-related activities conducted within China, such as scientific research and technological development, that may pose ethical risks to human dignity, public order, life and health, the ecological environment, or sustainable development. The responsible entities that must comply with the ethics review requirements therefore include universities, research institutes, healthcare institutions, enterprises, and others. These entities bear primary responsibility for ensuring compliance with ethics review requirements. They are required to establish internal AI ethics review committees composed of multidisciplinary experts (e.g. in AI technologies, applications, ethics, and law), typically with no fewer than five members. Where such internal capacity is lacking, entities may entrust qualified third-party service centers to carry out ethics review activities.
Review Mechanisms
The ethics review committees or authorized service centers are responsible for examining project applications, assessing ethical risks, and issuing review decisions. The review process generally includes application submission, acceptance, ethical assessment, decision-making, and post-approval monitoring. Approved projects remain subject to follow-up reviews, typically at intervals not exceeding 12 months.
A key feature of the measures is the introduction of a risk-based approach. Certain high-risk AI activities are subject to an extra layer of expert review. These include systems that significantly influence human behavior, psychological states, or health; AI applications capable of shaping public opinion or enabling social mobilization; and highly autonomous decision-making systems deployed in safety-critical scenarios. For such cases, a second-level review organized by the competent authorities is required before a final decision is made.
While China’s high-risk AI categories conceptually resemble the EU’s risk-based approach under the AI Act, they do not constitute a formal classification system. Instead, they function as a procedural trigger for enhanced ethics review, with broader and more principle-based definitions.
Position within Current Regulatory Frameworks
In October 2025, the final amendment to China’s Cybersecurity Law was passed and officially promulgated. The amendment introduces new provisions on artificial intelligence, acknowledging AI’s key role in developing more advanced cybersecurity technology while holding a safety bottom line and preventing unethical uses of AI. This addition also calls for a more specific, operational mechanism to realize the ethical use of AI.
The measures are designed to complement, rather than duplicate, existing AI governance mechanisms. Where AI systems are already subject to regulatory requirements (such as algorithm registration, deep synthesis management, or generative AI service requirements) and compliance with ethical requirements is embedded in those processes, the additional expert review may be exempted. This provision aims to reduce administrative burdens and improve regulatory efficiency.
The measures do not introduce a standalone penalty structure. Instead, violations may be addressed under existing laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law.
Role of Standardization
The measures place notable emphasis on the development of an AI ethics standards system (Article 4). They encourage the formulation of international, national, sector-specific, and association standards, as well as the establishment of platforms for international standardization cooperation.
In parallel, the measures call for strengthening supporting services such as testing, evaluation, certification, and advisory services. These are intended to enhance the capacity of third-party service providers and support responsible entities in meeting ethics review requirements, particularly small and medium-sized enterprises.
Conclusion
The measures represent a further step in operationalizing China’s AI governance framework by embedding ethics review into the lifecycle of AI development and deployment. Through a risk-based approach, clearer institutional responsibilities, and alignment with existing regulatory mechanisms, they contribute to a more integrated and enforceable system. The particular attention paid to standardization implies that the measures will operate on the basis of standards: a rapid proliferation of AI ethics standards can be anticipated, along with more Chinese voices on this topic in international standards-setting.
For European stakeholders, ethics compliance has long been central to this topic, and it is becoming an increasingly important component of market access in China, particularly for higher-risk AI applications. At the same time, developments in standardization and third-party services may offer opportunities for engagement. Close monitoring of implementation and alignment with China-specific requirements will be essential. SESEC will keep tracking the implementation of the measures and provide timely updates.
The departments that jointly issued the measures:
- Ministry of Industry and Information Technology (MIIT)
- National Development and Reform Commission (NDRC)
- Ministry of Education (MOE)
- Ministry of Science and Technology (MOST)
- Ministry of Agriculture and Rural Affairs (MARA)
- National Health Commission (NHC)
- The People’s Bank of China
- Cyberspace Administration of China (CAC)
- Chinese Academy of Sciences (CAS)
- China Association for Science and Technology (CAST)
Source: https://wap.miit.gov.cn/zwgk/zcwj/wjfb/tz/art/2026/art_c5039010f5d24e1593152a9355f9c51c.html