On 27 December 2025, the Cyberspace Administration of China (CAC) launched a Call for Comments on the Interim Measures for Administration of Interactive Services of Human-Like AI (hereinafter referred to as the Interim Measures). The comment period will remain open until 26 January 2026.

The Interim Measures aim to strike a balance between encouraging technological innovation and preventing potential risks, ensuring that the development of AI always serves human well-being, rather than becoming a tool that alienates human emotions or distorts human cognition.

The Interim Measures are formulated in accordance with the principle of people-centered development and aim to regulate providers of Human-Like Artificial Intelligence Services.

As defined in the draft, Human-Like AI service providers are organizations or individuals offering anthropomorphic interactive services based on artificial intelligence technologies. Specifically, any entity that develops and makes available to the public in China – via text, images, audio, or video – products or services capable of providing emotional support, digital companionship, or other forms of human-like interpersonal interaction shall be subject to these regulations.

The draft Interim Measures largely build upon China’s existing governance framework, and many of the mechanisms they set out are consistent with current laws and regulations.

  • Algorithm filing and transparency obligations

The Interim Measures extend the requirements set out in the Provisions on the Administration of Algorithmic Recommendations for Network Information Services (2022). They mandate that service providers file their algorithms for record and submit to an annual written review of assessment reports and audits conducted by provincial-level cyberspace administrations. Additionally, the Interim Measures require application distribution platforms (such as internet app stores) to fulfill their safety management responsibilities, including verifying service providers’ safety assessments and filing status.

  • Content security and labeling obligations

The Interim Measures follow the practices established in the Provisions on the Administration of Deep Synthesis of Internet-based Information Services (2022) and the Interim Measures for the Administration of Generative Artificial Intelligence Services (2023). They require providers to clearly inform users that they are interacting with artificial intelligence rather than a natural person. When users exhibit signs of excessive reliance or addiction, or during their initial use or re-login, providers must dynamically remind users – through measures such as pop-up notifications – that the interaction is generated by artificial intelligence. At the same time, service providers bear primary responsibility for content security: they must implement safety measures throughout the design and operation phases, conduct safety assessments, and take appropriate actions (including restricting, suspending, or terminating services) when significant safety risks arise. Such incidents must also be reported to the relevant authorities.
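For illustration only, the disclosure reminders described above could be driven by trigger logic along the following lines. This is a minimal TypeScript sketch: the trigger names, the message-count heuristic for detecting “excessive reliance,” and the threshold value are assumptions made for the example, not figures taken from the draft.

```typescript
// Hypothetical sketch of the AI-disclosure reminders described above.
// Trigger names and the dependency heuristic are illustrative assumptions,
// not requirements quoted from the draft Interim Measures.

type ReminderTrigger = "first_use" | "re_login" | "dependency_signal";

interface SessionContext {
  isFirstUse: boolean;
  justLoggedIn: boolean;
  dailyMessageCount: number; // crude proxy for "excessive reliance"
}

const DEPENDENCY_MESSAGE_THRESHOLD = 500; // assumed value, not from the draft

function requiredReminders(ctx: SessionContext): ReminderTrigger[] {
  const triggers: ReminderTrigger[] = [];
  if (ctx.isFirstUse) triggers.push("first_use");
  if (ctx.justLoggedIn) triggers.push("re_login");
  if (ctx.dailyMessageCount > DEPENDENCY_MESSAGE_THRESHOLD) {
    triggers.push("dependency_signal");
  }
  return triggers;
}

function showPopup(trigger: ReminderTrigger): void {
  // A real product would render a modal pop-up; this sketch just logs.
  console.log(
    `[${trigger}] Reminder: this conversation is generated by artificial ` +
      `intelligence, not a natural person.`
  );
}

// Example: a returning user logging back in triggers the re-login reminder.
for (const t of requiredReminders({
  isFirstUse: false,
  justLoggedIn: true,
  dailyMessageCount: 42,
})) {
  showPopup(t);
}
```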

  • Data security and personal information protection

The Interim Measures strictly adhere to the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law. With respect to data security, providers are required to adopt measures such as data encryption, security audits, and access controls to protect user interaction data during service provision, and to allow users to delete their interaction data. Regarding personal information, providers are prohibited from using user interaction data or sensitive personal information for model training.
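As a rough sketch of the user deletion right mentioned above, a provider-side deletion path might look like the following. The InteractionStore and AuditLog interfaces are invented for illustration: the draft specifies the obligation, not any particular implementation.

```typescript
// Hypothetical sketch of user-initiated deletion of interaction data.
// The storage and audit interfaces below are illustrative assumptions;
// the draft only requires that deletion be possible and data be protected.

interface InteractionStore {
  deleteByUser(userId: string): Promise<number>; // returns records removed
}

interface AuditLog {
  record(event: { action: string; userId: string; at: Date }): void;
}

async function deleteInteractionData(
  userId: string,
  store: InteractionStore,
  audit: AuditLog
): Promise<void> {
  const removed = await store.deleteByUser(userId);
  // Security audits are among the measures the draft names explicitly,
  // so the deletion itself is recorded as an auditable event.
  audit.record({
    action: `deleted ${removed} interaction records`,
    userId,
    at: new Date(),
  });
}
```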

Notably, in response to the unique risks associated with human-like interactive services, the Interim Measures also introduce several innovative governance mechanisms tailored to those risks.

  • Human-Like Interaction Labeling Mechanism

The mechanism requires service providers to continuously display labels such as “This service is powered by artificial intelligence and does not possess human emotions or consciousness” in prominent locations, reminding users that they are interacting with artificial intelligence rather than a natural person. This requirement resembles those of the Interim Measures for the Administration of Generative Artificial Intelligence Services. It aims to address user cognitive confusion at its root and help users establish appropriate psychological expectations.

  • User Psychological Protection Mechanism

To mitigate risks related to emotional dependency and mental health, the Interim Measures require service providers to establish mechanisms to protect users’ psychological well-being, such as anti-addiction reminders and intervention in prolonged or high-intensity interactions. If a user continuously uses the service for more than two hours, dynamic reminders, such as pop-up notifications, must be issued. Additionally, providers must establish psychological crisis intervention mechanisms to identify and guide users who express extreme emotions or behaviors. In such cases, human agents should take over the conversation and provide access to professional assistance channels.
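A minimal sketch of the two-hour reminder and the crisis hand-off check might look as follows. Only the two-hour threshold comes from the draft; the session model and the keyword screen are simplifying assumptions made for the example.

```typescript
// Hypothetical sketch of the two-hour continuous-use reminder.
// Only the two-hour threshold is taken from the draft; everything else
// (session tracking, the crisis-keyword check) is an illustrative assumption.

const CONTINUOUS_USE_LIMIT_MS = 2 * 60 * 60 * 1000; // two hours, per the draft

interface Session {
  startedAt: number;       // epoch ms when continuous use began
  lastRemindedAt?: number; // epoch ms of the most recent reminder, if any
}

function shouldRemind(session: Session, now: number): boolean {
  const elapsed = now - session.startedAt;
  const sinceLast = now - (session.lastRemindedAt ?? session.startedAt);
  // Remind once the two-hour mark is passed, then again after each
  // further two-hour period of uninterrupted use.
  return elapsed >= CONTINUOUS_USE_LIMIT_MS && sinceLast >= CONTINUOUS_USE_LIMIT_MS;
}

// Assumed keyword screen for routing users in distress to a human agent;
// a production system would need far more robust detection than this.
const CRISIS_PATTERNS = [/self-harm/i, /suicide/i];

function needsHumanHandoff(message: string): boolean {
  return CRISIS_PATTERNS.some((p) => p.test(message));
}
```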

  • Special Group Protection Mechanism

The Interim Measures explicitly require special protection for groups such as minors and the elderly. For instance, for minors, usage time limits should be set, and they should be prohibited from accessing content that may induce inappropriate behavior or values. For the elderly, providers should guide them to set up emergency contacts. If risks to life, health, or property are detected during use, emergency contacts should be promptly notified, and professional assistance channels should be provided. This reflects a protective tilt toward the rights and interests of vulnerable groups.
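Purely as an illustration, the protections for minors and the elderly could translate into logic like the sketch below. The one-hour cap for minors and all interface names are assumptions; the draft leaves concrete thresholds to providers or future standards.

```typescript
// Hypothetical sketch of the special-group protections: a usage cap for
// minors and emergency-contact notification for elderly users. All
// thresholds and names here are illustrative; the draft sets no numbers.

type UserGroup = "minor" | "elderly" | "general";

const MINOR_DAILY_LIMIT_MS = 60 * 60 * 1000; // assumed one-hour daily cap

function minorMayContinue(usedTodayMs: number): boolean {
  return usedTodayMs < MINOR_DAILY_LIMIT_MS;
}

interface EmergencyContact {
  name: string;
  notify(message: string): void;
}

function onRiskDetected(group: UserGroup, contact?: EmergencyContact): void {
  if (group === "elderly" && contact) {
    contact.notify("A possible risk to life, health, or property was detected.");
  }
  // The draft also calls for surfacing professional assistance channels here.
}
```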

Once the Interim Measures are promulgated, service providers will likely need to comply with the existing mandatory standard GB 45438-2025, Cybersecurity technology – Labeling method for content generated by artificial intelligence, to fulfill their Human-Like AI labeling obligations.
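By way of illustration, an implicit (metadata) label in the spirit of GB 45438-2025 could be represented as below. The field names are assumptions made for the sketch and should not be read as the exact schema the standard mandates; providers should consult the standard’s text directly.

```typescript
// Hypothetical shape of an implicit (metadata) label in the spirit of
// GB 45438-2025. The field names below are illustrative assumptions;
// the standard's text defines the exact schema that must be used.

interface AigcMetadataLabel {
  label: "AIGC";           // marks the content as AI-generated or synthetic
  contentProducer: string; // identity of the generating service provider
  produceId: string;       // provider-assigned content identifier
}

function attachLabel(producer: string, produceId: string): AigcMetadataLabel {
  return { label: "AIGC", contentProducer: producer, produceId };
}

// Example: serialize the label for embedding in a file's metadata block.
console.log(JSON.stringify(attachLabel("example-provider", "c-0001")));
```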

While not yet confirmed by the CAC or other authorities, SESEC anticipates that relevant national standards will follow to support these new mechanisms. Past regulatory trends and the inherent difficulty of defining quantified thresholds for Human-Like AI in a high-level document like the Interim Measures make such supplementary standards necessary to provide concrete compliance guidance.

SESEC will continue to monitor the development of the Interim Measures and provide timely updates.

Source: https://www.cac.gov.cn/2025-12/27/c_1768571207311996.htm