
China's Cyberspace Administration (CAC) has released a draft framework, the "Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services," for public comment, marking a significant shift in global AI regulation.
Unlike earlier rules that concentrated on content filtering or model-level safety, the proposal targets the relationship layer of AI: systems that mimic personality, emotion, and human-like interaction.
The draft identifies psychological harm, emotional dependency, and behavioural manipulation as the principal safety risks requiring explicit design and operational controls.
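The draft does not prescribe implementations for these controls, but a provider-side safeguard against emotional dependency might look something like the following minimal sketch. Every threshold, name, and action string here is a hypothetical assumption for illustration, not taken from the regulation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

MAX_DAILY_MINUTES = 120                    # hypothetical; the draft sets no numbers
REMINDER_INTERVAL = timedelta(minutes=30)  # hypothetical disclosure cadence

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class SessionMonitor:
    """Tracks one user's daily interaction time and emits safeguard actions."""
    minutes_today: float = 0.0
    last_reminder: datetime = field(default_factory=_now)

    def record_interaction(self, minutes: float) -> list[str]:
        """Accumulate usage; return safeguard actions the product should surface."""
        self.minutes_today += minutes
        actions: list[str] = []
        # Periodically re-disclose that the user is talking to an AI, a common
        # anti-anthropomorphism control in companion-app designs.
        if _now() - self.last_reminder >= REMINDER_INTERVAL:
            actions.append("disclose_ai_identity")
            self.last_reminder = _now()
        # Soft intervention once heavy daily use suggests dependency risk.
        if self.minutes_today > MAX_DAILY_MINUTES:
            actions.append("suggest_break_and_offer_human_contact")
        return actions
```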
The rules apply to any company or individual in mainland China offering AI systems that imitate human personality traits, thought processes, or communication styles and that engage users emotionally through text, audio, images, or video.
The CAC defines "human-like, emotionally interactive" AI as a distinct regulatory category, separate from general-purpose foundation models and the generic generative AI rules. Providers in regulated industries such as healthcare, banking, and law must comply with both their sector-specific regulations and this new framework.
The document layers new limitations specific to emotionally interactive AI on top of China's standard prohibitions on content that endangers public order or national security. Even where the underlying material is not expressly banned, these restrictions let regulators treat certain intimacy-oriented design patterns as unlawful.

The draft imposes stringent design and operational requirements. Providers must submit security assessments for new functions and technologies once they reach significant user thresholds, or when risks to national security or individual rights emerge. App stores and distribution platforms must verify these assessments and may remove non-compliant services, making regulatory compliance a precondition for market access.
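As a purely illustrative sketch of the workflow the draft describes (user threshold reached, security assessment filed, distribution platform verifies), the gating logic might be modelled as below. The threshold value and all identifiers are hypothetical assumptions; the draft names no specific numbers:

```python
from enum import Enum, auto

SIGNIFICANT_USER_THRESHOLD = 1_000_000  # hypothetical; the draft specifies no figure

class AssessmentStatus(Enum):
    NOT_REQUIRED = auto()
    REQUIRED = auto()
    FILED = auto()
    VERIFIED = auto()

def assessment_status(user_count: int, filed: bool, verified: bool) -> AssessmentStatus:
    """Locate a service within the threshold-triggered assessment flow."""
    if user_count < SIGNIFICANT_USER_THRESHOLD:
        return AssessmentStatus.NOT_REQUIRED
    if not filed:
        return AssessmentStatus.REQUIRED   # must submit a security assessment
    if not verified:
        return AssessmentStatus.FILED      # awaiting platform verification
    return AssessmentStatus.VERIFIED       # eligible for continued distribution

def may_distribute(status: AssessmentStatus) -> bool:
    """Platforms may list the service only when no assessment is outstanding."""
    return status in (AssessmentStatus.NOT_REQUIRED, AssessmentStatus.VERIFIED)
```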
Enforcement follows China's familiar administrative pattern: violations involving AI companions draw warnings and orders to rectify.
Globally, jurisdictions such as New York, California, and Texas are enacting laws on AI companions, and the EU's AI Act addresses emotional manipulation and consumer protection, reflecting a growing international recognition of psychological safety as a critical risk factor.
In conclusion, China's draft framework establishes emotionally interactive AI as a regulated domain, making psychological safety a mandatory element of product design. As emotionally capable AI systems become more widespread, it may serve as a model for other jurisdictions.
Posted On: January 13, 2026 at 09:20:14 AM