ChinaTech #19 | Measures for Identifying AI-Generated Content
... And China's approach of "regulation as infrastructure"
Starting this month, China has become the first country in the world to comprehensively mandate the labelling of AI-generated content across all platforms.
The "Measures for Identifying Artificial Intelligence-Generated Synthetic Content," issued by the Cyberspace Administration of China (CAC) in March of this year, have just taken effect. Essentially, the Measures require two types of labels.
Explicit identifiers are labels added during the generation of synthetic content, or to interactive scene interfaces, presented as text, sound, graphics, etc., in a way users can clearly perceive. Implicit identifiers are labels added to the data of synthetic content files through technical measures and not easily perceived by users.
The document goes into great detail about what constitutes these labels -
(1) Adding text prompts or common symbols or other signs at the beginning, end, or appropriate location in the middle of the text, or adding prominent prompts in the interactive scene interface or around the text;
(2) Adding voice prompts or audio rhythm prompts at the beginning, end, or appropriate position in the middle of the audio, or adding prominent prompts in the interactive scene interface;
(3) Adding prominent warning signs at appropriate locations on images;

(4) Adding prominent warning signs at the beginning of the video and at appropriate locations around it; prominent warning signs may also be added at appropriate locations in the middle or at the end of the video;
(5) When presenting a virtual scene, a prominent reminder logo shall be added at an appropriate location on the starting screen. A prominent reminder logo may be added at an appropriate location during the continuous service of the virtual scene;
(6) Other generated synthetic service scenarios shall add prominent prompt signs based on their own application characteristics.
It also offers examples of "explicit content labels" -
Text labels in word or superscript form at different text positions.
Image labels at bottom-right corner with wording "AI-generated synthetic."
Audio labels with voice or "AI" Morse rhythm at audio start.
Video labels at start screen corners, visible for ≥2 seconds.
For explicit labels in interaction interfaces -
Continuous text near content or audio play areas, e.g., "AI-generated."
Labels shown at bottom or background of the interface.
And for the implicit label format in file metadata -
Add an implicit label extension field in file metadata with keyword containing "AIGC."
Value follows JSON string structure with fields …
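The metadata requirement above can be sketched in code. This is only an illustration, not the official schema: the Measures mandate the "AIGC" keyword and a JSON string value, but the specific field names below (`Label`, `ContentProducer`, `PropagateID`) are hypothetical stand-ins for the fields the article elides.

```python
import json

def build_implicit_label(producer: str, content_id: str) -> dict:
    """Sketch of a file-metadata extension field per the Measures.

    The keyword "AIGC" is required by the regulation; the inner field
    names here are hypothetical illustrations, not the official spec.
    """
    label = {
        "Label": 1,                 # hypothetical coding: 1 = AI-generated
        "ContentProducer": producer,
        "ContentPropagator": "",    # to be filled by the distribution platform
        "PropagateID": content_id,
    }
    # The label travels as a JSON string inside the file's metadata.
    return {"AIGC": json.dumps(label, ensure_ascii=False)}

metadata_field = build_implicit_label("example-model-provider", "abc123")
print(metadata_field["AIGC"])
```

Because the label is a plain JSON string in a metadata extension field, any downstream platform can parse it without needing the original generation tool, which is what makes the chain-of-custody checks below possible.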
And the responsibility and accountability for labelling AI-generated content is placed on all three stakeholders across the value chain -
The AI service providers, which seems to mean the companies providing the generative AI models
The internet application distribution platforms
Content creators and users
As Poe Zhao says in his excellent article -
The policy’s core innovation is the establishment of a “full-chain responsibility” framework, an interlocking system of obligations that binds every actor in the digital content supply chain. This is not a new concept in China but an extension of a long-standing governance principle known as 压实平台责任 (yāshí píngtái zérèn), or “Compacting Platform Responsibility,” which has been the bedrock of its internet regulation for over a decade.
While it is unclear how these rules will be enforced, the systemic checks on stakeholders throughout the content value chain should help disincentivise non-compliance. Content can also be demarcated as confirmed, possible, or suspected AI-generated content, depending on the following -
(1) Verify whether the file metadata contains implicit identifiers. If the file metadata clearly indicates that it is generated and synthesized content, use appropriate means to add prominent warning labels around the published content to clearly remind the public that the content is generated and synthesized content;
(2) If no implicit identifiers are verified in the file metadata, but the user declares that the content is generated and synthesized, appropriate means shall be adopted to add prominent warning labels around the published content to remind the public that the content may be generated and synthesized;
(3) Where no implicit identifiers are verified in the file metadata, and the user has not declared that the content is generated or synthesized, but the service provider providing online information content dissemination services detects explicit identifiers or other traces of generation and synthesis, it shall be identified as suspected generation and synthesis content and appropriate measures shall be taken to add prominent warning labels around the published content to remind the public that the content is suspected of being generated or synthesized;
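The three-tier check above is effectively a small decision procedure that a dissemination platform would run on each piece of content. A minimal sketch, assuming the three inputs are available as booleans (the tier names follow the article's paraphrase, not official wording):

```python
def classify_content(has_implicit_label: bool,
                     user_declared: bool,
                     traces_detected: bool) -> str:
    """Map the Measures' three checks to a labelling tier (sketch)."""
    if has_implicit_label:
        return "confirmed"   # (1) metadata already marks it AI-generated
    if user_declared:
        return "possible"    # (2) user self-declared, no metadata proof
    if traces_detected:
        return "suspected"   # (3) platform found explicit labels or traces
    return "unlabelled"      # none of the three conditions met

print(classify_content(True, False, False))   # -> confirmed
print(classify_content(False, True, False))   # -> possible
print(classify_content(False, False, True))   # -> suspected
```

Note the ordering: metadata evidence trumps a user declaration, which in turn trumps heuristic detection, so the public-facing warning label reflects the strongest available signal.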
The document also warns against “scrubbing” of metadata or tampering with any existing labels for AI-generated content.
"No organization or individual may maliciously delete, tamper with, forge, or conceal the generated synthetic content identifiers stipulated in these Measures, provide tools or services for others to carry out the above-mentioned malicious acts, or damage the legitimate rights and interests of others through improper identification means."
And all of this applies to firms operating in China, regardless of where they are headquartered. Given that this is the first such regulation to take effect globally, it remains to be seen what implications it has for global standards and for corporations' cross-border operations.
Much of the debate around this document has centred on the impact of labelling content with commercial applications - marketing, digital art, etc. - as AI-generated. Will engagement with such content take a hit? And what consequences does this have for the adoption of AI tools in creative work?
But more importantly, the regulation is interesting for the approach it adopts to prevent population-scale mis/disinformation. Notably, it focuses on the "how" and the nitty-gritty of content traceability, rather than an abstract framework.
Zhao writes -
it reveals a coherent philosophy: treat regulation as infrastructure, not obstacle. While Silicon Valley pursues market-driven innovation and Brussels emphasizes rights-based compliance, Beijing is constructing a third path that integrates oversight into the technology stack itself.
And that is a powerful reason to watch and study China’s approach to tech regulation.