ChinaTech #3: Regulating AI, A Balancing Act
... with Chinese characteristics, of course
This new segment by Shobhankita Reddy is your go-to newsletter for updates and perspectives on China’s tech ecosystem. This edition seeks to understand China’s approach to domestic AI regulation.
The EU Artificial Intelligence (AI) Act prides itself on being the "first comprehensive regulation on AI by a major regulator anywhere." It attempts to classify all AI applications based on their potential risk, outright banning some applications and placing the onus of responsibility on the developers of these systems. General Purpose AI, loosely defined as AI models and systems trained on large datasets whose outputs tend towards general intelligence, faces further requirements on technical documentation, copyright compliance, model testing, and cybersecurity protection.
In contrast, China has had a piecemeal approach to AI regulation.
Among the first nationwide measures to be adopted was the 2022 regulation focused solely on recommendation algorithms, followed by one on 'Deep Synthesis' (deepfakes). The most significant came a year later via the 'Interim Measures for the Management of Generative Artificial Intelligence Services' notification.
This suggests a cautious and iterative approach to AI governance that may not stem from a central dictum. The above-mentioned regulations were jointly issued by multiple bureaucracies, such as the Cyberspace Administration of China (CAC), the Ministry of Science and Technology, and the Ministry of Industry and Information Technology, indicating a more consultative process that likely drew on inputs from research institutions and think tanks, the most noteworthy of which is the China Academy of Information and Communications Technology.
This is emblematic of the Chinese administration's view of general-purpose technologies. On the one hand, Beijing acknowledges their role in leading China to technological supremacy; on the other, it understands the threat they pose to state control and the party's agenda. The focus is to promote innovation in these technologies while channelling their usefulness towards national priorities, encapsulated in Xi Jinping's slogan of the 'four facings/orientations'. This calls for Chinese scientists to "persist in facing the world's scientific and technological frontiers, the main battlefield of the economy, the major needs of the country, and the people's life and health".
The Chinese State's attempts to curtail the ill consequences of artificial intelligence may be compared to its crackdown on the internet a decade ago, an effort Bill Clinton once dismissed as "trying to nail jello to the wall". Championing the idea of 'internet sovereignty', the notion that a nation's internet, like its borders, must be monitored and defended, China today runs the world's most expansive and coercive digital surveillance system on its citizens while still fostering an indigenous and innovative tech ecosystem curated to its censorship demands. This is reinforced by the Great Firewall, which blocks Western tech platforms from operating in China.
This did not come easily; it demanded massive resources and creative ingenuity. What started as requirements for Chinese social media companies to self-regulate soon escalated into a full-fledged national imperative: shrinking the time between a piece of online content being flagged and authorities showing up at the location of its IP address, arresting large numbers of citizens, publicly shaming celebrity influencers, and threatening companies with shutdowns if they did not comply. Deng Xiaoping's remark, "If you open a window for fresh air, you have to expect some flies to blow in", offers an illuminating perspective on the Chinese take on internet censorship.
Could the Chinese State do the same with AI?
The 2023 regulation on Generative AI (GenAI) tries a balancing act. At the outset, it states that the measures do not apply to any institutions that "develop and apply generative artificial intelligence technologies but do not provide generative artificial intelligence services to the domestic public".
Article 4 of the notification states that GenAI systems must:
"Adhere to the core socialist values and do not generate content prohibited by laws and administrative regulations such as inciting subversion of state power, overthrowing the socialist system, endangering national security and interests, damaging the national image, inciting the secession of the country, undermining national unity and social stability, promoting terrorism, extremism, ethnic hatred, ethnic discrimination, violence, pornography, and false and harmful information".
This means that GenAI systems that are not public-facing have been allowed free rein, without political constraints, to experiment in their research and commercialize their offerings. However, any chat interface or image or video generation platform that is consumed directly by the public will be held accountable for content the State might consider damaging.
This explains, for example, why DeepSeek's chat interface refuses to answer politically sensitive questions while the same R1 model deployed by Perplexity is less encumbered. In line with the notification, the backend AI model does not face the same regulatory guardrails as the public-facing chat interface.
The AI regulations so far, including those pertaining to recommendation algorithms and deepfakes, have been sparse and reactive in nature. China has been toying with the idea of a national AI law for the past two years. It remains to be seen whether, as with internet censorship, AI regulation will be tightened further in the country; that will depend on the State's assessment of how well the existing regulations have worked.