ChinaTech #5: Regulating AI Recommendation Algorithms
... China's fascinating experiments in iterative and incremental AI governance
This new segment by Shobhankita Reddy is your go-to newsletter for updates and perspectives on China’s tech ecosystem. This edition is about China’s attempts to regulate AI recommendation algorithms.
In a previous edition of this newsletter about AI regulations in China, I mentioned China's piecemeal approach that started with regulations on recommendation systems in 2022.
This notification was jointly issued by the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, the Ministry of Public Security and the State Administration for Market Regulation.
A few provisions of the notification deserve mention.
In the notification, "Algorithmic recommendation technology" is defined as "the use of algorithm technologies such as generation and synthesis, individualized pushing, sequence refinement, search filtering, schedule decision-making, and so forth to provide users with information."
The state internet information department, the relevant State Council departments associated with the four issuing bodies, and their local counterparts in each administrative region are called upon to ensure the implementation, management, and oversight of recommendation technology in accordance with the notification.
Article 6 of the notification contains the usual restrictions on using the technology to "endanger national security or the societal public interest, disrupt economic and social order, or harm the lawful rights and interests of others", but also an interesting turn of phrase about what algorithm providers must endeavour to do: "actively transmit positive energy, and promote the uplifting use of algorithms".
Calling on service providers to "periodically check, assess, and verify algorithm mechanisms, models, data, and outcomes", the notification requires "conspicuous labelling" of synthesized content.
Article 8 turns paternalistic, discouraging algorithms from "inducing users to become addicted or spend too much".
Article 10 of the notification is specific: it mandates that "keywords" and "user tags" must not contain unlawful or negative information.
Article 11 dictates that "in key steps such as the homepage, home screen, hot searches, selections, top content lists, and pop-up windows", the information presented or generated by algorithms should comply with “mainstream values”.
A long section on the "Protection of User's Rights and Interests" contains several articles on end users' rights: to turn off recommendation systems, to be informed of the principles, purposes, and operating mechanisms of the services provided to them, and to access consumer grievance portals. It also requires algorithms that mediate people's access to jobs via marketplaces to protect workers' rights to receive salary, rest, and vacation, and to ensure a fair allocation of orders, rewards, and penalties.
The section on "Oversight and Management" is substantive. Article 23 mandates the establishment of -
"a hierarchical and categorical management system to conduct management by grade and category of algorithmic recommendation service providers based on the algorithmic recommendation services' public sentiment attributes and capacity to mobilize the public, the content types, the scale of users, the importance of the data handled by the algorithmic recommendation technology, the degree of interference in user conduct, and so forth."
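Article 23 reads like a multi-factor risk classification. A minimal sketch of how such a grading scheme could work, purely illustrative: the provisions do not specify scoring scales, weights, thresholds, or tier names, so everything below is an assumption.

```python
from dataclasses import dataclass

# Illustrative risk factors drawn from Article 23's wording; the regulation
# does not define how they are measured or combined.
@dataclass
class ProviderProfile:
    public_sentiment_attributes: int  # 0-3: capacity to shape public sentiment
    mobilization_capacity: int        # 0-3: capacity to mobilize the public
    user_scale: int                   # 0-3: relative size of the user base
    data_importance: int              # 0-3: importance of the data handled
    behavioural_interference: int     # 0-3: degree of interference in user conduct

def management_tier(p: ProviderProfile) -> str:
    """Hypothetical mapping from risk factors to a supervision tier.

    Thresholds and tier names are invented for illustration only.
    """
    score = (p.public_sentiment_attributes + p.mobilization_capacity
             + p.user_scale + p.data_importance + p.behavioural_interference)
    if score >= 10:
        return "key supervision"
    if score >= 5:
        return "routine supervision"
    return "light-touch supervision"

# A large platform with strong public-opinion properties would land in
# the top tier under this hypothetical scheme.
print(management_tier(ProviderProfile(3, 3, 3, 2, 2)))  # key supervision
```

The point of the sketch is that "management by grade and category" presupposes exactly this kind of scoring exercise, which the text leaves entirely to the regulators to operationalize.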
All algorithmic recommendation service providers that "have public opinion properties or capacity for social mobilization" are required to report to an "internet information services algorithm filing system" with an "algorithm self-assessment report" and other such details.
This filing system and registry were, indeed, implemented. The website of the registry is here.
Matt Sheehan and Sharon Du studied the registry and the many fields it requires companies to disclose: the open-source or self-built datasets used to build an algorithm, whether the algorithm uses "biometric features" or "identity information" (personally identifiable information), and an upload field for a PDF titled "Algorithm Security Self-Assessment".
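The disclosure fields described above amount to a structured filing record. A minimal sketch of what one such record might look like; the field names and the example values are invented for illustration, not taken from the registry itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmFiling:
    """Hypothetical record mirroring the registry fields described above."""
    provider_name: str
    algorithm_name: str
    datasets: List[str] = field(default_factory=list)  # open-source or self-built
    uses_biometric_features: bool = False
    uses_identity_information: bool = False  # personally identifiable information
    self_assessment_pdf: str = ""  # the "Algorithm Security Self-Assessment" upload

# An entirely made-up example filing.
filing = AlgorithmFiling(
    provider_name="ExampleCo",
    algorithm_name="example-recommendation-v1",
    datasets=["self-built user interaction logs", "open-source text corpus"],
    uses_biometric_features=False,
    uses_identity_information=True,
    self_assessment_pdf="security_self_assessment.pdf",
)
print(filing.datasets)
```

Notice what the schema does and does not capture: dataset names and a self-authored PDF, but no model weights, code, or raw user data, which is consistent with the takeaway quoted below.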
They have an important takeaway.
By exploring the user manual, we see that what the registry requires from Chinese companies is both more and less than previously understood. More, because the manual reveals significant new disclosure requirements that do not show up in the public versions of the filings. The requirement to enumerate data sets is self-explanatory, while the algorithm security self-assessments could be anything from cursory to comprehensive. Less, because some had taken the registry filing requirements to mean that the Chinese government could now gain direct access to the algorithms or the underlying code. This does not appear to be the case, and further reporting supports that conclusion.
These experiments in AI regulation being carried out by Beijing are fascinating. The attempt to impose a causal, linear understanding on complex deep learning models, usually considered "black boxes" with the potential for emergent behaviour, holds important lessons for the global AI governance landscape.
For now, we know that companies are not being required to submit code or user data. However, the Wall Street Journal reporting behind that conclusion adds the following -
Shortly after the Chinese regulation came into force, government-relations managers and algorithm engineers at ByteDance met with Cyberspace Administration officials to explain the documents they submitted, people familiar with the matter said. During one of those meetings, officials at the agency displayed little understanding of the technical details and company representatives had to rely on a mix of metaphors and simplified language to explain how the recommendation algorithm worked, one of the people said.
Companies haven't been required to submit code or user data, the people said.
Chinese government guidelines issued last year called for multiple agencies to expand staff to supervise algorithms.
"They're trying to build the tools, hire the people and get the technical expertise to tackle this kind of stuff," said Kendra Schaefer, head of tech policy research at Beijing-based strategic advisory consulting firm Trivium China. "So enforcement of this is going to ramp up slowly over the next five to 10 years."
This was 2022, and for the last two years China has been toying with the idea of a national AI law. In light of China signing the Paris AI Action Summit statement, it has been pointed out that -
China, meanwhile, is playing both sides: pushing for control at home while promoting open-source AI abroad.
It would be interesting to track how things unfold here.