#72 AI Governance: Navigating Complexity
AI Governance - Navigating Complexity, Deciphering Tech Governance - Balancing Impact and Regulation
Today, Bharat Reddy reviews a report on regulating artificial intelligence as a complex adaptive system.
Rijesh Panicker sketches out a framework to think about regulatory regimes for technology.
Also
We are hiring! If you are passionate about working on emerging areas of contention at the intersection of technology and international relations, check out the Staff Research Analyst position with Takshashila’s High-Tech Geopolitics programme here. For internship applications, reach out to satya@takshashila.org.in.
Technopolitik: AI Governance - Navigating Complexity
— Bharat Reddy
The Economic Advisory Council to the Prime Minister (EAC-PM) recently published a report titled A Complex Adaptive System Framework to Regulate Artificial Intelligence. The authors argue for treating artificial intelligence as a complex adaptive system (CAS) that “does not follow a predictable and fundamentally deterministic path”. They then survey regulatory approaches to AI governance across the world and suggest an approach for India to regulate a CAS such as AI.
The report has many shortcomings, the foundational one being its comparison of AI to a complex adaptive system. The most common definition of a complex adaptive system is based on the work of John Holland: a dynamic network made up of multiple agents that operate simultaneously and continually respond to the actions of others within the network. This interaction among agents affects individual behaviours and shapes the entire network's behaviour and structure. Examples of CAS include the climate, cities, firms, and markets.
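Holland's definition can be made concrete with a toy simulation (a hypothetical sketch for illustration, not something from the report): agents that repeatedly adjust their behaviour in response to their neighbours produce network-level patterns that no individual agent controls.

```python
import random

# Toy complex adaptive system: agents on a ring repeatedly copy the
# majority behaviour of their immediate neighbours. No agent controls
# the network-level outcome, yet a global pattern emerges.

def step(states):
    """Each agent adopts the majority state among itself and its two neighbours."""
    n = len(states)
    new = []
    for i in range(n):
        trio = [states[i - 1], states[i], states[(i + 1) % n]]
        new.append(1 if sum(trio) >= 2 else 0)
    return new

random.seed(0)
states = [random.randint(0, 1) for _ in range(20)]
for _ in range(10):
    states = step(states)

# After a few steps the ring settles into stable clusters of behaviour,
# a simple emergent structure produced by purely local interactions.
print(states)
```

The point of the toy model is that the interesting behaviour lives at the level of the network, not of any agent, which is exactly what makes a CAS hard to regulate by targeting individual components.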
The report claims that the AI systems we see today are black boxes - they are not explainable - and that can pose significant risks.
“The real problem is that AI can iteratively improve its capabilities beyond normal human comprehension if left unregulated. The state of AI poses significant risks, and concerted efforts are needed to make these systems more transparent and their decisions more interpretable to those who rely on them.”
This is true. These systems are trained on vast amounts of data and made to infer connections and patterns in that data. Seemingly intelligent conversational chatbots like ChatGPT build statistical representations of language from huge text corpora, which let them generate a probable response to a question, one token at a time. It is hard to make such systems explainable.
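The generation step described above can be sketched as sampling from a probability distribution over the next token. This is a minimal illustration with a hard-coded, made-up distribution; a real model computes these probabilities from billions of learned parameters.

```python
import random

# Toy sketch of next-token prediction. A real language model derives
# these probabilities from a learned statistical representation of text;
# the distribution here is hard-coded purely for illustration.
next_token_probs = {
    "mat": 0.6,   # completing "The cat sat on the ___"
    "sofa": 0.3,
    "moon": 0.1,
}

def sample_next_token(probs, rng):
    """Sample one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
prompt = "The cat sat on the"
print(prompt, sample_next_token(next_token_probs, rng))
```

Nothing in this loop "understands" the sentence; the fluency comes entirely from how well the probabilities mirror patterns in the training data, which is why explaining any single output is so difficult.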
However, the claims that immediately follow in the report are baseless and not indicative of the capabilities of AI systems as we know them today. A recursively self-improving AI beyond human comprehension that manipulates perceptions, simulates realities, and induces mass schizophrenia seems like something out of a science fiction movie.
“As AI rapidly approaches and potentially surpasses human-level capabilities, an array of profound dangers arises. One concern is the loss of control over recursively self-improving AI that eclipses human comprehension, also called runaway AI”
“However, more subtle dangers also arise from advanced AI's ability to manipulate perceptions and simulate realities. Through a combination of surveillance, persuasive messaging and synthetic media generation, malevolent AI could increasingly control information ecosystems and even fabricate customised deceitful realities to coerce human behaviour, inducing mass schizophrenia”
This narrative is indicative of AI hype and doomerism. The real risks are more immediate: issues of fairness, misuse of applications by human actors, concentration of market power, and a lack of accountability.
The report provides a detailed landscape of AI governance frameworks used worldwide, with harsh criticism of each approach. It argues that a ‘hands-off’, self-regulation approach (USA), a pro-innovation laissez-faire approach without teeth (UK), and a risk-based approach (EU) might not work for a complex adaptive system. Finally, it warns that a direct bureaucratic control approach (China) can have disastrous cascading effects.
The report suggests that India's approach to regulating a CAS such as AI should rest on the following five principles:
Establishing Guardrails and Partitions: Implement clear boundary conditions to limit undesirable AI behaviours.
Mandating Manual ‘Overrides’ and ‘Authorization Chokepoints’: Critical infrastructure should include human control mechanisms at key stages to intervene when necessary. Manual overrides empower humans to intervene when AI systems behave erratically or create pathways to cross-pollinate partitions.
Ensuring Transparency and Explainability: Open licensing of core algorithms for external audits, AI factsheets, and continuous monitoring of AI systems.
Defining Clear Lines of AI Accountability: Mandate standardised incident reporting and liability protocols to build skin in the game.
Creating a Specialist Regulator: A dedicated, agile, and expert regulatory body with a broad mandate and the ability to respond swiftly and proactively.
Again, many of these recommendations presuppose a runaway AI scenario in which the AI becomes self-aware, continuously improves itself, and operates in ways beyond human understanding or control. True general-purpose AI that can flexibly perform a wide range of intellectual tasks at human levels does not exist yet. Currently, AI is narrow and specialised for particular applications. While AI has achieved superhuman performance in certain specialised tasks, replicating the broad general intelligence of humans, involving flexible reasoning, creativity, and learning, remains a monumental challenge.
India is set to host the Quad Leaders' Summit in 2024. Subscribe to Takshashila's Quad Bulletin, a fortnightly newsletter that tracks the Quad's activities through the Indo-Pacific.
Your weekly dose of All Things China, with an upcoming particular focus on Chinese discourses on defence, foreign policy, tech, and India, awaits you in the Eye on China newsletter!
The Takshashila Geospatial Bulletin is a monthly dispatch of Geospatial insights for India’s strategic affairs. Subscribe now!
Technopolitik: Deciphering Tech Governance - Balancing Impact and Regulation
— Rijesh Panicker
How should we regulate modern and powerful technology in a hyper-connected world?
Since local regulations made elsewhere can affect us negatively, it is worth teasing out how different regulatory regimes might impact us, depending on the technology involved.
Let us look at two recent ideas. First, a podcast between Bill Gates and Sam Altman of OpenAI (Do listen to the whole interview for a wide-ranging discussion on AI and its impact). At around minute 8, in response to Altman’s suggestion that AI might need a global regulatory regime, Gates responded that it would almost be like a world government. Utopian (or dystopian) musings apart, Altman’s suggestion, in essence, is to create a global regulator akin to the International Atomic Energy Agency (IAEA), allowing for a model of international inspections and validation for models above a certain threshold. Concerns around the creation of oligopolies apart, this may well be the way to go should the world agree that AI technology could prove to be too powerful.
The second is a news report about amendments to the UK’s Investigatory Powers Act (IPA). Under the proposed amendment, if the UK does not approve new security features introduced by tech firms, the firms will be unable to offer those features to customers anywhere in the world. Here we have a local regulation that could have large negative effects for everyone, everywhere.
So, how could we think about regulatory regimes and their possible impacts? A simple framework is a 2x2 matrix that plots where a regulation is made (locally vs globally) against where its impact is felt (locally vs globally):
Global regulation is not a new phenomenon, of course. From truly global regulators like the IAEA that monitor the use of nuclear energy to agreements on the non-use of bio-weapons, countries have always found ways to collaborate and work with each other in specific areas. It is conceivable that AI regulations could fall in this area.
There is, however, another set of regulatory regimes in areas like money laundering and anti-terrorism. These are what we might consider “Globally Local” - A global set of internationally recognised standards and best practices, combined with international cooperation and data sharing, but ultimately applied using local laws that adapt to the context of a country.
Most tech regulation in the past would have fallen in the bottom left: local regimes with localised impacts. However, with the rise of multinational corporations that serve global markets and of technologies like AI, we see something different. Today, regulations created by states can have spill-over effects for others. The best example is the EU’s General Data Protection Regulation (GDPR), which forced websites and apps everywhere to change how they handle consumer consent.
This brings us to the top left of the matrix, “Locally Global”, where laws and regulations in one country can have unintended consequences for others. The aforementioned amendment to the IPA is a good example: the British government would effectively control whether new security features can be released anywhere else in the world, with no public transparency about its decisions.
Given the competitive nature of today's geopolitics, the idea of a global regulator that monitors and controls critical technologies seems increasingly fragile. The chances are that we will see more “Locally Global” regulatory regimes, where the decisions of a few powerful nations impact the rest of us. From an Indian perspective, pushing towards a “Globally Local” regime, with internationally agreed-upon standards and norms but local laws built for context, would allow us to maintain some level of influence and control over how these technologies are deployed and used.
What We're Reading (or Listening to)
[Podcast: All Things Policy] Combating Deepfakes ft. Rohan Pai and Bharat Reddy
[Podcast: All Things Policy] China’s Influence on Global Media ft. Dr Sriparna and Rakshith
[Opinion] India and China’s Volatile New Status Quo, by Manoj Kewalramani — a really insightful read.