#66 AI or Nay: The EU AI Act's Legislative Conundrum
CyberPolitik: The EU AI Act - To Be or Not To Be | Matsyanyaaya: Let’s Talk About the “I” in the IPMDA
Today, Rijesh Panicker charts the tumultuous discourse around the EU’s AI Act, while Bharat Sharma shines a light on New Delhi’s perception of the Quad’s maritime security initiatives.
Course Advertisement: Admissions for the Jan 2023 cohort of Takshashila’s Graduate Certificate in Public Policy (Technology and Policy) programme are now open! Visit this link to apply.
CyberPolitik: The EU AI Act - To Be or Not To Be
— Rijesh Panicker
Following the White House's Executive Order on AI and the Bletchley Declaration on AI in November, the EU AI Act was expected to be the first comprehensive legislation regulating AI across the European common market. Recent developments, with Germany, France and Italy strongly opposing any regulation of foundational models, raise concerns that discussions between the European Parliament (EP), the European Commission (EC) and member states may fail entirely or produce a diluted regulatory regime for large AI models.
The story so far ...
The AI Act, originally proposed in 2021, sought to regulate the development and deployment of AI products and services in the EU common market. The act created a risk-based framework to classify AI models based on their end use.
It sought to create a set of rules that would identify risks created by AI applications, propose a list of high-risk applications, set requirements and define specific obligations for such applications, mandate conformity assessments before these systems could be deployed, and establish enforcement rules post-deployment.
AI models were classified into four levels of risk:
minimal risk - AI-enabled video games and spam filters, which are unregulated
limited risk - systems such as chatbots, which have transparency obligations such as letting users know they are interacting with an AI
high risk - AI systems used in critical infrastructure, education, healthcare, law enforcement, etc., which would be subject to risk assessment and mitigation and would need to be trained on high-quality data, provide traceability of their results, and operate with human oversight
unacceptable risk - systems considered a threat to people's safety, livelihoods and rights, such as real-time biometric identification and social credit scoring systems. These were to be banned entirely within the EU.
Unfortunately, while the original regulations were made with narrow AI (built for a specific goal) in mind, the rise of generative AI models like ChatGPT in 2022 and concerns around their general-purpose capabilities meant that the EU act would have been dead on arrival.
In June and October 2023, changes were proposed, including a tiered regulatory structure that would subject the most capable models to additional obligations. These tier-specific requirements were in addition to horizontal obligations on all such models around documentation, benchmarking and pre-launch assessments. For a more detailed discussion of the thresholds, see here.
This framework, which imposed regulatory burdens only on the largest actors, such as those behind ChatGPT and Bard, has now seen pushback from Germany, France and Italy, who insist that the regulations will dampen innovation and disadvantage Europe. Instead, they have proposed that regulations apply to specific applications that use the foundational models, while the models themselves are allowed to self-regulate.
Those opposed to these changes (see here) argue that foundational models are the best point at which to control and mitigate risk, that the creators of these models are best placed to do so, and that regulating only the application of AI will place too high a cost on smaller firms.
As the EC, the EP and member states head into a final set of discussions today (6th December 2023), it is unclear whether we will see a compromised and weakened regulation or no regulation at all, since today is the last day negotiations are possible before this European Parliament is disbanded next June.
Some What Ifs to Consider …
As we wait for the outcome, it is interesting to consider what might happen if the regulations come into force:
Threshold-based classifications for large foundational AI models create incentives for players like Google and OpenAI to offer modified, slightly less powerful versions of their models in the EU (think of Nvidia creating China-specific AI chips that fell outside the purview of the US ban), putting European firms that build applications at a disadvantage. It also makes building large AI models in Europe more costly unless they are subsidized in some form.
In the long run, threshold limits could drive innovation in design and algorithms that allow smaller models to perform as well as larger AI models. This could lead to smaller models posing the same level of risk as today's large models.
Large AI model creators may also be unable to identify all the risks emanating from the use of their models. A risk that is obvious in hindsight, once a particular prompt elicits a bad response from a generative AI model, may be unknowable before that event.
Similarly, let us consider the case where foundational models are allowed to self-regulate while end use is regulated based on risk tiers:
On the surface, this would seem to impose too high a compliance cost on smaller firms while imposing no cost on the creators of these models. However, it also creates a parallel market opportunity, with huge demand for new AI models that solve these problems. Any new model that uses data better (handling copyright issues, bias, etc.) and helps reduce the risk to its end users will be preferable to existing models. A set of well-crafted usage-based regulations may well stimulate competition in the large model arena.
This is not to argue that a world without AI regulation is better. Some risks from large foundational models are pervasive across use cases (e.g., bias in outcomes, improper use of copyrighted data, hallucinations by the model) and are better solved at the model level by those with the capital and know-how to do so. Other risks may only become apparent as applications are developed, and are better solved with sectoral regulations and norms.
Countries will need to continually create and adapt their regulatory frameworks to keep pace with advances in AI, like building an airplane while flying it (maybe while an AI is flying it?).
India is set to host the Quad Leaders' Summit in 2024. Subscribe to Takshashila's Quad Bulletin, a fortnightly newsletter that tracks the Quad's activities through the Indo-Pacific.
Matsyanyaaya: Let’s Talk About the “I” in the IPMDA
— Bharat Sharma
The Quad, composed of India, the United States, Japan, and Australia, began as a grouping focused on coordinating a humanitarian response to the 2004 Indian Ocean tsunami. Today, it holds a mammoth public goods agenda spanning numerous areas, from education to maritime security.
The Quad countries’ leaders and ministers have repeatedly emphasised its non-security credentials. However, the increasing defence and security engagement between them, although not under the Quad “banner”, raises questions about whether the Quad will move to formalise security and defence engagements within itself. These questions also surround the Quad’s maritime security initiatives and aspirations, especially because the quartet already conducts an annual military exercise, ‘Malabar’, focused on wartime interoperability. The exercise indicates, to a certain degree, synergy on maritime issues and the solutions the four intend to pursue.
The Indo-Pacific Maritime Domain Awareness Initiative (IPMDA) is the Quad’s flagship initiative to strengthen maritime security in the Indo-Pacific. Since its 2022 inception, it has focused on providing a “near-real-time, integrated, and cost-effective maritime domain awareness picture” in the Pacific Islands, Southeast Asia, and the Indian Ocean Region (IOR). It aims to do this by providing commercial satellite data to the Quad’s Indo-Pacific partners. This would help build maritime domain awareness (MDA) capabilities that would aid in tackling non-traditional security challenges like illegal fishing, maritime terrorism, and humanitarian and disaster response, among others.
The IPMDA is of particular significance to India’s interests in the IOR and its neighbourhood. It would complement India’s MDA initiatives in the IOR, particularly by aiding the creation of, and supporting, a network of fusion centres. Two things remain to be seen. The first is which technologies are given a fillip by the Quad, and in what capacity, particularly the Automatic Identification System (AIS) and the Vessel Monitoring System (VMS). The second is how space-based solutions to the problems of monitoring our seas will be utilised, given their novelty, and how the Quad’s space-based initiatives will be equipped for maritime cooperation.
The other, more important question is how New Delhi sees the IPMDA: whether the Quad’s public goods agenda in the IOR will be of concern for India, particularly since it might prove to be a fertile ground for Sino-US competition against the backdrop of a growing Chinese presence in the region. India has historically been sensitive about other players in the region, particularly the US. So, the question is what the Quad’s presence means for India’s aspirations in the IOR, given that three of the four Quad countries are the US and members of the US alliance system.
What We're Reading (or Listening to)
[Report] Singing from the CCP’s songsheet, by Fergus Ryan, Matt Knight & Daria Impiombato
[Podcast: The Seen and the Unseen] Episode 358: The Semiconductor Wars, ft. Pranay Kotasthane, Abhiram Manchi, and Amit Varma
[Podcast: Brookings Commentary] India’s technology competition with China, ft. Pranay Kotasthane, Trisha Ray, and Tanvi Madan