#76 The Unwelcome Implications of India's Latest AI Advisory
Is India’s Latest AI Advisory A Good Idea? | A Policy Window For Premature AI Regulation?
Today, Rijesh Panicker offers an incisive examination of what India’s latest AI advisory to a few intermediaries implies for innovation and open-source model adoption, while Zoe Philip’s guest post ruminates on why the government may want to regulate this nascent space right now.
Also,
We are hiring! If you are passionate about working on emerging areas of contention at the intersection of technology and international relations, check out the Staff Research Analyst position with Takshashila’s High-Tech Geopolitics programme here. For internship applications, reach out to satya@takshashila.org.in.
Cyberpolitik: Is India’s Latest AI Advisory A Good Idea?
— Rijesh Panicker
MeitY’s latest advisory to intermediaries and platforms regarding the use and deployment of AI models has raised significant concerns among industry players. The advisory comes in the wake of Google’s new Gemini model producing what many saw as factually incorrect characterisations of PM Modi in response to user queries.
The ostensibly non-binding advisory aimed to reiterate the obligations of intermediaries and platforms under the IT Rules, 2021. However, both the advisory and the clarifications posted by the Minister of State for Electronics and Information Technology raise more questions than they answer.
Firstly, the advisory adds several concerning compliance requirements. Key among these is that “under-tested” or unreliable AI models cannot be deployed on the “Indian internet” without the explicit permission of the government, and only after their outputs are appropriately labelled.
Secondly, platforms and intermediaries are required to inform users that unlawful use of their data may lead to loss of access, deletion of their accounts, or criminal action under the law. Platforms also need to ensure that information that could be used as fake news or for disinformation is clearly labelled as misinformation and carries identifiable metadata.
This advisory raises several questions about India’s sudden abandonment of its light-touch approach to AI regulation and the impact on a nascent market. Firstly, the need for explicit government permission is a throwback to the “Licence Raj,” with the state deciding who can play. History tells us that this is not conducive to the creation of a competitive marketplace.
Secondly, the advisory sorely lacks definitional clarity. What makes for a well-tested and reliable AI model is still an open question. Some experts argue that the false or hallucinatory outputs of generative AI models like LLMs are a part of their architecture and cannot be eliminated, while others argue that appropriate testing and guardrails can solve the problem.
An unintended but entirely predictable consequence of this advisory is its likely impact on the use of open-source AI models. Since open-source models, by definition, have no single owner, who is responsible for their outputs?
A lack of clarity here means that a startup looking to use AI today will likely opt for the proprietary models of large players like Google and OpenAI simply for the cover they provide from potential government action. This advisory, as it stands today, will dampen innovation in the open-source AI space in India, driving our startup ecosystem further into the hands of large firms.
Although the ministry clarified that the advisory applies not to startups but only to large platforms, this remains unconvincing given the advisory’s broad wording. It appears to be a drastic departure from the government’s previous pro-innovation stance.
While regulating AI may be necessary to contain potential risks and maintain competitive markets, India would be better served by a clear, well-thought-out, rules-based approach to the various aspects of the AI ecosystem. One such aspect is whether we should regulate AI models themselves or their use cases.
Rather than reactive and vaguely worded advisories, MeitY should work with academia, industry (large firms and startups) and the open-source community to identify how we might create appropriate standards, benchmarks and frameworks to mitigate the risks from AI models.
India is set to host the Quad Leaders' Summit in 2024. Subscribe to Takshashila's Quad Bulletin, a fortnightly newsletter that tracks the Quad's activities through the Indo-Pacific.
Your weekly dose of All Things China, with an upcoming particular focus on Chinese discourses on defence, foreign policy, tech, and India, awaits you in the Eye on China newsletter!
The Takshashila Geospatial Bulletin is a monthly dispatch of Geospatial insights for India’s strategic affairs. Subscribe now!
Cyberpolitik: A Policy Window For Premature AI Regulation?
— Zoe Philip
Earlier last week, MeitY announced that any Generative AI model, tool, or AI program under development that has not yet been thoroughly tested must undergo a permit process and receive government approval before being made available online. This makes any new firm that wants to build AI tools or release new technologies to the public subject to state approval. Leave it to X (formerly Twitter) to tear the advisory apart almost as soon as it was out, calling it the death knell of India’s innovation future and the beginning of the AI Licence Raj.
The Union Minister later clarified that the advisory was not aimed at start-ups and had only been sent to a few social media intermediaries, hoping this would soothe the distress. But is there a burning need to clamp down on AI tools now? As with most things in life, the answer comes down to timing.
John Kingdon is a staple of traditional policy studies courses for introducing the Multiple Streams Framework in 1984, a way of understanding where policy intervention is most likely to occur.
Three streams come together to open what is known as a 'policy window'. First, the issue or problem must be well understood by the government, experts, and the broader community. Second, a feasible solution must be available. Third, political will must exist and be forthcoming when the time is right.
When the three streams converge, a policy intervention is at its most effective. The regulation is most likely to be accepted by a broader group of stakeholders, a cascading or bandwagon effect usually follows, and implementation becomes easier because more people are on your side.
So, what is missing from this advisory? The announcement comes at a time when the nascent Indian AI ecosystem has just begun to blossom. If every sixth AI researcher or developer is an Indian and the mid-market is 72% bullish on growth solely because of AI innovation, it is clear that this is a space that excites a new wave of tech entrepreneurs and inspires confidence in its future. And while the advisory sets the tone for what the ministry hopes will be full-fledged regulation, government officials simply do not yet have enough information about the technology.
New India-specific AI models and applications are continually tested as they are deployed; the industry, if anything, has the will to see this transformation through. Crushing that spirit now by signalling that an extraordinarily burdensome and ill-informed compliance regime is on its way will disrupt this growth. The mere perception of such an intervention would render ineffective the various incentives the government offers players in this ecosystem.
Ex-ante regulation is not necessarily a bad thing. The key to effective ex-ante regulation is a consultative “ask us (the stakeholders) before doing something” approach, which should be easy enough for a government that has been marketing itself as business-friendly. Issuing it as an apparent reaction to a GenAI tool’s response to a prompt, one that may or may not reflect the political views of the prompter, is premature; the intent seems to be to hold someone accountable as quickly as possible without understanding how the underlying technology functions.
But when governments take the time to really understand the problem, people do, too. Kingdon would argue that political will is essential and flows directly from taking that time to understand the problem better. Waiting can look like weakness when the heat of political pressure bears down, and in an election year, that is a no-go.
At the end of the day, you want solutions that stick and an innovation ecosystem that does not have to battle the state constantly.
Zoe Philip is a second-year MPP student at the National Law School of India University and an intern with the Takshashila Institution’s High-Tech Geopolitics Programme.
What We're Reading (or Listening to)
[Takshashila Discussion Document] Geo-consumerism and India-China Competition: A Comparative Assessment of Consumption Data, by Amit Kumar
[Opinion] India gains semiconductor momentum, but the policy mix can be even better, by Amit Kumar and Satya S. Sahu
[Opinion] What binds the Quad, by Bharat Sharma