In this edition of Technopolitik, Arindam Goswami writes about AI sovereignty and the role of the state, surveying the approaches different countries are taking. Adya Madhavan follows with a piece on Musk's Grok AI and its use of content on X as a source, set in the context of the wider free speech vs. regulations debate.
This newsletter is curated by Adya Madhavan.
Technomachy: National AI Labs: The Race for State-Backed Innovation Dominance
— Arindam Goswami
Nations are increasingly recognising that leaving artificial intelligence (AI) development solely to private companies could threaten their technological sovereignty. This is especially true considering the concentration of AI resources in the hands of a few big technology companies. Across the world, governments are, therefore, establishing and funding national AI laboratories, creating a new landscape of state-backed innovation that's reshaping the global technology order, and fueling further debates on the role of governments in research and development (R&D).
There is a discernible shift from the private sector-led AI development that characterised the past decade. Companies like Google, OpenAI, and Anthropic continue to invest heavily in AI development, but governments are now increasingly asserting their role in ensuring their countries maintain a competitive advantage in this global race.
Different Countries, Different Approaches
China is perhaps the most striking example of state-directed AI research. The Beijing Academy of Artificial Intelligence (BAAI) and the Institute of Automation, Chinese Academy of Sciences (CASIA) serve as central hubs in China's AI strategy. Although BAAI is formally a non-profit research laboratory, both institutions benefit from substantial government funding and work in close coordination with universities and private companies. The Chinese model demonstrates how national labs can bridge fundamental research and practical applications while advancing specific national interests.
In contrast, the United States has adopted a more distributed approach. Rather than creating a single national AI lab, it has strengthened existing institutions like DARPA and established new initiatives through the National Science Foundation's (NSF) National AI Research Institutes program, launched in 2020 and now numbering 25 institutes. These networks of specialised labs emphasise partnerships between government agencies, universities, and private companies. The approach ties in well with America's traditional preference for decentralised innovation under strategic oversight of critical technologies, as evidenced by the DARPA and NASA models of innovation.
The European Union has taken yet another path, emphasising cross-border collaboration while supporting national initiatives. France's National Institute for Research in Digital Science and Technology (INRIA) and the German Research Center for Artificial Intelligence (DFKI) exemplify how European nations are building domestic capabilities while participating in broader EU-wide programs. This dual-track approach aims to maintain European competitiveness while promoting shared values in AI development. Even the creation of Current AI, with its $400 million investment in public-interest AI development, and the formation of a new environmental sustainability coalition to address AI's environmental impact are consistent with the EU's collaborative approach.
Meanwhile, smaller but technologically advanced nations are crafting distinctive approaches to remain competitive, leveraging their own strengths. Israel's AI research program builds on its strong military-technical ecosystem and startup culture. South Korea's National AI Research Hub combines government resources with the technical capabilities of its major technology companies. These examples show how medium-sized powers can develop specialised niches in the global AI landscape.
India has taken an ambitious approach by establishing a network of AI research institutions. The core of this network is AIRAWAT (AI Research, Analytics and knoWledge Assimilation plaTform), a national AI computing infrastructure. The country has also created AI-specific centres of excellence through its premier technical institutions (IITs) and research organisations like C-DAC (Centre for Development of Advanced Computing). The Digital India initiative supports these efforts through programs like the National Program on AI and the IndiaAI Mission. What makes India's approach distinctive is its focus on developing AI solutions for domestic challenges in healthcare, agriculture, and education, while simultaneously building capabilities in advanced AI research. The establishment of the Artificial Intelligence and Robotics Technology Park (ARTPARK) in Bangalore, which receives substantial government funding, underlines India's commitment to creating institutional AI research capabilities.
The competition for talent has become a crucial dimension of this state-backed AI race. National labs can serve as instruments to stem "brain drain" and instead cultivate domestic expertise and brain circulation, but doing so requires competitive salaries, research freedom, and computing resources to attract and retain top AI researchers. The intervention can be considered effective only when scientists begin choosing national labs over prestigious private-sector positions, particularly when projects align with public-interest goals.
The Flip Side
However, there is a flip side to all of this. The rise of national AI labs will create new problems for international scientific collaboration, with national security concerns chief among them. Amid today's heightened geopolitical tensions, some countries will implement stricter controls on research sharing and joint projects. In government laboratories, these considerations will always take precedence, sometimes for domestic political display rather than genuine intent. The delicate balance between open science and national interests has therefore become more complex, particularly in areas like advanced AI architectures and applications. The good news is that, at least at the domestic level, there is a growing understanding of the need for a governmental push to build the ecosystem, especially if it is to be useful for smaller players.
Computing infrastructure has emerged as a critical differentiator in this landscape. National labs require substantial computing resources to conduct cutting-edge AI research, leading to government investments in supercomputing facilities and specialised AI hardware. This is not without its costs: it creates new dependencies and vulnerabilities, with countries competing for limited supplies of advanced semiconductors and specialised AI accelerators. These are areas of technological sovereignty that will have to be worked out through collaborative mechanisms. When the same clamour for resources comes from the private sector, questions of technological sovereignty do not sit at the centre of the discussion, because profit considerations do not necessarily intersect with them. Government-backed laboratories and institutions, by contrast, will have to weigh this trade-off and devise strategies to work around these limitations.
Geopolitical Tool
The true impact of these national AI labs will extend beyond pure research. Their technical capabilities will increasingly be brought to bear on setting technical standards, influencing regulatory frameworks, and shaping AI governance, creating an interplay between technical capability and soft power. As a geopolitical tool, they will be indispensable.
Looking ahead, the proliferation of national AI labs suggests a future where AI development becomes more closely tied to national strategic interests. Is that necessarily good or bad? As always, the truth is in nuance. Yes, this could lead to increased technological fragmentation, with different regions developing distinct AI ecosystems aligned with their values and interests. Yes, it could also lead to redundant, wasteful, overlapping efforts unless there is proper coordination within the ecosystem. However, it might also create opportunities for meaningful international collaboration on shared challenges, from climate change to healthcare, built on top of the robust research ecosystems in each nation.
For policymakers and technology leaders, understanding this evolving landscape is crucial. The success of national AI labs will likely depend on their ability to balance competing demands: developing domestic capabilities while participating in global innovation networks. The age-old debates around the role of public and private investment in R&D cannot be cast in stone; they will need to be adapted to ground realities.
Technopolitik: Grok and the Free Speech vs. Regulations Debate
— Adya Madhavan
Globally, there is a shift in attitudes towards freedom of speech and what it means for expression on the internet, with President Trump and Elon Musk at the forefront of this wave. Recent executive orders reflect the same, with governments and companies prioritising free speech over regulatory safeguards in a bid to reduce government 'overreach'. Most recently, Elon Musk announced that Grok 3, the latest version of xAI's Grok, will be the most powerful large language model (LLM) yet, surpassing ChatGPT's most advanced functions. A unique feature of Grok is that it has access to interactions between users on X and can tap into them as sources when generating responses.
Integrating the platform with Grok is an interesting choice because of the nature of interactions on social media platforms. Informal interactions on these forums tend to be largely free of moderation and fact-checking and reflect the personal sentiments of users. Recently, Meta announced its decision to stop using fact-checking organisations and shifted to the mechanism X uses, 'community notes', in which other users interact with content and add notes, flagging posts as factually inaccurate or adding context to them. While X runs an algorithm designed to weed out notes from users who repeatedly bombard the platform with inputs irrelevant to the discourse, the accuracy of notes is limited, as multiple studies have shown. Overall, users report mixed results with notes: one often finds a post on X trailed by a series of unrelated notes, though one study claims that notes helped with accuracy and content moderation.
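The note-scoring mechanism is worth dwelling on, because it is one of the few parts of this pipeline that is public. X has open-sourced its Community Notes scoring algorithm, which rates notes by matrix factorisation: each rating is modelled as a global intercept plus a rater intercept, a note intercept, and a "viewpoint" factor term, with note intercepts regularised more heavily than factors so that agreement explainable by shared ideology is absorbed by the factor rather than the note's score. A note is surfaced as helpful only if its intercept clears a threshold, i.e. only if raters who usually disagree both rate it highly. The sketch below is a simplified, illustrative reconstruction of that idea; the toy ratings, learning rate, and iteration count are invented for the example, and the production system layers many more safeguards on top.

```python
# Simplified sketch of the "bridging" idea behind X's Community Notes scoring.
# Toy data and hyperparameters are invented; the open-source model is richer.
import numpy as np

rng = np.random.default_rng(0)

# ratings[u, n] = 1 (helpful), 0 (not helpful), nan (user u never rated note n).
# Users 0-2 lean one way, users 3-5 the other; note 0 is a consensus note,
# notes 1 and 2 are rated along partisan lines, note 3 is mixed.
ratings = np.array([
    [1.0, 1.0, 0.0, np.nan],
    [1.0, np.nan, 0.0, 1.0],
    [1.0, 1.0, np.nan, 0.0],
    [1.0, 0.0, 1.0, np.nan],
    [np.nan, 0.0, 1.0, 1.0],
    [1.0, 0.0, np.nan, 0.0],
])
n_users, n_notes = ratings.shape
dim = 1  # one-dimensional "viewpoint" factor, as in the production model

mu = 0.0
user_b, note_b = np.zeros(n_users), np.zeros(n_notes)
user_f = rng.normal(0, 0.1, (n_users, dim))
note_f = rng.normal(0, 0.1, (n_notes, dim))

observed = [(u, n) for u in range(n_users) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

lr, reg = 0.02, 0.03
for _ in range(4000):  # full-batch gradient descent on the squared error
    grad_mu = 0.0
    g_ub, g_nb = np.zeros(n_users), np.zeros(n_notes)
    g_uf, g_nf = np.zeros_like(user_f), np.zeros_like(note_f)
    for u, n in observed:
        err = ratings[u, n] - (mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n])
        grad_mu += err
        g_ub[u] += err
        g_nb[n] += err
        g_uf[u] += err * note_f[n]
        g_nf[n] += err * user_f[u]
    # Intercepts are regularised five times harder than factors (mirroring the
    # published model), so partisan agreement flows into the factor term and
    # only cross-viewpoint agreement lifts a note's intercept.
    mu += lr * (grad_mu - 5 * reg * mu)
    user_b += lr * (g_ub - 5 * reg * user_b)
    note_b += lr * (g_nb - 5 * reg * note_b)
    user_f += lr * (g_uf - reg * user_f)
    note_f += lr * (g_nf - reg * note_f)

# A note is shown as helpful only if its intercept clears a threshold (0.40 in
# the production system). The consensus note should score highest here, while
# the partisan notes stay low despite enthusiastic ratings from one camp.
for n in np.argsort(-note_b):
    status = "rated helpful" if note_b[n] > 0.4 else "not rated helpful"
    print(f"note {n}: intercept {note_b[n]:+.2f} -> {status}")
```

The design choice the sketch illustrates is the crux of the accuracy debate above: the algorithm rewards agreement across viewpoints rather than volume of agreement, which filters out some partisan pile-ons but also means contested-yet-true notes can languish as "not rated helpful".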
Coming back to Grok, integrating it with X is a double-edged sword. On the one hand, it allows Grok's responses to reflect public sentiment and bring diverse perspectives to the table. On the other, it carries ramifications that may be unforeseen. X allows instant sharing of text, video, and images, letting people broadcast whatever they feel in the moment, which makes the platform a vehicle for the proliferation of misinformation, since posts are often widely circulated without verification. Additionally, the reduced moderation reflected in Grok's responses could lead to the incorporation of polarising or extreme posts. While Grok has safeguards in place to try and prevent this, when the context is more nuanced, AI may struggle to identify inappropriate, inflammatory, or simply inaccurate content.
For a country like India, which is diverse in terms of religion, culture, and language, social media platforms provide an avenue for dialogue that often features conflicting opinions. Most issues in the country draw groups with polar-opposite perspectives, and the discourse on online platforms tends to be heated; trolling and the perpetuation of hateful sentiment are not uncommon. People increasingly use LLMs as search engines, and ChatGPT has even integrated a search option that lets it query the web when responding to prompts. Thus far, Grok is the only LLM that draws on a social media platform as a source. Using posts to shape answers could affect the tone of Grok's responses and influence what is taken to be true, as the sketch below illustrates. Even when results include posts from verified sources, media houses and individuals alike tend to adopt a certain tone on X, which can permeate Grok's responses if the LLM's safeguards are not robust enough.
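Since xAI has not published Grok's retrieval pipeline, the following is a generic, hypothetical sketch of how an LLM product might pull social posts into a prompt, and where a verification step would sit. Every name and trust signal here (the Post fields, author_verified, note_flagged, the toy relevance function) is invented for illustration; it is not Grok's actual architecture.

```python
# Hypothetical sketch: retrieval-augmented answering over social posts.
# NOT Grok's real pipeline; it only shows why a missing verification step
# lets unvetted posts flow straight into the model's context.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    author_verified: bool  # stand-in for whatever trust signal a system uses
    note_flagged: bool     # e.g. disputed by a community note

POSTS = [
    Post("Official advisory: heavy rain expected in Mumbai tomorrow.", True, False),
    Post("Heard the rain in Mumbai is caused by cloud seeding gone wrong!!", False, True),
    Post("Mumbai rain update: local trains running with delays.", False, False),
]

def relevance(query: str, post: Post) -> int:
    # Toy relevance: overlapping words (a real system would use embeddings).
    return len(set(query.lower().split()) & set(post.text.lower().split()))

def retrieve(query: str, k: int = 2, verify: bool = True) -> list[Post]:
    candidates = sorted(POSTS, key=lambda p: relevance(query, p), reverse=True)
    if verify:
        # The safeguard under discussion: drop posts disputed by community notes.
        candidates = [p for p in candidates if not p.note_flagged]
    return candidates[:k]

def build_prompt(query: str, posts: list[Post]) -> str:
    context = "\n".join(f"- {p.text}" for p in posts)
    return f"Answer using these posts:\n{context}\n\nQuestion: {query}"

query = "what is happening with rain in Mumbai"
print(build_prompt(query, retrieve(query, verify=False)))  # misinformation leaks in
print()
print(build_prompt(query, retrieve(query, verify=True)))   # flagged post filtered out
```

The point of the toy is structural: whether a false post reaches the model depends entirely on an opaque filtering step between retrieval and prompt construction, which is precisely the part users currently cannot inspect.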
Two cents on the free speech vs. regulations debate: it doesn't have to be either-or. The Indian Constitution guarantees freedom of speech under Article 19(1)(a), with the caveat that it is subject to 'reasonable restrictions' on grounds such as security and decency. While this can lead to government overreach and a stifling of dissent, there are also guidelines in place for content moderation by social media platforms to prevent misinformation and hate speech. When it comes to LLMs tapping into social media as a source, for India this seems like a disaster waiting to happen unless there is transparency about the guidelines and safeguards built into the model that prevent biased narratives and false information from permeating its responses.
In terms of the broader discussion about transparency and accountability, it is hard to know whether the particular X posts Grok references are selected according to Grok's own guidelines or government content rules imposed on the platform. Amid concerns about data privacy, LLMs, and bias in AI, greater transparency around the guidelines of both governments and corporations is imperative going forward. It would equip users to understand what factors are at play and to better understand the tools they are working with, instead of second-guessing responses produced by opaque algorithms such as Grok's.