#125 Thorium-Based Nuclear Power: The Future of Energy?
In this edition of Technopolitik, Anwesha Sen discusses the potential of thorium-based nuclear reactors. Adya Madhavan follows with a piece on Lethal Autonomous Weapons Systems.
This newsletter is curated by Adya Madhavan.
Technopolitik: Is Thorium the Nuclear Key to a Sustainable Tech Future?
— Anwesha Sen
Thorium-based nuclear power: the future of energy?
In a historic development, China has reportedly become the first country to bring a thorium nuclear reactor fully online, marking a major milestone in clean energy innovation. The reactor promises increased safety, greater efficiency, and reduced waste, all critical advantages in an era of accelerating energy demand from data-intensive sectors like artificial intelligence (AI). Moreover, according to the World Nuclear Association, thorium offers a more accessible and less weaponisable alternative to uranium.
Meanwhile, in India, the Maharashtra state government has signed a memorandum of understanding (MoU) with Russia's state-owned nuclear agency, ROSATOM, to co-develop a thorium-based Small Modular Reactor (SMR). This is a notable step, not just towards India's energy independence, but also in its strategic push into advanced nuclear technologies. Thorium is far more abundant in India than uranium, and SMRs offer scalable, low-carbon power that can be deployed even in remote regions.
What connects these developments is more than just nuclear innovation. It’s the race to power the AI ecosystem sustainably.
AI, especially the training and deployment of large-scale models, is becoming one of the most energy-hungry parts of the tech ecosystem. Data centres now consume more electricity than some small countries, and with the explosion of generative AI applications, energy demand is only set to rise. Clean, reliable, and scalable energy sources are now integral to sustaining AI growth without worsening the climate crisis.
By leveraging thorium and collaborating with Russia, a long-time nuclear partner, India signals a strategic pivot towards clean-tech self-reliance, with implications for the localisation of AI infrastructure and reduced dependence on fossil fuels.
For policymakers, the message is clear: energy strategy is tech policy. As AI becomes more intertwined with national competitiveness, the nations that can fuel AI sustainably will shape its future direction. These moves by China and India may represent the beginning of a broader realignment, where nuclear energy and AI development become mutually reinforcing levers in global power competition.
Technopolitik: The Definition Dilemma
— Adya Madhavan
There's considerable discussion around 'killer robots' each time a nation further incorporates autonomous capabilities into its military forces. Yet the lack of agreement on what these technologies fundamentally are hinders discussions on how they can be governed.
Lethal Autonomous Weapons Systems (LAWS), often popularly referred to as killer robots, represent a potentially transformative shift in military technology. If deployed, they could reshape the dynamics of conflict. However, the conspicuous absence of a universally agreed-upon definition of LAWS impedes the progression of dialogue concerning their implications and governance.
Conversations on LAWS are not new. The UN Convention on Certain Conventional Weapons (CCW) has served as one key forum for these exchanges since 2014, and numerous countries have voiced growing concerns. Nevertheless, a globally accepted definition remains elusive. This definitional challenge significantly obstructs the establishment of international regulatory frameworks and any negotiations aimed at prohibiting LAWS. While smaller coalitions, such as the one spearheaded by Austria, advocate for a legally binding international instrument to restrict the development and use of LAWS, progress beyond initial discussions has been limited.
Several widely cited definitions share common elements, emphasising autonomy and the absence of direct human intervention. For instance, the International Committee of the Red Cross defines LAWS as "any weapon system with autonomy in its critical functions, capable of selecting (i.e., searching for, detecting, identifying, tracking, selecting) and attacking (i.e., using force against, neutralising, damaging, or destroying) targets without human intervention."
The difficulty in reaching consensus on a definition arises from states prioritising their own interests. Technologically advanced nations with the potential to develop LAWS tend to favour definitions with a higher threshold, leaving room for the continued development of such systems. Conversely, states lacking this capacity, or strongly opposing LAWS on humanitarian grounds, advocate for more restrictive definitions. However, as some states participating in the CCW have argued, a precise definition may not be a prerequisite for advancing discussions on governance. The current lack of a definition also complicates advocacy for specific stances on LAWS, as the ambiguity surrounding their nature leaves room for varied interpretations. Consequently, this hinders conversations about establishing the human oversight needed in any potential use of LAWS.
This definitional challenge is not unprecedented; many technologies have proven difficult to define initially. In the cases of anti-personnel land mines and biological weapons, the initial focus was on prohibition, driven by a general understanding of their potential for devastating consequences, even before a formal definition was established. While definitional clarity can aid policymakers in specifying what is prohibited or regulated, absolute precision may not be essential for achieving normative prohibition. Although the absence of such clarity might allow states to exploit loopholes in international frameworks and humanitarian law, the conversation should not be indefinitely delayed. A strong normative consensus can still play a crucial role in preventing the use of technologies in ways that cause severe harm.
For India, a nation in the early stages of developing autonomous technologies for military applications, defining and approaching LAWS presents a complex challenge with significant implications for its future military modernisation. On one hand, it aligns with India's interests to constrain the actions of China, which remains a persistent security concern. On the other hand, given India's regional security challenges, maintaining the option to develop some form of autonomous weapons in the future, even as a deterrent, could be seen as advantageous.
Why should India clearly articulate its position on LAWS? Because waiting for a universally agreed-upon definition risks paralysing essential discussions, and prolonged inaction increases the likelihood that LAWS will be deployed in conflict. With both China and Pakistan at various stages of developing autonomous technologies, India is at a critical juncture in influencing the trajectory of this discourse. India's past experience in international negotiations on cybersecurity governance and nuclear non-proliferation can be valuable in shaping the conversation about LAWS and ensuring strategic stability.
China has already developed an autonomous artificial intelligence commander for war games and has indicated its intention to integrate AI with its Unmanned Combat Aerial Vehicles (UCAVs), such as the sophisticated Wing Loong 9. As China progresses in this domain, India is understandably focused on its own military advancement. However, it is also crucial for India to actively participate in shaping the discourse surrounding LAWS.
Consider a hypothetical scenario in which China targets critical Indian security infrastructure using a fleet of UCAVs and subsequently targets Indian military leadership with LAWS. In such a situation, India would undoubtedly benefit from having previously established a clear position on LAWS.

Currently, the level of autonomy typically associated with lethal autonomous weapons is so high that any system involving a degree of human oversight is generally excluded from the category. Yet a system could identify a target through image recognition, use autopilot technology for engagement, and deploy a missile with only minimal human involvement. Israel's 'Lavender' system, employed for target identification, reportedly involved human oversight, but investigations revealed that this intervention was largely symbolic, with an algorithm conducting the indiscriminate identification of Palestinian targets. Such technology, combined with autopilot software and an autonomous navigation system, could effectively perform all the functions included in most definitions of LAWS while maintaining only a superficial level of human control.
In anticipation of such developments, countries must clarify their own definitions of 'lethal' and 'autonomy' in the context of lethal autonomous weapons, even in the absence of a globally accepted definition. Conversations about arms control for LAWS can't wait until a consensus is reached.