#116 Fusion Frenzy: The Startup Revolution in Commercializing Nuclear Energy
In this edition of Technopolitik, Sridhar Krishna writes on private-sector investments in nuclear energy. Kumar Vaibhav follows with a piece on Meta’s AI content labelling. Finally, in this week’s curated section, Lokendra Sharma talks about cyberattacks. This newsletter is curated by Adya Madhavan.
Technopolitik: Private Innovation in Fusion Energy
— Sridhar Krishna
There has been an unprecedented surge in private investment in fusion energy, with companies attracting over $7 billion globally in the past five years. This influx of capital has led to the emergence of numerous startups developing diverse technologies and reactor designs, all aiming to connect fusion power plants to the grid in the coming decades. The unique technologies and designs being pursued by these companies highlight the dynamism and potential of the fusion energy sector.
Commonwealth Fusion Systems (CFS):
CFS focuses on magnetic confinement fusion using a compact tokamak design called "ARC".
Their key innovation lies in leveraging high-temperature superconductor (HTS) magnets. These advanced magnets allow them to create smaller, more efficient reactors, reducing capital costs compared to larger, traditional tokamaks.
CFS aims to demonstrate net energy production from its SPARC fusion system by 2025 and build a small fusion power plant based on the ARC tokamak design.
They are also partnering with MIT to utilize their new HTS magnet technology.
Tokamak Energy:
Tokamak Energy is also pursuing magnetic confinement fusion using a tokamak approach.
They recently introduced groundbreaking cryogenic power electronics technology for their superconducting magnets. This innovation reduces cooling costs by 50% and enhances reactor efficiency, making fusion more viable as a sustainable energy source.
First Light Fusion:
First Light Fusion is an inertial fusion company that originated from the University of Oxford.
They employ mechanical compression using a hyper-velocity gas gun and an electromagnetic propulsion device called "Machine 3". This approach aims to achieve net energy gain faster and more cheaply than other methods.
Focused Energy:
Focused Energy specializes in inertial fusion energy (IFE) using high-power laser beams to drive fusion reactions.
They are the only developer using an inertial confinement concept among the eight fusion companies selected by the Department of Energy for the Milestone-Based Fusion Development Program.
Marvel Fusion:
Marvel Fusion utilizes a direct-drive laser fusion approach, directly targeting the fuel capsule with lasers, unlike the indirect approach used by the National Ignition Facility at Lawrence Livermore National Laboratory.
They are still in early stages, focusing on computer simulations and modeling to refine their approach.
Avalanche Energy:
Avalanche Energy is developing a fusion microreactor called "The Orbitron" using a magnetized target fusion (MTF) approach.
They compress deuterium-tritium plasma fuel using magnetic fields generated by pulsed-power systems in their compact, cylindrical reactor design. Their approach aims to achieve net energy gain while overcoming limitations of earlier MTF concepts.
Type One Energy:
Type One Energy develops stellarator fusion power plants, building on decades of stellarator research together with recent advances in high-temperature superconducting magnets and manufacturing.
They aim for steady-state operation, avoiding the plasma disruptions that challenge pulsed tokamak designs.
HB11 Energy:
HB11 Energy focuses on a unique approach using plentiful hydrogen and boron-11 fuel, with precise laser application to initiate fusion reactions.
Their method eliminates the need for rare, radioactive fuels like tritium and the requirement for extremely high temperatures.
nT-Tao:
nT-Tao, an Israeli firm, is developing a compact, scalable nuclear fusion system using proprietary ultra-fast heating technology for high-density plasma.
Their technology is expected to accelerate commercialization timelines.
Helical Fusion:
Helical Fusion, a Japanese company, distinguishes itself by using a helical magnetic confinement approach, offering advantages in stability, containment, and efficiency compared to traditional tokamak designs.
Renaissance Fusion:
Renaissance Fusion, located in France, concentrates on stellarators, a type of fusion reactor with a more complex but inherently stable magnetic field configuration.
They aim to overcome the traditional challenges associated with stellarator design, such as the intricate geometry of the magnetic coils. Stellarators offer advantages such as higher wall-plug efficiency and the ability to operate continuously.
Realta Fusion:
Realta Fusion is developing compact magnetic mirror fusion generators, aiming for the lowest capital expenditure and least complex path to commercially competitive fusion energy.
They leverage advancements in superconducting materials, plasma physics, and computing power to optimize a simple linear fusion reactor configuration.
Lawrenceville Plasma Physics (LPP):
LPP uses a unique approach called "Focus Fusion," employing a dense plasma focus device to create fusion reactions.
Their approach differs from the conventional tokamak method and offers a smaller, less expensive option for commercial fusion energy production.
Proxima Fusion:
Proxima Fusion focuses on developing power plants using QI stellarators, which create magnetic cages for high-energy matter.
Their optimized, quasi-isodynamic stellarators offer intrinsic stability and potential for continuous operation. They are designed to eliminate persistent electric currents in the confined plasma.
Gauss Fusion:
Gauss Fusion utilizes high-field magnetic confinement fusion technology, which it considers the fastest approach to fusion energy production.
NK Labs (Acceleron Fusion):
NK Labs is developing muon-catalyzed fusion, a process that can operate at much lower temperatures than traditional approaches.
They aim to improve the efficiency of muon production to enable cost-effective muon-catalyzed fusion. Their first fusion device, the Active Target Muon Source (ATMS), seeks to optimize muon production efficiency.
Blue Laser Fusion:
Blue Laser Fusion employs a novel high-power laser technology to achieve a high repetition rate and power for clean energy generation.
They plan to use an aneutronic reaction with a mixture of high-gain solid fuel target materials for sustainable and environmentally friendly operations.
Crossfield Fusion:
Crossfield Fusion develops a compact fusion reactor using a patented technology called the "Epicyclotron".
Their current focus is on applying this technology in the fusion fuel cycle.
Electric Fusion Systems, Inc.:
Electric Fusion Systems is pioneering a new approach using proton-lithium Rydberg matter fusion fuel with a pulsed electrical stimulation breakthrough.
This combination aims to achieve more efficient and affordable fusion power generation. They are developing a compact and portable fusion power generator.
NearStar Fusion:
NearStar Fusion takes a simplified, modular approach to fusion energy, focusing on a pulsed fusion method to avoid complexities associated with steady-state plasma.
Their design prioritizes rapid development and mass production of fusion drivers, fuel capsules, and reaction chambers.
Which of these companies do you think will get to the finish line first?
Technopolitik: Meta’s New Policy: A Label Too Little, A Fix Too Late?
— Kumar Vaibhav
As we open the Facebook app, it might be hard to escape the vast pool of AI-generated content created by users. This AI content, whether harmful or benign, often contains strong imagery designed to evoke intense emotions among viewers. Generative AI, due to its user-friendly nature and ease of access, can easily be exploited to spread misinformation by anyone. This misuse can have unimaginable consequences, especially during critical situations such as elections, political unrest, pandemics like COVID-19, and other similar events.
Given the long-overdue need to regulate AI content, and to address the Oversight Board's criticism of its narrow approach to flagging manipulated media, Meta has updated its policies across Facebook, Instagram, and Threads. The new policy includes the following measures:
1. "AI info" labels to identify AI-generated or edited content using industry indicators and self-disclosures.
2. High-risk content will feature prominent labels to inform users of potential deception.
3. Content flagged as false by fact-checkers will be demoted and labelled with warnings, while content violating policies on voter interference, harassment, or violence will be removed, regardless of AI involvement.
4. Ads with AI alterations require disclosure and will be rejected if they contain debunked information.
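The four measures above amount to a tiered enforcement pipeline. The sketch below expresses that logic as code purely for illustration; the function and field names are invented, and Meta's actual enforcement system is not public.

```python
# Hypothetical sketch of Meta's labelling/demotion rules as described above.
# All names here (moderate, the post fields) are invented for illustration;
# Meta's real enforcement pipeline is not public.

def moderate(post: dict) -> list[str]:
    actions = []
    if post.get("violates_policy"):       # voter interference, harassment, violence
        return ["remove"]                 # removed regardless of AI involvement
    if post.get("ai_generated"):          # industry indicators or self-disclosure
        actions.append("label:AI info")
    if post.get("high_risk"):             # high risk of deceiving the public
        actions.append("label:prominent")
    if post.get("fact_checked_false"):    # flagged false by fact-checkers
        actions += ["demote", "label:warning"]
    return actions or ["allow"]

print(moderate({"ai_generated": True}))                           # ['label:AI info']
print(moderate({"violates_policy": True, "ai_generated": True}))  # ['remove']
```

Note how removal short-circuits everything else, mirroring the policy's statement that violating content is removed "regardless of AI involvement".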
Nonetheless, experts remain skeptical about the effectiveness of such labelling initiatives. “There’s a lot that would hinge on how this is communicated by platforms to users,” said Gili Vidan, an assistant professor of information science at Cornell University. “What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?”
Empirical studies on warning labels suggest that such interventions can reduce belief in, and engagement with, misleading content. However, gaps persist in Meta’s capacity to detect and label AI-generated material accurately.
The weakness of Meta's detection was exposed ahead of India's election. Despite Meta's promise to prioritise the detection and removal of violative AI-generated content, acknowledging concerns about AI being used to spread misinformation, its systems failed to identify or label several provocative advertisements as AI-generated. Because the platform's detection capabilities are limited to major commercial AI tools, Meta's labelling system could create a "false sense of security" for users, leaving content from lesser-known or custom tools unchecked.
Beyond its limited capacity to detect harmful AI-generated content, Meta's decision to discontinue CrowdTangle, a reliable and accessible transparency tool on its platforms, and replace it with the Content Library, access to which is limited to journalists and civil society organizations, has raised concerns about checks on misinformation on the platform.
Concerns still persist about the descriptive nature of the tag and whether it effectively conveys the intent behind the label. Without a clear explanation of the tag’s purpose, users may not fully understand its significance. The Oversight Board has criticized Meta’s approach, describing it as incoherent, lacking persuasive justification, and overly focused on how content is created rather than on the specific harms it aims to prevent. Consequently, it remains unclear whether the new policy on AI labelling effectively addresses these criticisms.
Given the visible vulnerabilities in Meta’s AI labelling system, targeted amendments could significantly improve both its detection of harmful AI content and the clarity and purposefulness of its labels. Meta could also leverage its users to moderate AI-generated content, similar to the "Community Notes" feature on the social media platform X. In that system, users sign up as contributors to add contextual notes to posts, which remain visible to everyone, unlike labels that may not be fully visible. These notes appear only when rated as helpful by a diverse group of contributors, ensuring a balanced and unbiased approach. Adopting a similar model could help Meta surface harmful AI-generated content more effectively.
Although Meta claims that AI-generated content constitutes less than 1% of misinformation, there is a pressing need for a robust and effective regulatory framework to address it on its platform. This necessity arises from the growing influence of generative AI, which, despite being described as in its nascent stages, is rapidly solidifying its role as a medium for information dissemination and a channel for creative expression. The harmful impacts of AI should not be downplayed since the prospect of a future where AI is exploited to propagate widespread misinformation is not difficult to imagine.
There are no saints in the cyber world
— Lokendra Sharma
The telecom sector is in the news again, but this time not because of the net neutrality debate or the tussle between over-the-top players and telcos over revenue sharing. It is because the US, in addition to being anxious or excited about Trump 2.0, has been reeling from what has been described as the biggest telecom cyberattack in US history.
It first emerged in September 2024 that ‘Salt Typhoon’, a malicious group/campaign linked to China, had intruded into US internet service providers’ (ISP) networks. But this is probably not the name the group allegedly behind the cyberattack calls itself. Microsoft has developed a taxonomy for uniformly naming threat actors, and these names are then picked up and amplified by information production and dissemination channels in the US. Thus, for China it is Typhoon, for Iran it is Sandstorm, and for North Korea it is Sleet. On distinguishing actors linked to the same state, Microsoft adds: ‘Threat actors within the same weather family are given an adjective to distinguish actor groups with distinct tactics, techniques, and procedures (TTPs), infrastructure, objectives, or other identified patterns.’ This is how the name Salt Typhoon emerged, the third China-linked group identified within a span of months. Previously, Flax Typhoon was accused of being behind cyberattacks on organisations in Taiwan and the US, and Volt Typhoon of targeting critical infrastructure in the US.
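The naming convention can be pictured as a simple lookup: weather family by origin, plus a distinguishing adjective per group. The sketch below is limited to the three families mentioned above; Microsoft's full taxonomy covers more origins and motivation categories.

```python
# Illustrative sketch of Microsoft's weather-themed threat-actor taxonomy.
# Only the families mentioned in the text are included; the real taxonomy
# is broader (more origins, plus categories for motivation).
FAMILY_BY_ORIGIN = {
    "China": "Typhoon",
    "Iran": "Sandstorm",
    "North Korea": "Sleet",
}

def actor_name(adjective: str, origin: str) -> str:
    """Combine a group's distinguishing adjective with its origin's weather family."""
    family = FAMILY_BY_ORIGIN.get(origin)
    if family is None:
        raise KeyError(f"no weather family recorded for {origin!r}")
    return f"{adjective} {family}"

# The three China-linked groups named in the text:
print(actor_name("Salt", "China"))   # Salt Typhoon
print(actor_name("Flax", "China"))   # Flax Typhoon
print(actor_name("Volt", "China"))   # Volt Typhoon
```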
But the full extent of the Salt Typhoon cyberattack only became clear (publicly) in December 2024. As Rishi Iyengar of Foreign Policy described it succinctly: ‘The hackers infiltrated at least eight major U.S. telecommunication networks, including AT&T and Verizon, targeting the cellphones of several government officials and politicians, including President-elect Donald Trump and Vice President-elect J.D. Vance.’ If the extent and the attribution are accurate, this serves as some vindication for all those who advocated barring the Chinese giants Huawei and ZTE from supplying equipment for 5G expansion in the US, India and elsewhere.
If headlines from major news publishers in the English-speaking world are to be believed, China comes out as the usual suspect, causing unnecessary cyber mischief in the West. But how do we square this with the Snowden revelations, which exposed how globally pervasive the capabilities of the US (and, by extension, the Five Eyes) are for conducting cyber espionage and more? Or with the US’s aggressive strategy of forward defense in cyberspace?
Harry Oppenheimer, in his 2024 paper for the prestigious Journal of Peace Research, dwells on the biases that prevail in our understanding of the cyber domain:
Oppenheimer, H. (2024). How the process of discovering cyberattacks biases our understanding of cybersecurity. Journal of Peace Research, 61(1), 28-43. https://doi.org/10.1177/00223433231217687 (open access)
Oppenheimer’s paper goes to the very core of how researchers discover publicly known cyberattacks and draw inferences from them. He identifies the main issue as visibility bias: some cyberattacks get amplified (such as those that are easier to observe or that the attackers themselves claim), while others remain invisible (such as those that are very difficult to detect because the attacker used sophisticated methods). If China is increasingly characterised as the leading player in cyberwarfare while the US is portrayed as a responder or a catching-up player, it is because of this visibility bias. As Oppenheimer notes: ‘If United States’ methods heavily target confidentiality and avoid availability, it would be harder to observe their actions, and it would take longer for cases to surface.’
By employing survival models on a dataset of cyberattacks, Oppenheimer finds that the delay between a cyberattack and its disclosure depends on who is behind the attack (a state or a non-state actor) as well as on its technical characteristics, namely whether it affects information confidentiality, integrity, or availability. His primary argument is ‘that under-reported attacks most likely involve state actors and affect information confidentiality or integrity.’
So the next time you read about a cyberattack mounted by China, Iran or North Korea on the helpless and restrained West, do take it with a big pinch of salt. There never were, and are not likely to be, any saints in the cyber domain.