#115 Decoding a Quantum Future: Lessons from the Willow Announcement
In this edition of Technopolitik, Rijesh Panicker writes on Google’s recent announcement of its Willow quantum computer. Ashwin Prasad follows with a piece on Axiom Space. Finally, in this week’s curated section, Lokendra Sharma writes a timely piece on India’s new data protection rules. This newsletter is curated by Adya Madhavan.
Technopolitik: From Qubits to Quantum Futures
— Rijesh Panicker
Nvidia CEO Jensen Huang’s comments at the recent CES analyst day, that very useful quantum computers are at least 15-30 years away, wiped nearly $8 billion off the market capitalisation of quantum computing firms like D-Wave, IonQ and Rigetti Computing. While D-Wave Quantum CEO Alan Baratz later pushed back on Huang’s comments, arguing that D-Wave has commercial solutions on the market today, he did agree that gate-based quantum computers could be decades away. This matters because D-Wave’s approach to quantum computing, which uses a method known as quantum annealing, is not regarded as a universal computer and is believed to be useful only for solving a specific set of optimisation problems.
So, how far are we from useful universal quantum computers? Google’s recent announcement of its Willow quantum computer gives us an interesting look at the frontier of research in this area. These new experiments suggest a robust path to creating more stable, longer-lived logical qubits, allowing us to scale up despite lower-fidelity physical qubits.
Physical qubits are created using different methods, with superconducting qubits and trapped ions among the more popular. Qubits are highly susceptible to interference from the environment, and as a result most physical qubits today exhibit error rates in the 0.1-1% range. A logical qubit is encoded across a set of underlying physical qubits, with the information spread among them through quantum entanglement. By applying quantum error correction algorithms to these physical qubits, we can achieve lower error rates in the logical qubit, allowing longer calculations to take place.
Google’s recent results with Willow demonstrate precisely this. Using a specific type of quantum error correction code called a ‘surface code’, where a grid of d x d physical qubits is used to represent one logical qubit, Google demonstrated the ability to reduce the logical error rate as the number of physical qubits scales up. Specifically, on a 105-qubit chip (Willow), they showed that as larger surface codes (d=3 to d=7) are used, the logical error rate drops exponentially, falling below 10⁻³ at d=7, with only a polynomial increase in the number of physical qubits (2d² − 1).
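To make the scaling concrete, here is a small illustrative calculation in Python (toy numbers, not Willow’s published figures): the 2d² − 1 physical-qubit count at each code distance, alongside a commonly used suppression model in which the logical error rate falls by some factor Λ every time the distance increases by two.

```python
# Illustrative surface-code scaling (toy numbers, not Willow's published data).
# A distance-d surface code uses 2*d**2 - 1 physical qubits per logical qubit.
# Toy suppression model: the logical error rate drops by a factor "lam" (Lambda)
# every time the code distance d increases by 2.

def physical_qubits(d: int) -> int:
    """Data plus measure qubits in one distance-d surface-code patch."""
    return 2 * d * d - 1

def logical_error_rate(d: int, base_rate: float = 3e-3, base_d: int = 3,
                       lam: float = 2.0) -> float:
    """Toy model: error rate at distance d, given base_rate at distance base_d."""
    return base_rate / lam ** ((d - base_d) / 2)

for d in (3, 5, 7):
    print(f"d={d}: {physical_qubits(d):>3} physical qubits per logical qubit, "
          f"~{logical_error_rate(d):.1e} logical error rate (toy model)")
```

As long as Λ stays comfortably above 1, the error rate shrinks exponentially in d while the qubit count grows only quadratically, which is the essence of Google’s result.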
In addition, Google was able to demonstrate real-time error correction on the logical qubit when using a distance-5 (d=5) surface code, as well as a reduction in correlated errors, such as leakage errors and errors caused by cosmic rays, through an allied set of error correction codes called ‘repetition codes’.
Despite the progress, however, this is just a small step on the long road to fully fault-tolerant quantum computers (FTQC). For one, error correction has so far been implemented only for a subset of the quantum gates (operations). Until we can implement error correction for non-Clifford gates such as the T-gate (or the Toffoli gate), the exponential speedups we expect from quantum computing will be unavailable at a large scale.
Second, practical applications of quantum computing will need error rates in the 10⁻⁶ to 10⁻¹⁰ range. This will require roughly 1,500 and 4,000 physical qubits per logical qubit respectively, corresponding to large surface codes (d=27 and d=39), even before considering what is needed to provide error correction for T-gates. Google’s own roadmap suggests scaling their chips to 10,000 physical qubits (IBM’s largest chip today has just over 1,000 qubits, and the company has since shifted its focus away from simply scaling qubit counts).
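For reference, here is the bare 2d² − 1 count at these larger distances. Treat it as a lower bound; the exact per-logical-qubit overhead depends on the chip layout and architecture, so published estimates vary.

```python
# Bare surface-code qubit counts at the larger distances mentioned above.
# Treat these as lower bounds; layout and architectural overhead push
# practical per-logical-qubit estimates higher.
for d in (27, 39):
    print(f"d={d}: at least {2 * d * d - 1:,} physical qubits per logical qubit")
# d=27: at least 1,457 physical qubits per logical qubit
# d=39: at least 3,041 physical qubits per logical qubit
```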
What this means is that each individual chip will at best support 2-3 logical qubits, which brings us to the challenge of interconnects. Transferring quantum data between chips, using technologies such as microwave or optical photonic links, will need to be reliable and exhibit minimal losses so that logical qubits can span chips. We are still in the early stages of readiness for such technology.
To get a sense of how far we are from the most talked-about use case for quantum computing, namely decryption: to break a 2048-bit RSA key using Shor’s algorithm, it is estimated that we would need about 4,000 logical qubits and an error rate of 10⁻¹². In other words, we would need close to 16 million physical qubits, or roughly 160,000 Willow-sized chips.
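That figure follows from simple arithmetic under the assumptions in this piece (about 4,000 logical qubits, each needing roughly 4,000 physical qubits, packed onto 105-qubit chips); these inputs are rough estimates, not measured requirements.

```python
# Back-of-the-envelope RSA-2048 estimate using the figures cited in the text.
logical_qubits = 4_000        # assumed logical qubits for Shor on a 2048-bit key
physical_per_logical = 4_000  # assumed overhead at the required ~10^-12 error rate
qubits_per_chip = 105         # Willow's physical qubit count

total_physical = logical_qubits * physical_per_logical
chips = total_physical / qubits_per_chip
print(f"~{total_physical:,} physical qubits, i.e. ~{chips:,.0f} Willow-sized chips")
# ~16,000,000 physical qubits, i.e. ~152,381 Willow-sized chips (roughly 160K)
```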
As an aside, the reported benchmark performance, which claims the ability to solve in a couple of minutes a problem that would take a classical computer 10²⁵ years, should be taken with a pinch of salt. The benchmark used (Random Circuit Sampling) has no practical application, so it does not indicate any ability to solve real-life problems.
To end on a positive note: despite all of the challenges noted above, Google’s demonstration is an important step forward in showcasing logical qubits, alongside similar demonstrations from Microsoft/Quantinuum and others. In addition, alternative approaches to surface-code-based quantum error correction are being researched by players such as IBM. It will be interesting to see what progress we make on the various engineering challenges that underlie quantum computing, and how they shape our ability to build larger, more practical systems within the next decade.
Antarikshmatters: Axiom Meets ISRO
— Ashwin Prasad
We have all heard of Axiom Space. It is a private company building space infrastructure, and it will send Shubhanshu Shukla, the Indian astronaut, to the International Space Station (ISS) under a deal signed between the Indian Space Research Organisation (ISRO) and Axiom Space last year.
The company made headlines again a couple of months back for considering rockets from ISRO and Indian private companies for its space station project. They’re working on what might be the first-ever commercial space station—Axiom Station—slated to launch when the ISS retires in 2030.
At first glance, I felt this signalled the start of a new, promised era of a global commercial space economy, with private companies teaming up across borders to build things together. But looking closer, this is about government partnerships more than anything else. If you read ISRO's press release, you will see that it frames the agreement as part of a bigger joint ISRO-NASA effort, and introduces Axiom as the NASA-identified space provider.
I was intrigued by the reports of Axiom exploring the use of Indian rockets. These rockets don't really offer many advantages over American ones, except perhaps helping Axiom avoid putting all its eggs in one basket for space transport services. Ultimately, this collaboration too seems to be driven by the US and Indian governments pushing for closer space partnerships. Overall, it looks like the governments are getting along great, but we're still waiting to see businesses naturally working together across borders.
New rules for navigating India’s privacy maze
— Lokendra Sharma
For the entirety of 2024, little happened on the data protection front. But 2025 began with a bang: on 3 January, the Ministry of Electronics and Information Technology (MeitY) released the draft Digital Personal Data Protection Rules, 2025, about 15 months after the Digital Personal Data Protection Act, 2023 (DPDP Act) was enacted. The draft, released in both Hindi and English, is open for public comments till 18 February, following which MeitY will publish a non-attributive summary based on the comments received.
The rules have caused some consternation in the Indian legal and research communities. The Internet Freedom Foundation has called the rules ‘too little, too vague and too late’; Anwesha Sen, an assistant programme manager at the Takshashila Institution, has characterised it as a ‘missed opportunity’; Jhalak M. Kakkar and Shashank Mohan from the Centre for Communication Governance have highlighted the ‘vague, incomplete, and rushed’ nature of the rules despite the long drafting period.
While most of the opinion pieces have analysed different components of the rules, such as those pertaining to obligations of significant data fiduciaries, exemptions, consent, reasonable security safeguards and children’s data, no analysis of India’s privacy maze is complete without some reference to (or comparison with) the EU’s approach to data protection. The General Data Protection Regulation of 2016 firmly established the EU as a norm entrepreneur in the privacy/data governance universe. Contrasting the rules with what the EU does, Vivan Sharan and Srishti Joshi from the Koan Advisory Group argue that ‘India’s measured approach thus far offers a refreshing alternative to Europe’s interventionist policies.’
However, one grouping against which India is rarely compared and contrasted is BRICS. What actions have BRICS members taken over the years on data protection? Is there some convergence? If so, does it help? The following paper stands out in answering these questions:
Luca Belli, Danilo Doneda, Data protection in the BRICS countries: legal interoperability through innovative practices and convergence, International Data Privacy Law, Volume 13, Issue 1, February 2023, Pages 1–24, https://doi.org/10.1093/idpl/ipac019 (currently freely accessible)
Belli and Doneda’s paper emerges from CyberBRICS, a collaborative project that sets out to map the cybersecurity and data protection landscape in the BRICS states. From the Brazilian General Data Protection Law (adopted in 2018), China’s Personal Information Protection Law (adopted in 2021) and the Supreme Court of India’s judgement recognising privacy as a fundamental right (2017), to South Africa’s Protection of Personal Information Act, 2013 (which fully entered into force in 2021) and Russia’s data localisation endeavours in 2019/2020, Belli and Doneda discuss what BRICS states are doing in the data protection realm. Their assessment includes a thorough comparative analysis of BRICS states on international data-sharing requirements as well.
Belli and Doneda find that notwithstanding certain unique aspects of each state’s data protection approach, ‘it is evident that BRICS perspectives over personal data protection largely align, and their frameworks are already compatible, even in the absence of a formal agreement.’ The authors see BRICS’ approach to data protection emerging as a model for other states, particularly in the Global South, to follow. One interesting observation that emerges from their paper is that Brazil, Russia and South Africa are more aligned with the European approach, while India and China ‘have decided to pursue their own regulatory models’.
What We're Reading (or Listening to)
[Opinion] DPDP Rules and a missed opportunity by Anwesha Sen
[Opinion] Soaring over traffic, by Avinash Shet
[Opinion] India’s journey so far on the AI military bandwagon, by Adya Madhavan
[Podcast] AI-Led Predictive Policing: Safety or Illusion? by Anwesha Sen and Disha Verma