#74 A Pathway To AI In Governance
Augmenting State Capacity with AI; I Think, Therefore Am I? Examining Brain Chip Implants Through an Ethical Lens
Today, Bharath Reddy provides a sneak peek into an upcoming Takshashila Discussion Document centred around leveraging AI to augment state capacity.
Rohan Pai follows with an incisive examination of brain chip implants in the wake of Neuralink’s recent human trials.
Also,
We are hiring! If you are passionate about working on emerging areas of contention at the intersection of technology and international relations, check out the Staff Research Analyst position with Takshashila’s High-Tech Geopolitics programme here. For internship applications, reach out to satya@takshashila.org.in.
CyberPolitik: Augmenting State Capacity with AI
— Bharath Reddy
**This previews an upcoming discussion document about identifying areas for augmenting state capacity with artificial intelligence.**
Complex Problems for a State
Vijay Kelkar and Ajay Shah propose that states face their toughest challenges when dealing with processes characterised by a high number of transactions, the necessity for discretion, significant stakes for individuals, and a high level of secrecy.
A high number of transactions implies a greater administrative burden, necessitating more capacity to manage the workload effectively. Increased discretion often leads to ambiguity in accountability and creates opportunities for state agents to engage in rent-seeking behaviour. When individuals have much at stake, they are more likely to invest considerable time and resources to influence outcomes in their favour. Lastly, more secrecy within processes weakens feedback mechanisms, limiting opportunities for constructive criticism.
For example, the criminal justice system, the judiciary, tax administration, and financial regulation rank high on all these dimensions, presenting formidable challenges for state governance.
AI adoption could reduce the complexity of such challenges on dimensions such as transaction volume and discretion, making it easier to overcome state capacity limitations and deliver better governance and public services.
Frameworks for Augmenting State Capacity with AI
Framework #1
Analysing government activities based on transaction volume and discretion offers a useful framework for pinpointing areas that stand to gain the most from integrating AI. Those characterised by high transaction volumes and low discretion, like traffic management, emerge as prime candidates for AI adoption to enhance outcomes significantly. For instance, AI could facilitate tasks such as traffic modelling, traffic light control, detection of traffic violations, and even predictive maintenance. While these tasks are ripe for AI integration, they require substantial investment in infrastructure.
This framework applies more broadly to the integration of information technology solutions, of which AI can be seen as a sub-field. While conventional IT solutions enable data analysis and decision-making through explicitly programmed rules, AI systems learn rules from data using statistical techniques. AI systems can therefore be prone to errors arising from limitations in their algorithms or training data. By restricting AI systems to tasks that require less discretion, the risks associated with their deployment can be minimised. For instance, elections, vaccination drives and the Public Distribution System (PDS) are areas where adopting conventional IT solutions has improved efficiency and effectiveness.
Table 1: Mapping government activities on transaction volume and discretion
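The 2x2 mapping described above can be expressed as a minimal sketch. The activity names, labels, and quadrant assignments below are illustrative assumptions for this newsletter, not drawn from the discussion document itself:

```python
# Illustrative sketch of Framework #1: place a government activity on a
# 2x2 grid of transaction volume vs. discretion, and suggest a rough
# AI-suitability label for each quadrant.

def quadrant(volume: str, discretion: str) -> str:
    """Return an AI-suitability label given 'high'/'low' ratings."""
    if volume == "high" and discretion == "low":
        return "prime candidate for AI adoption"
    if volume == "high" and discretion == "high":
        return "decompose into tasks first (see Framework #2)"
    if volume == "low" and discretion == "low":
        return "conventional IT may suffice"
    return "high risk; keep humans in the loop"

# Hypothetical placements, for illustration only
activities = {
    "traffic management": ("high", "low"),
    "sentencing decisions": ("low", "high"),
    "tax administration": ("high", "high"),
}

for name, (vol, disc) in activities.items():
    print(f"{name}: {quadrant(vol, disc)}")
```

The value of the grid is less in the labels than in forcing an explicit judgement about both dimensions before proposing AI for an activity.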
Framework #2
Another framework for identifying areas suitable for AI integration is to break down an activity into its constituent tasks and identify those tasks with a high volume of transactions and low discretion. Even within activities requiring significant discretion, like the judiciary, there are numerous low-discretion tasks that can benefit from AI integration. The recent State of the Judiciary report released by the Supreme Court of India has highlighted several tasks that align with this framework. These tasks include translating judgments/orders into various languages, case management, AI-supported roster management, dictating orders and judgments, natural language processing and judicial knowledge management.
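The task-decomposition idea can likewise be sketched in a few lines. The task names and the 1-to-5 scores below are hypothetical, chosen only to illustrate the filtering step:

```python
# Illustrative sketch of Framework #2: break an activity into tasks,
# score each on transaction volume and discretion, and flag the
# high-volume, low-discretion tasks as candidates for AI integration.

from typing import List, NamedTuple

class Task(NamedTuple):
    name: str
    volume: int      # 1 (low) to 5 (high) transaction volume
    discretion: int  # 1 (low) to 5 (high) discretion required

def ai_candidates(tasks: List[Task],
                  min_volume: int = 4,
                  max_discretion: int = 2) -> List[str]:
    """Return tasks worth considering for AI: high volume, low discretion."""
    return [t.name for t in tasks
            if t.volume >= min_volume and t.discretion <= max_discretion]

# Hypothetical decomposition of judicial work, for illustration only
judiciary = [
    Task("translating judgments and orders", 5, 1),
    Task("roster management", 4, 2),
    Task("sentencing", 3, 5),
]

print(ai_candidates(judiciary))  # sentencing is excluded
```

Even a crude scoring exercise like this makes the boundary explicit: translation and roster management pass the filter, while sentencing, however frequent, does not.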
In contrast, tasks requiring greater discretion have higher risks. Algorithms for predicting recidivism are often utilised to assess a criminal defendant's likelihood of reoffending. These predictions play a role in pretrial, parole, and sentencing decisions. Research indicates that such algorithms are no more accurate than predictions made by individuals lacking judicial expertise. Likewise, predictive and preventive policing systems, like the one recently deployed in Bengaluru, have demonstrated risks. These include the perpetuation of existing biases, the potential for self-fulfilling prophecies arising from proactive policing in neighbourhoods labelled as "high-risk," and the erosion of due process due to excessive dependence on these imperfect systems.
Clearly, there are several essential prerequisites for adopting AI - including having access to data, software and hardware, a robust communication infrastructure, compliance with data protection principles, and implementing risk management frameworks. Some of these are enabling conditions, but some also help mitigate the risks of AI adoption.
What are your thoughts on this approach? Do you believe it's beneficial to conceptualise AI adoption in this way? Do let us know.
India is set to host the Quad Leaders' Summit in 2024. Subscribe to Takshashila's Quad Bulletin, a fortnightly newsletter that tracks the Quad's activities through the Indo-Pacific.
Your weekly dose of All Things China, with an upcoming particular focus on Chinese discourses on defence, foreign policy, tech, and India, awaits you in the Eye on China newsletter!
The Takshashila Geospatial Bulletin is a monthly dispatch of Geospatial insights for India’s strategic affairs. Subscribe now!
Cyberpolitik: I Think, Therefore Am I? - Examining Brain Chip Implants Through an Ethical Lens
— Rohan Pai
As January drew to a close, the users of X were greeted with a startling announcement by none other than the platform’s new owner Elon Musk. In a post, or formerly a ‘tweet’, the billionaire tech mogul revealed that his brainchild Neuralink had successfully implanted its brain chip prototype in a human being for the first time. Not only did the procedure go off without a hitch, but Musk also claimed that the patient in question could manoeuvre a computer mouse through their thoughts alone.
Though often assumed to belong to science fiction, such technology has in fact been around since the last century. In 1996, the neurologist Phil Kennedy, sometimes dubbed 'The Father of the Cyborgs', began implanting his patented neurotrophic electrode in locked-in patients suffering extreme paralysis. Within a few short years, Kennedy's test subjects could spell out their own names by dragging a cursor with their thoughts and selecting individual letters.
It is certainly curious that Musk’s company is conducting human trials for the same set of cerebral functions more than two decades later. One possible explanation lies in the withdrawal of FDA approval for the use of neurotrophic electrodes after Kennedy was unable to address certain safety concerns. The implantation of electrodes, after all, is a highly invasive procedure that requires drilling holes in a patient’s skull, so it should come as no surprise that authorised human experimentation of the same has been and continues to be vastly limited.
Undeterred by these constraints, however, Musk has pushed for the invasive approach because it taps into a significantly greater number of neurons, thus increasing the bandwidth of data transfer from brain to machine. To this end, Neuralink employs neural probes made up of ultra-fine polymer threads, each carrying several electrodes. These probes are implanted in widely separated regions of the brain to maximise the surface area of direct contact between neurons and electrodes.
As if there wasn’t enough mechanisation already, Neuralink has gone the extra mile to develop a neurosurgical robot that performs the implantation surgery with an efficiency and accuracy unmatched by humans. Using a needle-pincher apparatus, the robot makes insertions into brain tissue at a speed of close to 200 electrodes per minute.
Musk’s decision to go down the invasive route is possibly a misguided one given that Synchron, one of Neuralink’s prime competitors, uses stent technology that requires merely a small incision into a blood vessel. Although Musk believes such a minimally invasive approach is inferior because it provides limited access to neurons, Synchron has already empowered a number of paralysed patients to surf the web and send texts to their loved ones.
Albeit still in its infancy, brain chip technology has raised eyebrows in legal circles. What helps a quadriplegic express their thoughts today may translate into a gross violation of human rights tomorrow. Such existential fears of machines controlling our very thoughts may appear irrational to the average Joe, but they are very much grounded in reality.
Laboratory mice, for example, have already been tested with invasive brain-computer interfaces (BCIs) that manually stimulate feelings of hunger. Moreover, these BCIs have the capacity to induce anxiety in mice through the implantation of false memories.
Nightmarish scenarios like these are made possible through the continuous mining of data from the brain which, unsurprisingly enough, goes hand in hand with Musk’s vision of merging humans with AI. And with the accumulation of data comes the natural next step of commercialisation. Reminiscent of surveillance capitalism, or targeted advertising based on the past online behaviour of a consumer, neurotechnology may eventually encroach on our most private thoughts for the purpose of profit maximisation.
In such a tense social climate, it seems incredibly prudent on the part of Chile to formally introduce into law the notion of 'neuro-rights'. This constitutional amendment, which received a unanimous vote of approval from the Senate of Chile in 2021, outlines ground-breaking legislation that aims to safeguard a range of rights, from mental privacy to psychological integrity.
Other countries, such as those in the OAS, have also expressed a keen interest in taking after the Chilean model, but neurotechnological legislation has its fair share of obstacles. Consent, for instance, while generally an adequate marker for ensuring human rights, may become unreliable within its current legal definition given the possibilities of thought manipulation.
In the Indian context, any discussion on the subject of neuro rights would be incomplete without addressing brain fingerprinting. Alternatively known as Brain Electrical Oscillation Signature Profiling (BEOSP), brain fingerprinting is a non-invasive procedure that measures the electroencephalographic brain responses of a patient via a Bluetooth headset fitted with electrodes.
Indian law enforcement has been an outlier on the international stage for using brain fingerprinting as an interrogation technique on multiple occasions. Once the headset is worn, criminal suspects are shown specific images and informed of details pertaining to the crime to track whether their brains exhibit signs of recognition. It’s critical to note, though, that the admissibility of such findings as evidence in court is by no means assured and continues to spark heated debate.
A comprehensive law on neuro rights may be the country’s need of the hour, although some argue that existing legal frameworks are sufficient. For example, the Supreme Court’s landmark Right to Privacy verdict in 2017 explicitly stated that the thoughts and behavioural patterns of an individual were “entitled to a zone of privacy”. But seeing that the CBI conducted brain fingerprinting on those accused in the Hathras rape case a mere three years later, it’s evident that the right to privacy of one’s thoughts remains an illusion.
As with deepfake technology, Musk's foray into the arena of brain chips is another double-edged sword that may bring about certain benefits at the expense of some future unknown. When uncertainties are at play, which is ultimately a given with the advent of new technology, transparency and safety are paramount. Musk's insistence on an invasive implantation procedure, however, coupled with allegations of needless animal cruelty to speed up research, casts doubt on whether Neuralink will truly have a net positive societal impact.
What We're Reading (or Listening to)
[Opinion] States and tax shares: The fight for fiscal space, M Govinda Rao
[Video] The Geopolitics of Semiconductors, ft. Pranay Kotasthane
[Times of India Blog] Why resource distribution is creating a North-South divide, by Pranay Kotasthane and Sarthak Pradhan