#112 AI meets ICU: Revolutionising Critical Care
Today, in her debut piece for Technopolitik, Bhaskari Janardhan writes on developments in artificial intelligence in healthcare in India. Lokendra Sharma follows in this week’s curated section, with a piece on the United States’ tryst with drones.
Technology has become important not just in our everyday lives but also an arena for contestation among major powers, including India. The Takshashila Institution has designed the 'Technopolitik: A Technology Geopolitics Survey' to understand and assess what people think about how India should navigate high-tech geopolitics. We are sure you are going to love the questions! Please take this 5-minute survey at the following link: https://bit.ly/technopolitik_survey
Technopolitik: Dozing peacefully with Dozee and Cloudphysicians
— Bhaskari Janardhan
Round-the-clock human supervision is a strenuous task. Doctors and nurses are the primary decision-makers in medical care, but clinical decision support systems (CDSS) are a valuable complement to the medical fraternity in patient care, and AI is increasingly being leveraged to provide them. In intensive care, a multitude of data is collected around the clock; AI can continuously monitor and analyse this incoming stream to screen for possible adverse events, saving crucial time when decisions about life-saving interventions must be made. The events that most commonly contribute to mortality in an ICU are sepsis, congestive heart failure, severe community-acquired pneumonia, and stroke.
Predictive vs Actionable AI
AI in ICUs revolves around two aspects: predictive AI and actionable AI. Predictive AI focuses on foretelling outcomes such as sepsis, cardiac events, and mortality. Actionable AI focuses on causal inference: it estimates patient outcomes under different treatment decisions, compares them, and advises the option with the best expected result. Thus, while predictive AI alerts clinicians to the potential of an adverse event, actionable AI proposes the best plan forward for addressing it.
Although causal inference is conventionally derived from randomised controlled trials, adjusting for confounding biases (common causes) and selection biases (common effects) is a complex task. Hence, training is often conducted on observational data using reinforcement learning models. ICU treatments typically require a sequence of treatment decisions, and patients frequently present with multiple conditions that need multiple, sequential treatment measures. This adds a unique intricacy to the modelling and makes causal inference from observational data challenging, demanding domain expertise from several fields. Current efforts include leveraging observational data to mimic the conditions of randomised trials and thereby infer causation, benefiting from both prediction and causality.
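The distinction between the two aspects can be sketched in a few lines of Python. Everything below is hypothetical: the risk weights, treatment options, and effect estimates are invented purely for illustration and bear no relation to any real clinical model.

```python
def predict_sepsis_risk(vitals: dict) -> float:
    """Predictive AI: score the likelihood of an adverse event.
    The weights below are invented for illustration, not trained."""
    score = (
        0.01 * max(vitals["heart_rate"] - 90, 0)
        + 0.02 * max(vitals["resp_rate"] - 20, 0)
        + 0.03 * max(vitals["temp_c"] - 38.0, 0)
    )
    return min(score, 1.0)

def estimate_outcome(vitals: dict, treatment: str) -> float:
    """Stand-in causal model: expected recovery probability under a
    given treatment. Real systems would learn this from observational
    data with confounding adjustment; these numbers are hypothetical."""
    base = 1.0 - predict_sepsis_risk(vitals)
    effect = {"antibiotics": 0.15, "fluids": 0.10, "observe": 0.0}
    return min(base + effect[treatment], 1.0)

def recommend(vitals: dict) -> str:
    """Actionable AI: compare estimated outcomes under each candidate
    treatment and advise the one with the best expected result."""
    options = ["antibiotics", "fluids", "observe"]
    return max(options, key=lambda t: estimate_outcome(vitals, t))

patient = {"heart_rate": 118, "resp_rate": 26, "temp_c": 38.9}
risk = predict_sepsis_risk(patient)  # predictive: flags the danger
plan = recommend(patient)            # actionable: suggests a next step
```

The predictive half only raises an alert; the actionable half runs the same patient through each counterfactual treatment and ranks the results, which is the essence of the causal-inference approach described above.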
AI in critical care
Some important under-trial applications of AI in ICUs are for:
Optimising mechanical ventilation by monitoring flow rate and airway pressure through ventilation sensors
Predicting sepsis a few critical hours in advance
Early prediction of end-stage renal disease
Predicting cardiovascular events in advance
Assisting clinicians with emergency trauma care
Predicting early signs of mortality in admitted ICU patients
Making custom decisions regarding ICU patient’s optimal nutritional status and requirements through variable data integration
Identifying the risk of delirium in admitted patients and managing it proactively

Other AI applications that indirectly impact ICU workflow include managing and allocating resources and maintaining electronic medical records. Many of these AI/ML applications are moving beyond predictions to providing actionable insights, which will be the norm in a few years.
AI critical care pioneers in India
Cloudphysician and Turtle Shell Technologies are two entrepreneurial ventures that have successfully launched AI products for use in critical care in India and abroad. Cloudphysician has raised $14.5 million in total for its venture. The company manages about 2,400 ICU beds across 230 hospitals in 100 cities in India. Its platform RADAR is termed a “co-pilot” for clinicians in critical care: it automates vitals entry, streamlines patient documentation and lab-report entry, generates discharge summaries, provides actionable insights, and enables one-touch protocol ordering. The platform also fosters remote collaboration through secure chats, high-definition video feeds, video conferencing, patient monitoring over mobile phones, and more. RADAR optimises workflows and keeps track of any critical changes across shifts or protocols. The company has declared that its services have helped client hospitals achieve up to a 40% reduction in ICU mortality rates.
While RADAR was launched by intensivists themselves, Dozee was launched by a mechanical engineer with a penchant for biology. Dozee, the flagship AI product of Turtle Shell Technologies, is a contactless remote patient monitoring system. It is a smart bed for wards based on proprietary ballistocardiography, which detects the mechanical vibrations produced by every micro and macro body movement to measure cardiac function, pulmonary function, and more. The product is reported to be 98% accurate in remote vital monitoring and serves as an effective early warning system. Dozee is used in over 280 hospitals in India and gained traction during COVID-19, when the strain on the medical system was extreme.
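For a rough intuition of how a ballistocardiography-based monitor might recover heart rate from bed vibrations, here is a toy Python sketch: it builds a synthetic 72-beats-per-minute vibration signal and recovers its periodicity by autocorrelation. This is illustrative only; real BCG pipelines (including Dozee's proprietary one) involve far more signal processing.

```python
import math

def estimate_heart_rate(signal: list, fs: int) -> float:
    """Estimate the dominant periodicity of a vibration signal via
    autocorrelation -- a toy stand-in for how a ballistocardiography
    sensor might recover heart rate from micro body movements."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    # Search only lags corresponding to plausible rates (40-180 bpm).
    best_lag, best_corr = None, float("-inf")
    for lag in range(int(fs * 60 / 180), int(fs * 60 / 40) + 1):
        corr = sum(x[i] * x[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return 60.0 * fs / best_lag  # lag (samples) -> beats per minute

fs = 100  # samples per second
# Synthetic "heartbeat" vibration at 1.2 Hz, i.e. 72 bpm
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(10 * fs)]
bpm = estimate_heart_rate(sig, fs)  # recovers roughly 72 bpm
```

The autocorrelation peaks at the lag matching the signal's period, so the strongest repeating vibration (here, the simulated heartbeat) determines the estimated rate.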
Dozee has now launched Shravan, sensor-equipped mats designed specifically for monitoring elderly parents remotely. This allows children to track the vitals of their elderly parents from wherever they live and take pre-emptive steps when alerted to any situation. This multi-parameter vital monitoring system has gained FDA clearance, has raised over $24 million in funding overall, and continues to gain support from investors.
Exploring ethics and responsibility in AI deployment
Although the convergence of predictive and causal inference holds great value for the future of ICU management, we will benefit most from the guarded deployment of this transformative healthcare mechanism. This is an evolved, special-purpose AI, unlike a general-purpose AI, and it comes with a conundrum of ethics, accountability, and regulatory opacity. At this juncture, since deployment is still in its infancy, clinicians cross-check and make decisions based on the recommendations such AI proposes. A few years from now, when AI in ICUs perhaps becomes the norm, the question arises of who is responsible in case of harm to a patient. In a dire situation, must the doctor only choose what the AI suggests? If the doctor decides otherwise and harms the patient, is the doctor accountable? Or does the doctor have decisive functional rights to exercise? If the doctor chooses to trust the AI and follow its suggestion, and the patient collapses, is the AI deployer or developer responsible? Or is it the natural turn of events for a patient in that predicament?
There is one fundamental understanding in the medical domain: AI does not trump human cognition but complements it in decision-making. A few basic tenets apply: ensuring CDSSs are transparent, disclosing the decision-making systems (data and sources), preventing biases introduced through skewed data sets, rectifying data-capture anomalies, and making patients aware of their rights, choices and privacy. There is much left to chart in regulating this ever-morphing tech space. What is needed is a flexible, dynamic regulatory model that can be modified iteratively to keep pace with the technology as it evolves, without stifling innovation, technological support to the medical fraternity, or, most importantly, intensive care for those in need.
Silent Sentinels: The US’ tryst with drones
— Lokendra Sharma
The East Coast of the US is abuzz with concern about unexplained drone sightings. Since mid-November, New Jersey and other states, such as New York and Pennsylvania, have seen bright lights in the night sky. These are widely believed to be unmanned aerial vehicles (UAVs), uncrewed aerial systems (UAS), or simply drones; while the three terms have some minor differences, they are generally used interchangeably. State and federal authorities continue to investigate the sightings. Smaller, lighter drones flying at low altitude are very difficult for traditional radars to detect, and enforcing a regulatory regime for drones is equally difficult.
Drones have become a concern not just for the residents of New Jersey but also for policymakers in Washington, Beijing, Moscow, New Delhi and elsewhere. All powers — hegemons, challengers and emerging — are integrating drones into their war-fighting strategies. China is not just the largest manufacturer of commercial drones in the world but also a serious challenger to the US when it comes to military drones. The updated Feihong FH-97A drone launched by China has opened a new front with the US.
But China did not fire the first metaphorical shot. The first major player to heavily use drones for military operations was the US itself. Did this lead to the proliferation of drones globally? If yes, what are the implications for the US grand strategy?
Francis N. Okpaleke has answers to the above questions in the eighth chapter of his book:
Okpaleke, F. N. (2023). The Implications of Drone Proliferation for US Grand Strategy. In Drones and US Grand Strategy in the Contemporary World (pp. 211-239). Springer. https://link.springer.com/chapter/10.1007/978-3-031-47730-0_8
Answering the first question, Okpaleke argues that it was the US that demonstrated to the world the utility of drones on the battlefield as it waged the war on terror in West Asia. Referring to ‘America’s use of drones post-9/11,’ Okpaleke notes that this usage ‘inadvertently demonstrated the potential transformative utility and military significance of their weaponry for modern warfare.’ While the US and Israel dominated the drone sector till 2010, China has been catching up and even challenging US dominance for more than a decade now. Okpaleke goes so far as to assert that China has an edge over the US in global drone proliferation, citing the fewer end-user restrictions China imposes, compared to the US, as the reason.
Okpaleke’s primary argument is that ‘the continued spread of drones, particularly among world powers and non-state actors, triggers potential adverse implications for US strategic objectives’ globally as well as nationally. The proliferation of drones to non-state actors, in addition to the weaponisation of AI, only compounds the problem. Going by Okpaleke’s analysis, the US might, over the last two decades, have helped create a proliferation challenge that looks poised to hurt US interests in the long term.
What We're Reading (or Listening to)
[Podcast] Engaging with Sino-India Disengagement, by Amit Kumar and Anushka Saxena
[Opinion] AI and data centres: Misplaced focus in the energy demand, by Rakshith Shetty
[Opinion] Why India needs a techno-strategic doctrine, by Pranay Kotasthane