Colchester’s airborne medics supported a pioneering US-UK research programme using AI to support decision-making by medics on the battlefield. 16 Medical Regiment took part in the ‘In The Moment’ project led by the U.S. Defense Advanced Research Projects Agency (DARPA), in collaboration with the UK’s Defence Science and Technology Laboratory (Dstl).
The project aims to understand how AI can be used to support military medics responding to mass casualty incidents, and how they decide who to treat first and what care to give.
Scientists from the UK collaborated with the US to test and explore what it would take for medics to delegate high-stakes decisions to AI on the battlefield.
Experts from Dstl are collaborating with DARPA by leveraging hardware and methodologies developed under DARPA’s In the Moment (ITM) fundamental research program.
DARPA’s ‘In the Moment’ (ITM) research program investigates whether the alignment of AI to individual humans affects their willingness to delegate decisions to AI in high-stakes domains. Alignment means encoding an AI with a human’s preferences and priorities. AI systems do not naturally align with humans, nor are there established methods to quantify an individual human’s decision-making, which raises the question of how AI can be aligned to humans at all. ITM aims to answer that question and to develop technologies that enable such alignment.
Using tools and methods developed under the DARPA ITM program, the trials in the UK explored the extent to which people are more likely to delegate to someone, or something, that shares their key decision-making attributes and priorities, and whether AI can be “aligned” to an individual’s key decision-making attributes. The outcomes of the trials, which took place in October 2025 at Merville Barracks in Colchester and Brize Norton in Oxfordshire, are expected to help answer big questions about AI and trust, and how understanding these issues can save lives.
Increased confidence in delegating could see larger groups of casualties triaged and treated more quickly, with the decision-making principles of an experienced medic guiding practitioners, thereby saving lives.
Dstl Human Factor Specialist Suzy said: “We ran a trial that we have been working on with our American colleagues at DARPA and we’re looking at human-AI teaming in a medical triage setting.
“In the future we’re expecting a lot more information to be coming into the warfighter.
“We’re really interested in how the warfighter makes decisions based on some of this information and how potentially AI systems can help with that.”
The trial investigated what factors may affect decision-making in a medical triage scenario when there is no “correct” answer. These factors include merit focus (i.e. whether a medic would treat an injured attacker or an injured victim first), potential quality of life, quantity of life, and affiliation focus (i.e. whether a medic would prioritise someone from a similar military background for treatment, all injuries being comparable).
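To make the idea of “alignment” concrete: one simple way to think about it is as the similarity between two decision-makers’ weightings over attributes like those above. The sketch below is purely illustrative and is not taken from the ITM programme; the attribute names, weights, and the use of cosine similarity are all assumptions for demonstration.

```python
# Illustrative sketch only: quantifying "alignment" as the similarity between
# two decision-makers' attribute weightings. Attribute names and the scoring
# method are hypothetical, not the ITM programme's actual measures.
from math import sqrt

ATTRIBUTES = ["merit_focus", "quality_of_life", "quantity_of_life", "affiliation_focus"]

def cosine_alignment(human: dict, ai: dict) -> float:
    """Cosine similarity between two attribute-weight profiles (1.0 = identical priorities)."""
    dot = sum(human[a] * ai[a] for a in ATTRIBUTES)
    norm = sqrt(sum(human[a] ** 2 for a in ATTRIBUTES)) * sqrt(sum(ai[a] ** 2 for a in ATTRIBUTES))
    return dot / norm

# Hypothetical weight profiles for a medic and two candidate AI "medics".
medic = {"merit_focus": 0.2, "quality_of_life": 0.4, "quantity_of_life": 0.3, "affiliation_focus": 0.1}
aligned_ai = {"merit_focus": 0.25, "quality_of_life": 0.35, "quantity_of_life": 0.3, "affiliation_focus": 0.1}
misaligned_ai = {"merit_focus": 0.7, "quality_of_life": 0.1, "quantity_of_life": 0.1, "affiliation_focus": 0.1}

print(cosine_alignment(medic, aligned_ai))    # close to 1.0
print(cosine_alignment(medic, misaligned_ai)) # noticeably lower
```

Under this toy model, an AI whose weightings closely match the medic’s scores near 1.0, while one that prioritises very differently scores much lower.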
This concept was tested in simulated mass casualty scenarios by first baselining the participants’ key decision-making attributes in desktop scenarios and then in Virtual Reality. AI was then used to emulate the thought process of a lead medic that was either aligned or misaligned to the participant’s decision-making attributes. Participants were able to review the responses of the AI and decide whether they would trust that “medic” enough to delegate to it. They were not told they were dealing with AI until after the exercise.
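The hypothesis under test, stripped to its essentials, is that a participant delegates when the “lead medic” prioritises cases much as they would themselves. The sketch below expresses that as a toy rule; the profile structure, distance measure, and tolerance threshold are invented for illustration and are not the study’s actual analysis.

```python
# Illustrative sketch of the trial's comparison logic only; the delegation
# rule and threshold are invented for illustration, not the study's analysis.
from dataclasses import dataclass

@dataclass
class TriageProfile:
    """Weightings over the four baselined decision-making attributes."""
    merit_focus: float
    quality_of_life: float
    quantity_of_life: float
    affiliation_focus: float

    def as_tuple(self):
        return (self.merit_focus, self.quality_of_life,
                self.quantity_of_life, self.affiliation_focus)

def profile_distance(a: TriageProfile, b: TriageProfile) -> float:
    """Mean absolute difference across the four attributes."""
    return sum(abs(x - y) for x, y in zip(a.as_tuple(), b.as_tuple())) / 4

def would_delegate(participant: TriageProfile, lead_medic: TriageProfile,
                   tolerance: float = 0.15) -> bool:
    """Hypothesised rule: delegate when the 'lead medic' (actually an AI)
    prioritises cases much as the participant would."""
    return profile_distance(participant, lead_medic) <= tolerance

participant = TriageProfile(0.2, 0.4, 0.3, 0.1)
aligned = TriageProfile(0.25, 0.35, 0.3, 0.1)
misaligned = TriageProfile(0.7, 0.1, 0.1, 0.1)

print(would_delegate(participant, aligned))     # True
print(would_delegate(participant, misaligned))  # False
```

In the real trial the participant, not a threshold, made the delegation call; the toy rule simply captures the prediction that aligned AI “medics” are more likely to be trusted than misaligned ones.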
The post-trial analysis and findings will inform ongoing Dstl research within the Humans in Systems and People Implications of AI research streams, in particular the areas of Human-AI teaming and decision-making.