British deep tech start-up Hadean has secured a contract with the Defence and Security Accelerator (DASA), an agency within the UK Ministry of Defence (MoD), to populate the British Army’s virtual training space.

The company already has a relationship with the Army, having constructed a cloud-based training platform as part of the Army’s Collective Training and Transformation programme.

Having validated a cloud-distributed platform for collective training, the Army now wants a more detailed virtual environment. So, over the next year, Hadean will develop a large language model to meet the requirements outlined in the MoD’s call for a “complex, representative human terrain.”

One that will confront troops with a “free-thinking A3E [Audiences, Actors, Adversaries and Enemies] capability [that] delivers cues, stressors and frictions across the human, physical, environmental and information domains.”

Intelligence consultancy GlobalData has found that generative artificial intelligence (AI) has grown faster than any other technology, and the company expects it to “upend and transform businesses across sectors with lasting impact.”

In an exclusive interview, Hadean’s vice president of innovation, Chris Arthurs, spoke to Army Technology to describe how the company plans to apply its own generative AI product to the military’s virtual training space.

This is an application, he noted, “that certainly hasn’t been tried with this technology before, and I would argue it’s never really been achieved before.”

Chris Arthurs, vice president of innovation at Hadean. Credit: Hadean.

Using generative AI to populate a virtual environment

“The central thesis is that historically – and this has been a problem for a while, it’s been talked about before now – virtual training environments started [off] being very empty,” Arthurs observed.

“So they’ve had a Red Force and a Blue Force, maybe some buildings, but then no people, no civilians, no sheep, no cattle, no birds, or all these sorts of things that bring an environment to life.”

With a large language model at its core, Hadean’s platform will use AI to understand the evolving context of a Live, Virtual, Constructive training exercise as it unfolds.

This allows the simulated A3E entities to respond dynamically to what the trainees actually do. Those responses in turn prompt training personnel to adjust their approach and tactics, effectively bridging the physical and virtual worlds through real-time interaction.
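Hadean has not published how this loop is wired together, but the description suggests a familiar pattern: exercise events accumulate into an evolving context, and that context is used to prompt a language model for each simulated entity’s next reaction. The sketch below is only a minimal illustration of that pattern; the call_llm helper, the event wording and the entity role are hypothetical stand-ins, not Hadean’s implementation.

```python
from dataclasses import dataclass, field


@dataclass
class ExerciseContext:
    """Rolling record of what has happened so far in the training exercise."""
    events: list[str] = field(default_factory=list)

    def add(self, event: str) -> None:
        self.events.append(event)

    def as_prompt(self, last_n: int = 20) -> str:
        # Only the most recent events are passed to the model as context.
        return "\n".join(self.events[-last_n:])


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for whatever language-model backend is actually used."""
    raise NotImplementedError("plug a real model client in here")


def react(entity_role: str, ctx: ExerciseContext) -> str:
    """Ask the model how one simulated A3E entity reacts to recent trainee actions."""
    system = (
        f"You are playing a {entity_role} in a military training simulation. "
        "React plausibly, in one or two sentences, to the most recent events."
    )
    return call_llm(system, ctx.as_prompt())


# Trainee actions feed the context, and the context drives simulated reactions:
ctx = ExerciseContext()
ctx.add("Blue Force platoon enters the market square on foot.")
ctx.add("A Blue Force vehicle blocks the junction to the north.")
# reaction = react("local shopkeeper", ctx)
```

The platform’s social media feed is the example Arthurs gave of that kind of reactive content.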

“One of the things that we’re doing around using the language models is generating a social media feed, something like an X, or Twitter, feed. That’s something that we have a capability for in the Hadean product, and that’s generated using large language models, but we want to enhance that.

“That’s one of the key things as part of the project: the models are great at generating a little snippet of text that looks like a tweet, and being able to have that respond directly to the things which are happening in the training environment as they take place.”

Hadean’s social media tool provides an up-to-date representation of ‘pattern of life’ entities beyond the immediate detail of the fighting. It also provides a useful – and, given the nature of social media, perhaps intentionally unhelpful – intelligence base that would, in reality, inform military operations in real time.

Arthurs went on to describe such a scenario: “So events that might occur, statements on the radio from the soldiers about what they’ve seen, can be fed into a language model.

“It can be told to pretend that it is a civilian observing these things – but one that can’t hear the radio messages – and to issue tweets about it. So the soldiers say on the radio, ‘I’ve just seen four red team tanks driving down that street,’ and then that gets fed in, and another source can start saying, ‘Oh no, I’ve seen those tanks going down Main Street.’”
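In prompt terms, the trick is that the radio transcript supplies the ground truth of what happened, while the persona instructions confine the model to what a bystander could plausibly have seen, so the posts echo the event without quoting the radio. A minimal sketch of such a prompt follows; the persona wording and the call_llm helper are assumptions for illustration, not Hadean’s actual prompts.

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for the language-model backend."""
    raise NotImplementedError


CIVILIAN_PERSONA = (
    "You are a civilian living in the exercise area. You cannot hear military radio "
    "traffic. The report below tells you what actually happened; write one social-media "
    "post (under 280 characters) describing only what you could plausibly have seen or "
    "heard from the street, in your own words. Never mention the radio or any call signs."
)


def civilian_post(radio_report: str) -> str:
    """Turn a trainee's radio report into a bystander-style post about the same event."""
    return call_llm(CIVILIAN_PERSONA, radio_report)[:280]


# e.g. civilian_post("Four red team tanks driving down that street, heading north.")
# might plausibly come back as: "Just watched a column of tanks roll up Main Street!"
```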

Rendition of three troops discussing logistics. Credit: Hadean.

Getting the AI right

In combination with AI-powered data exploitation capabilities, the platform will also generate an After Action Review dashboard to help commanders identify weak points and improve their future performance.

Managing the AI tool so that it reliably identifies these weak points does, however, occasionally pose challenges:

“So if you run the language model twice against the same data, it may well be that sometimes it will explain to you exactly what went wrong in the operation, and sometimes it will miss it entirely. So getting that right involves tuning the prompts to encourage [the AI tool] to look for different types of failures.”
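In practice that tuning amounts to running the same exercise data through several candidate prompts, each nudging the model towards a different class of failure, and measuring how often each one surfaces a weak point that is known to be in the log. A rough sketch of that kind of harness is below; the prompt wording, the example weak point and the call_llm helper are all illustrative assumptions.

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for the language-model backend."""
    raise NotImplementedError


# Candidate prompts, each steering the review towards a different class of failure.
PROMPT_VARIANTS = {
    "timing": "Review this exercise log and flag any delays or late reactions.",
    "communication": "Review this exercise log and flag missed or ambiguous radio calls.",
    "manoeuvre": "Review this exercise log and flag poor positioning or exposed movement.",
}


def hit_rate(exercise_log: str, known_issue: str, prompt: str, runs: int = 10) -> float:
    """How often repeated runs with one prompt mention a weak point we know is in the log."""
    hits = sum(
        known_issue.lower() in call_llm(prompt, exercise_log).lower()
        for _ in range(runs)
    )
    return hits / runs


# Compare variants on a log with a known weak point and keep the most reliable one, e.g.:
# best = max(PROMPT_VARIANTS, key=lambda k: hit_rate(log, "late casualty evacuation",
#                                                    PROMPT_VARIANTS[k]))
```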

Arthurs went on, however, to suggest a change of perspective in how we understand AI performance.

“You then have to do the evaluation phase to prove that it is statistically likely to pull out the right answers. But that testing and evaluation piece isn’t going to be a binary ‘yes, it works; no, it doesn’t work’, as it could be with a piece of hardware, for example.

“It will be more likely [that] we can configure it such that it will work 99.9% of the time, but we understand, and we all have to understand, that it will not work 0.1% of the time, and you need to work out where that boundary is going to be.
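Framed as arithmetic, the question is whether the success rate observed across repeated test runs supports a claim like ‘works 99.9% of the time’ at a stated confidence level, rather than a pass/fail verdict. The self-contained example below uses a Wilson score lower bound to make that check; the trial counts are invented for illustration, and only the 99.9%/0.1% figures come from the quote above.

```python
import math


def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower end of the Wilson score interval for a success rate (z=1.96 ~ 95% confidence)."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (centre - margin) / denom


# Hypothetical evaluation run: 4,995 correct outputs out of 5,000 test cases (99.9% observed).
lower = wilson_lower_bound(4995, 5000)
print(f"95%-confident the true success rate is at least {lower:.4f}")
# Clearing a 99.9% target needs the lower bound itself to exceed 0.999, which this
# sample does not quite manage; more trials or fewer failures would be needed.
```

Arthurs reached for driverless cars to illustrate why that way of thinking sits uneasily with human intuition.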

“Whenever an autonomous vehicle crashes, or drives into someone, or makes a mistake and runs into the back of a truck in a way that no human would ever have done – because a human would have said, ‘there’s a truck there, I’m not going to drive into the back of it’ – whether or not it does that is, to me personally, of very little interest.

“What I want to know is whether it is statistically safe enough not to crash where a human wouldn’t have. Statistically, how many crashes did it avoid that humans would have caused? But our brains aren’t quite wired as a society to follow the full statistics; they will come to individual stories.”