Perusing the latest technologies and hearing about the challenges the modern soldier faces at the Future Soldier Technology conference, held 6-8 March, one common thread wove the latest and greatest military systems together.
The soldier has come to rely on actionable insight that comes from the automated services of artificial intelligence (AI).
AI directs all manner of systems – from tracking the health of integrated devices such as the smart batteries used by soldiers on the front line, to enabling autonomous weapon systems such as loitering munitions that track, identify, and destroy targets without human operation.
In a panel discussion on The Role of AI and Robotics in Dismounted Soldier Systems at FST, there was a consensus that AI has yet to be codified by military organisations seeking to apply the innovation.
New regulatory frameworks must be conceived to meet and systematise the various uses of AI. For now, however, the breadth of AI's applications remains too indeterminate to regulate coherently.
Forces deciding to invest in AI to improve their defence capabilities should ascertain where their immediate requirements lie and whether their existing data collection solutions are adequate, according to GlobalData's report on AI in Defence (2021).
On the panel, Lieutenant Colonel William Jerrold, who works in Systems Acquisition for the Commando Force Programme, Royal Marines, made the point that AI provides “better manipulated data for things like exploiting the mass of information that’s out there, rather than being a burden on our commander”.
“So we can reorientate [raw data] so that it’s actually useful, looking for the abnormal or the absence of the normal, looking for patterns, supporting decision-making,” Jerrold stated.
While AI is used in this way against adversaries, militaries face a major problem in the sheer volume of data they accrue. The production of AI for military applications is accelerating so quickly that it has far outpaced the capacity of militaries to store and process the data collected.
This comes back to our need to codify AI data, as militaries lack a coherent system in which to categorise it. Questions are raised regarding the extent to which some data ought to be classified, accessible, or shared across domains.
The GlobalData report also notes that these growing data streams are outpacing military capacity to organise and store them, while generating so much noise, so much unusable intelligence, that analysts cannot sort the relevant information from the irrelevant.
This underlines the need to categorise AI-processed intelligence. As the system and its technologies grow, so does the need for codification.
Calls for codification
While there is a need to regulate aspects of AI, we still face the problem of how we should approach codifying it.
Most recently, on 8 March, officials from the UK Ministry of Defence (MoD) gave evidence to the UK House of Lords inquiry on lethal autonomous weapons and AI.
In their call for evidence, the House of Lords committee leading the inquiry has stated that “Generally, war is governed by procedures, rules and regulations, and the use of AWS is no exception”.
As AI becomes ubiquitous in defence research and development and the variety of potential applications increases, the challenge to codify it will only increase.
Continual investment in AI applications will only see quantity trump capability, as nations race for the perception of AI power rather than the practical organisation that allows for its effective deployment.
It is for this reason that GlobalData say that in the future, the only metric to consider should be the quality of a country’s research ecosystem and its ability to generate technological breakthroughs in the military space.