A UK House of Lords inquiry looking at autonomous weapons systems (AWS) held its first public evidence session in March, where three experts in International Humanitarian Law (IHL) answered questions from committee members on a potential definition of AWS to be used in future legislation.

It was not as straightforward as the committee members had expected. “I think we should at least tell our witnesses that we’ve got the message about having a broader look at this,” said Lord Brown at the end of two hours of questioning. The witnesses had expressed dissatisfaction with approaches to regulating autonomous weapon systems that are not broad enough to cover other areas of the targeting cycle that incorporate artificial intelligence (AI).

It is within the remit of the committee to look at AI as well as AWS. While much of the interest in the inquiry has been on the lessons to be drawn from the latter – focusing on what the International Committee of the Red Cross (ICRC) describes as “a weapon system that selects and applies force to human targets without human intervention” – the full name of the body conducting the inquiry is the House of Lords Artificial Intelligence in Weapons Systems Committee.

“I would presume from your own title, you have the mandate to think about things a little differently … to look more widely at the use of artificial intelligence across weapons systems and the targeting cycle,” said Noam Lubell, professor at the University of Essex School of Law.

Concerns about the use of lethal autonomous weapon systems have varied legal and technical bases. Where a human has decided to engage an enemy and that attack causes excessive incidental civilian harm, the attack could still be lawful if the decision maker went through a proportionality assessment in good faith using the information available at the time, even following a change of circumstances that affected the end result disastrously.

In contrast, an autonomous system guided by AI faces limits in its ability to ‘facilitate compliance’ with IHL obligations, according to Georgia Hinds, legal adviser to the ICRC, because much of the decision-making process conducted by an artificial intelligence cannot be expressed. The major legal concern Hinds describes is about “a reasoning process” that “can’t be outsourced.”

The machine learning algorithms that an AI uses to operate are a black box: unclear during the training process and opaque in implementation.

For Lubell, this legal challenge around balancing proportionality is premature: “The technology can’t do it. So clearly it would be unlawful to use it.” 

Technology exists to calculate the cost in terms of civilian harm, he says, and this is in active use already, but for the time being balancing collateral harm against direct military advantage is “something that we cannot imagine a machine doing.”

AI in precision weapons and collateral damage

What advantages can AI bring to both humanitarian and military goals? This is a key area of interest for the inquiry and finding an answer to it builds the case for a more inclusive set of policies toward AI in defence. One area where these goals align is in precision strikes, where there is scope for AI to reduce collateral damage with guided precision weapons while providing operators with distinct military advantages.

Northrop Grumman has manufactured a Precision Guidance Kit (PGK), a smart fuse with precision steering capabilities for artillery shells. Artillery projectiles are generally stabilised by their spin, keeping them on a single course. The PGK sits in the nose of a shell and – based on a GPS location input by the operator before launch – controls the direction of the shell mid-flight by influencing the way the shell spins. The accuracy of the fires this system brings down can reduce the potential for collateral damage.

As the PGK spins, it generates the power used to run the device through an alternator. With this method of generating power, there is no need for a battery, giving the PGK a shelf-life beyond the limitations of a degradable power source. During flight, when it is time to correct the shell’s course, the alternator is used as a motor to stop the fins and control the direction.
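
To make the idea concrete, the following is a minimal sketch in Python of the kind of in-flight decision a guidance kit of this type might make: compare the predicted impact point with the programmed GPS aim point and brake the fins only when the predicted miss is large enough to matter. The names, figures and control logic here are illustrative assumptions, not Northrop Grumman’s actual PGK design.

```python
# Illustrative sketch only: a toy rule for when a guidance kit might brake its
# fins to nudge a spin-stabilised shell back toward a programmed GPS aim point.
# All names, numbers and logic are assumptions made for illustration.
import math
from dataclasses import dataclass


@dataclass
class GuidanceState:
    # Predicted impact point, in metres east/north of the programmed aim point
    predicted_offset_east_m: float
    predicted_offset_north_m: float


def predicted_miss_m(state: GuidanceState) -> float:
    """Straight-line distance between the predicted impact point and the aim point."""
    return math.hypot(state.predicted_offset_east_m, state.predicted_offset_north_m)


def should_brake_fins(state: GuidanceState, deadband_m: float = 5.0) -> bool:
    """Correct course only when the predicted miss exceeds a small deadband,
    so the kit is not constantly fighting the shell's natural spin."""
    return predicted_miss_m(state) > deadband_m


if __name__ == "__main__":
    state = GuidanceState(predicted_offset_east_m=40.0, predicted_offset_north_m=-25.0)
    print(f"predicted miss: {predicted_miss_m(state):.1f} m, "
          f"brake fins: {should_brake_fins(state)}")
```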

The use of drones for combat purposes in the military is rapidly increasing. Credit: Shutterstock.

The innovation Northrop Grumman made was to find a balance point at which altering the spin applies enough force on the projectile to change its course, without adding drag features that would degrade its range or performance. The design is partly an optimisation problem: the fins need to be large enough to control the movement, but small enough to prevent excess drag.

A PGK-equipped shell impacts precisely where it is intended to land, so a specific target can be destroyed with only one or two shells. Destroying the same target without the guidance system from Northrop Grumman requires multiple smaller rounds, and their natural dispersion increases the expected collateral damage.
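
The effect of dispersion on expected harm can be illustrated with a minimal Monte Carlo sketch in Python. The dispersion and round-count figures below are invented for illustration and are not published PGK performance numbers; the point is only that many unguided rounds scattering widely put more impacts outside the target area than one or two accurately guided rounds.

```python
# Illustrative sketch only: a toy Monte Carlo comparison of impact scatter for
# unguided versus guided rounds. All figures are invented for illustration.
import math
import random


def simulate_misses(n_rounds: int, dispersion_m: float, seed: int = 0) -> list[float]:
    """Miss distances (metres) for n_rounds, assuming independent Gaussian
    scatter in range and deflection around the aim point."""
    rng = random.Random(seed)
    return [math.hypot(rng.gauss(0.0, dispersion_m), rng.gauss(0.0, dispersion_m))
            for _ in range(n_rounds)]


def rounds_outside(misses: list[float], radius_m: float) -> int:
    """Number of rounds landing outside a given radius of the aim point."""
    return sum(miss > radius_m for miss in misses)


if __name__ == "__main__":
    # Hypothetical: ten unguided rounds scattering by hundreds of metres at long
    # range, versus two guided rounds scattering by tens of metres.
    unguided = simulate_misses(n_rounds=10, dispersion_m=150.0)
    guided = simulate_misses(n_rounds=2, dispersion_m=20.0)
    print(f"unguided rounds beyond 50 m of target: {rounds_outside(unguided, 50.0)} of {len(unguided)}")
    print(f"guided rounds beyond 50 m of target:   {rounds_outside(guided, 50.0)} of {len(guided)}")
```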

As well as this humanitarian benefit, needing fewer shells can mean a reduction in cost and logistical footprint. Firing fewer shells also gives a tactical advantage on a modern battlefield, where opposing artillery sections fire back and forth. Counter-fire is very effective, and artillery equipped with PGK can fire, get first fire effects, and move before the opponent can shell that location. 

Additionally, the smart fuse has a unique safety feature included for the US Army, for use in high-density areas where the operator is particularly concerned about collateral damage. When the operator programs the GPS coordinates, the fuse can be set to disarm if it misses the target by more than 150m, so that the round will not detonate if it veers away from the intended location.
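
Expressed as a minimal Python sketch, that disarm rule amounts to a single distance check. The function name and inputs below are hypothetical and do not reflect the fuse’s actual interface; only the 150m threshold comes from the description above.

```python
# Illustrative sketch only: the 150m disarm behaviour described above, reduced
# to a distance check. Names and inputs are hypothetical, not the fuse's design.
import math

MISS_DISARM_THRESHOLD_M = 150.0  # reported disarm distance for the safety feature


def fuse_should_arm(offset_east_m: float, offset_north_m: float) -> bool:
    """Arm only if the round will land within 150m of the programmed GPS aim
    point; otherwise the fuse stays inert and the round does not detonate."""
    miss_m = math.hypot(offset_east_m, offset_north_m)
    return miss_m <= MISS_DISARM_THRESHOLD_M


if __name__ == "__main__":
    print(fuse_should_arm(60.0, 40.0))   # ~72m miss  -> True (arm)
    print(fuse_should_arm(200.0, 10.0))  # ~200m miss -> False (disarm)
```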

As an illustration of the increase in precision that comes with using a PGK, Dave Belasco, a design engineer at Northrop Grumman, described a case where an operator might want to hit the goal line of a football pitch from 20km away, a distance common to the requirements of an artillery barrage in the field. With the PGK smart fuse, the shell would hit the goal line, with a kinetic effect that destroys the whole goal area. In contrast, a conventional barrage without the PGK would destroy the target, the football pitch, and the spectator stands surrounding the field.

Lubell points out that while precision weapons can be said to be more accurate, this may lead an operator to carry out attacks in populated areas with a guided munition that might not have been conducted using regular munitions. “Now, you carry out a strike because you feel you’ve got a precision weapon, and there is some collateral damage – albeit lawful. But had you not had that weapon, you wouldn’t have carried out the strike at all.”

“You could end up targeting something that beforehand you wouldn’t have targeted because there would be too much collateral harm, but that you can target with a small amount of collateral harm.

“Therefore, for those people that were in that small amount of collateral harm: they’ve actually now suffered when beforehand they wouldn’t have been targeted at all because we didn’t have anything accurate enough.” 

Unpacking arguments: laws in need of meaning

Where autonomous weapons do not have a specific target, and instead only have a generalised target profile or category of object, references to precision and accuracy benefits are, by definition, misplaced, argues Hinds, as “the user actually isn’t choosing a specific target… I think we should be careful, confusing those kinds of precision systems [with] an autonomous weapon system, … it could incorporate a landmine which is definitely not a more precise weapon.”

“But then even more broadly, IHL assessments, fundamental assessments such as distinction, proportionality, they rely very much on value-judgment, and context. 

“So, when you recognise someone is surrendering, when you have to calculate proportionality, it’s not a numbers game. It’s about what is the military advantage anticipated. Algorithms are not good at evaluating context. They’re not good at rapidly changing circumstances, and they can be quite brittle.”

National legislators are trying to determine how and at what scale AI is able to be integrated into the uncrewed kill chain. Credit: Shutterstock

Because the military of the future is predicted to have pervasive AI in every aspect, the focus on its application in the targeting cycle goes beyond the use of AWS, argued Daragh Murray, senior lecturer at the School of Law, Queen Mary University of London.

Where AI is used in an intelligence, surveillance and reconnaissance capacity to identify and select a target, for engagement either by regular forces or by a person-in-the-loop at a later stage, the level of trust that an operator puts in the algorithmically discerned course of action is an area of concern where real answers need to be found, and a set of standards and rules developed.

“That doesn’t necessarily require new law, but it requires actually a lot more work ultimately. Just a few weeks ago, a statement by the UK, the US and others included draft articles on this subject. Now let’s say that became the law, because, if you had a new [legal] instrument, the phrasing would be similar,” said Lubell. 

“It ends up using phrases like ‘context appropriate human involvement’… I think that is the correct phrase, but what does that mean? I can lay out twenty different contexts and in each one appropriate human involvement might mean something else. If we want to try and make progress, we need to unpack these rules. The rules are very general.”

Continuing, Lubell said that once applications of the rules are tried, they break down, leading to the requirement for a different legislative understanding.

“Now, if we had a new instrument, it wouldn’t spell all of that out. It would still stay at a relatively generalised level, of what the law is in terms of whether human involvement is needed or not. But we’d still need, ultimately, the guidance on what that means.

“I think the high-level type of things that we end up with in new law, I think we can find them in existing IHL. What I would urge us to do is to spend more time on unpacking this and how these things would be used in practice in the real world.”