It’s wrong to assume that AI, in most of its forms, produces its own deductions. AI algorithms produce nothing more than what their creators intended, meaning that alongside the many successful analyses, the flaws of human ethics and judgement permeate AI just as readily. The extent of these problems has become a hotly discussed topic recently, with several prominent examples coming to light.

These issues ultimately culminate in a problem of trust in AI. If systems are only as good as their makers, how can we be sure that the results and decisions they reach are not tainted by a flaw introduced at the outset? As in every industry, these questions are of paramount importance for defence and the use of AI in training simulations.

Trust in defence training AI

Today’s highly complex, high-fidelity simulations undoubtedly provide a key advantage in training and planning exercises. However, if the AI used is unsound, several problems can follow. Personnel may train in synthetic environments where enemy units behave differently from how they would in the real world, or wargames that present reactive AI tactics to commanding officers may leave future decisions poorly informed.

These issues fuel a lack of trust in the AI used within such simulations, particularly for NATO. The alliance’s cautious approach to AI has led to frequent accusations that it is lagging behind in this area, despite its history of driving technological innovation.

These thoughts were echoed by Jens Stoltenberg, NATO’s Secretary-General, who told the Financial Times: “For decades, NATO allies have been leading when it comes to technology, but that’s not obvious anymore. We see China especially investing heavily in new, disruptive technologies like artificial intelligence, autonomous systems and big data.”

Latency risk

Slow adoption of artificial intelligence also raises the question of who will end up using it exclusively. Trusting AI in defence is difficult, but without a concerted effort to improve its use, militaries risk ceding the technology as a privilege enjoyed only by their rivals. With speed of action imperative in defence today, the advantages AI offers in this realm are far too great to ignore.

Speed has been the focus of much military modernisation, with technology both alleviating arduous tasks and providing rapid data analysis. At command level, deriving an optimal strategy more quickly offers benefits such as reducing casualties in time-sensitive scenarios.

At a lower level, increasing the speed of deployment and action among units has similarly beneficial outcomes. In several isolated instances, AI and autonomous systems have already been used to speed up command and control functions to great effect. But multi-domain operations now dominate defence training, meaning AI must be ready to integrate with this new form of warfare.

Caption: Multi-domain operations dominate defence training. Credit: Haden

Multi-domain operations have stretched far beyond the historical air-land-sea triad and now include cyber, space and psychological factors. As its complexity has grown, this has produced an irregular and unpredictable form of warfare in which the correct course of action is often difficult to determine.

Speed of analysis has therefore become essential to navigating these new waters and regaining the strategic upper hand. Data that can inform autonomous systems in an interoperable manner is key, but processing that data brings its own difficulties.

These new domains have changed the task of interpretation, presenting an entirely new challenge. Data is generated by an ever-growing number of disparate sources, and the variety of data types is increasing too, creating a computational burden that legacy IT platforms struggle to handle.
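As a toy illustration of that interpretation problem, the sketch below normalises records from two hypothetical, differently shaped feeds into one common event schema. Every field name here is invented for the example; real defence data pipelines would be far richer, but the underlying pattern of mapping disparate inputs onto a shared structure is the same.

```python
# Minimal sketch: normalising differently shaped records from disparate
# sources into one common event schema. All field names are invented.
from datetime import datetime, timezone


def from_radar(rec: dict) -> dict:
    """Hypothetical radar feed: position as (lat, lon), epoch-second time."""
    return {
        "source": "radar",
        "time": datetime.fromtimestamp(rec["epoch_s"], tz=timezone.utc),
        "lat": rec["pos"][0],
        "lon": rec["pos"][1],
    }


def from_sigint(rec: dict) -> dict:
    """Hypothetical signals feed: ISO-8601 strings, separate lat/lon keys."""
    return {
        "source": "sigint",
        "time": datetime.fromisoformat(rec["iso_time"]),
        "lat": rec["latitude"],
        "lon": rec["longitude"],
    }


if __name__ == "__main__":
    events = [
        from_radar({"epoch_s": 1_700_000_000, "pos": (51.5, -0.1)}),
        from_sigint({"iso_time": "2023-11-14T22:13:20+00:00",
                     "latitude": 48.9, "longitude": 2.3}),
    ]
    # Once normalised, events from any source can be merged and sorted.
    for event in sorted(events, key=lambda e: e["time"]):
        print(event)
```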

Simulating the complexity of warfare

Without a sufficiently rich model, there is no reliable way to surface unexpected circumstances amid the complexity of modern warfare. Advances in distributed technologies offer a solution: batches of simulations can be run in parallel to speed up analysis.

Combined with high-fidelity environments and real-time data streaming, simulations can be adapted on the fly and up-to-date insight derived. With the cloud offering a platform to bring all this information together, the problem remains of how to ensure trust in the analysis our AI performs.
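To make the parallel-batch idea concrete, here is a minimal sketch in Python. The `run_scenario` function is a hypothetical stand-in for a real simulation engine, not part of any defence toolchain; the point is only that independent runs parallelise naturally across worker processes, so a large batch of scenario variations can be analysed far faster than running them one by one.

```python
# Minimal sketch: running a batch of independent simulations in parallel.
# `run_scenario` is a hypothetical stand-in for a real simulation engine.
import random
from concurrent.futures import ProcessPoolExecutor
from statistics import mean


def run_scenario(seed: int) -> dict:
    """Run one simulated engagement and return summary metrics.

    A real engine would model units, terrain and tactics; here we
    just draw random outcomes so the example is self-contained.
    """
    rng = random.Random(seed)
    return {
        "seed": seed,
        "blue_casualties": rng.randint(0, 50),
        "objective_secured": rng.random() > 0.4,
    }


def run_batch(seeds: list[int]) -> list[dict]:
    """Spread a batch of scenarios across worker processes."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(run_scenario, seeds))


if __name__ == "__main__":
    results = run_batch(list(range(1000)))
    print("mean casualties:", mean(r["blue_casualties"] for r in results))
    print("success rate:", mean(r["objective_secured"] for r in results))
```

Because each run is independent, the same pattern scales out from local processes to cloud workers with little structural change, which is what makes the distributed approach attractive.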

Achieving this starts at the core of the creation process, where the right culture and operating structure nurture the right approach to building AI. Trustworthy AI begins with designers who have the proper ethics, targets and responsibilities in mind.

Workshops centred on these goals help eliminate bad habits, identify negative feedback loops and prompt discussion of the key data sets and ideas that build more trustworthy AI. They encourage questions not only about the AI’s purpose but also about the other potential effects that should be considered.

Defence organisations can also make use of the growing number of industry tools designed to eradicate common mistakes in AI development. On top of this, building the right frameworks for managing AI development into the organisation itself will make that development more trustworthy.

Creating roles dedicated to managing and critiquing how AI is built, continuously testing and measuring its behaviour, helps achieve this. Earning trust in the AI used in defence simulations will lead to modernised militaries able to prepare for every scenario in a complex battlespace.
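A minimal sketch of what such continuous measurement might look like follows. The metric names and thresholds are invented for illustration; in practice the values would come from replaying an AI agent through a library of vetted scenarios after each retraining, with failures routed to a human reviewer rather than printed to a console.

```python
# Minimal sketch: a regression harness that checks an AI agent's measured
# behaviour against agreed thresholds. All metric names are hypothetical.
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    value: float       # measured on the latest evaluation run
    threshold: float   # minimum acceptable value agreed with reviewers

    def passed(self) -> bool:
        return self.value >= self.threshold


def evaluate_agent(run_metrics: list[Metric]) -> list[str]:
    """Return the names of metrics that fall below their thresholds."""
    return [m.name for m in run_metrics if not m.passed()]


if __name__ == "__main__":
    # Illustrative values; a real pipeline would compute these from
    # scenario replays after each retraining of the agent.
    metrics = [
        Metric("tactical_plausibility", 0.91, 0.85),
        Metric("rules_of_engagement_compliance", 0.97, 0.99),
        Metric("behavioural_consistency", 0.88, 0.80),
    ]
    failures = evaluate_agent(metrics)
    if failures:
        print("Review required before deployment:", ", ".join(failures))
    else:
        print("All behavioural checks passed.")
```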