AI could lead to nuclear instability

JP Casey 24 April 2018 (Last Updated April 24th, 2018 17:23)

American thinktank the RAND Corporation has published a paper suggesting that artificial intelligence (AI) could encourage humans to take significant risks in managing nuclear security, and could disrupt the basis of nuclear deterrence by 2040.

The RAND Corporation has suggested that advances in sensor technologies could make pre-emptive strikes safer, and more enticing, to would-be nuclear aggressors. Credit: Wikimedia


The paper, ‘How Might Artificial Intelligence Affect the Risk of Nuclear War?’, argues that AI could erode the condition of mutually assured destruction that kept the world from descending into nuclear war during the Cold War, thereby undermining strategic stability.

“The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history,” said RAND associate policy researcher and co-author of the paper Edward Geist.

“Much of the early development of AI was done in support of military efforts or with military objectives in mind.”

Geist cited the Survivable Adaptive Planning Experiment of the 1980s, which aimed to use AI to translate reconnaissance data into nuclear targeting plans, as one example of AI being developed in a military context.

The paper also suggests that advances in sensor technologies could enable countries to destroy rival nations’ retaliatory forces, such as submarines and mobile missiles, making a pre-emptive first strike safer for the attacker and therefore potentially more likely.

“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” said RAND associate engineer and co-author of the paper Andrew Lohn.

“There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”

The paper does suggest that advances in AI could improve nuclear stability in the long term, by making intelligence collection and analysis more accurate and by reducing the likelihood that human decision-makers misinterpret the actions of rival powers.

The research is part of RAND’s Security 2040 initiative, which aims to consider the effects of political, technological, social and demographic trends on international security over the next 22 years. It was funded by gifts from RAND supporters and income from operations.