Last week, a video of a four-legged robot went viral on the Chinese social media site Weibo. Worryingly, the video shows a dog-like robot with an AI-automated machine gun mounted on its back. The footage has prompted leading robotics companies to release an open letter condemning the weaponization of robots.
So, should we be worried about robots with guns?
Four-legged dog-like robots, also known as ‘robodogs’, have been around for a while. A few years ago, Boston Dynamics’ robodog Spot went viral in a video showing its cutesy dance routine. According to the company, Spot is a general-purpose robot intended “to automate inspection tasks and data capture safely, accurately, and frequently”.
But times have changed very quickly. The robotics industry is a fast-growing tech sector that will be worth over $586 billion by 2030, according to GlobalData forecasts. Recent advancements in artificial intelligence (AI) have allowed the development of intelligent industrial robots that can adapt to surrounding environments and have the mobility to move in difficult terrain, thus creating the perfect opportunity for weaponization.
Boston Dynamics, alongside five other leading robotics companies, has released an open letter pledging that they will not allow their robots or software to be weaponized, further stating, “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues.”
The question remains: do these robotics companies have the power and influence to prevent the weaponization of robots? Unfortunately, no.
In 2021, the US Department of Defense (DoD) allocated $7.5 billion for the development and procurement of unmanned robotics systems and related technologies. This indicates that the world’s most powerful military will use upcoming technological developments to improve its military operations despite the obvious ethical implications.
On one hand, as wars become progressively higher-tech, states should be more capable of fighting precise wars that spare the lives of soldiers and civilians.
However, that is not necessarily the case. New hands-off technologies may encourage more aggressive operations, which could result in fewer short-term military casualties but higher civilian casualties and greater infrastructure damage, leading to graver socioeconomic consequences over time.
From the adoption of the machine gun to the nuclear bomb, the history of war suggests that militaries will not refrain from using any technological advancement in violent conflict. After all, even the bravest and most willing armed forces cannot compete with the latest cutting-edge technological advantages. Nevertheless, there are serious ethical implications to using technological innovations in war. For example, the use of drones has been continuously criticized by humanitarian organizations because it creates a grey area around issues of accountability. According to Amnesty International, leaked Pentagon files show that in 2013, 90% of those killed by US drone strikes in Operation Haymaker in north-east Afghanistan were unintended targets. Future robotics advancements like Boston Dynamics’ human-like robot Atlas will only aggravate these concerns around accountability.
From military to civilian use
It is undeniable that militaries will use AI-weaponized robots in war; however, the commercialization and civilian use of these robots is even more concerning. Currently, there is a worldwide absence of robust regulation around robotics systems for civilian and commercial use. This is because, until recently, mobile robots were not available to the general consumer. Today, however, robots like the one seen in the Weibo video, and the AI software used to operate them, are available for commercial purchase.
Another concern is the use of these robots by civilian police. In the past, French police used Boston Dynamics’ Spot during raids as a data-collection device. There is no question that AI and other emerging technologies can positively enhance the human day-to-day experience. However, we must remain proactive and knowledgeable about how these technologies are being applied, and we must pressure policymakers to regulate their application and use before it is too late.
What is preventing a malicious actor from attaching a gun to an AI robot? At the moment, very little. That is why regulation is needed, and needed fast. While regulation will not prevent organized crime from using these technologies, it can restrict commercial access and introduce a ‘safety by design’ approach, requiring robotics companies to build safety features into AI software for robotics systems as an integral part of development.
Ultimately, with or without regulation, we are worryingly close to a future where wars are fought by Terminators and crimes are policed by RoboCop!