If you're a fan of the "Terminator" movie franchise, you've already seen a frightening fictional glimpse of a nightmarish dystopian battlefield in which machines equipped with artificial intelligence have the unfettered power to kill humans.
While fully autonomous weapons — the technical term for killer robots — aren't quite a reality yet, the rapid advance of robotics and artificial intelligence raises the specter of armies someday soon fielding tanks and aircraft capable of attacking without a human at the controls.
According to a 2017 report by the Center for a New American Security, more than 30 countries either have or are developing armed drone aircraft. The Russian news agency TASS also reported in 2017 on that country's development of an automated "combat module" — a robotic tank — that uses artificial intelligence to identify targets and make decisions. While current U.S. policy rules out fully autonomous weapons, the Pentagon is developing air and ground robots that would operate in swarms of 250 or more, performing surveillance and other functions to support human troops. And according to the South China Morning Post, China is working to develop submarines equipped with artificial intelligence that would help commanders make tactical decisions.
The Future Is Rapidly Approaching
The rush to automate warfare is alarming scientists, and across the globe there's a growing movement to halt the development of autonomous weapons before the technology has a chance to proliferate. Close to 4,000 artificial intelligence and robotics researchers and scientists in other fields — including SpaceX and Tesla founder Elon Musk, Apple co-founder Steve Wozniak and the late astrophysicist Stephen Hawking — have signed an open letter to the world, urging a ban on "offensive autonomous weapons that are beyond meaningful human control." Organizations such as the Campaign to Stop Killer Robots have become increasingly vocal about the need for restrictions on such technology.
"We are only a few years away," Toby Walsh, the Scientia Professor of Artificial Intelligence at Australia's University of New South Wales, warns in an email. "Prototypes exist in every sphere of battle — in the air, on the ground, on the sea and under the sea."
Walsh got involved in the effort several years ago, when it became apparent to him that "an arms race to develop such weapons was starting, and we had an opportunity to prevent the dystopian future so often portrayed by Hollywood."
Walsh and other AI researchers recently used their prominence in the field to exert pressure. After KAIST (Korea Advanced Institute of Science and Technology), a South Korean research university, launched a new center devoted to the convergence of AI and national defense, they sent an open letter to KAIST president Sung-Chul Shin, threatening a boycott unless he provided assurances that the center would not develop fully autonomous weapons that lacked meaningful human control. (Shin subsequently issued a statement affirming that the university would not develop such weapons, according to Times Higher Education.)
The UN Initiative
The anti-killer robot movement is also keeping a close eye upon developments in Geneva, where representatives from various countries came together in April 2018 for a United Nations conference on what to do about autonomous weapons.
Richard Moyes, the managing director of Article 36, a United Kingdom-based arms control organization, says in an email that autonomous weapons could erode the legal framework that governs warfare, which is dependent upon humans making decisions about whether use of force is legal in a given situation. "If machines are given broad license to undertake attacks then those human legal assessments will no longer be based on a real understanding of the circumstances on the ground," writes Moyes, a 2017 recipient of the Nobel Peace Prize for his work on nuclear arms reduction. "This opens the way for a real dehumanisation of conflict."
The U.S. presumably would support a killer robot ban. In 2012, the Obama Administration issued a directive — which the Trump White House apparently has chosen to continue — requiring that autonomous weapons technology be designed "to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." The directive also requires safeguards to protect against autonomous weapons malfunctioning and launching attacks on their own. Defense Advanced Research Projects Agency (DARPA) Director Steven Walker said in March that he doubted the U.S. would ever allow machines to make decisions about using lethal force, according to The Hill.
In an email, DARPA spokesperson Jared Adams says that the agency's research instead focuses upon "investigating ways to ensure that technology improves human operators' ability to make rapid decisions at critical moments rather than to erode that ability." There's a worry that human operators' reliance upon automation could reduce their situational awareness when they need to call upon it — a problem Adams says is illustrated by the 2009 crash of Air France flight 447. "For this reason, DARPA's research related to autonomy seeks to find an optimal balance between various operating modes with an emphasis on providing maximum decision support to warfighters," he says.
No International Consensus
But outlawing killer robots internationally may prove difficult. Bonnie Docherty, senior arms researcher at Human Rights Watch and associate director of armed conflict and civilian protection at Harvard Law School's International Human Rights Clinic, says in an email that while most of the countries at the UN conference are concerned about autonomous weapons, there's not yet consensus support for a legally binding international ban.
Would a ban on killer robots work? The longstanding international treaty banning chemical arms, for example, apparently hasn't stopped the use of such weapons in the Syrian civil war.
Nevertheless, Docherty argues that bans on chemical weapons, antipersonnel mines and cluster munitions still have saved lives. "Such laws bind countries that join them, and by stigmatizing problematic weapons can influence even countries that aren't party. Any law — even against a widely accepted crime like murder — can be violated by a rogue actor, but that does not mean such laws should not be adopted. Law still has a significant impact on conduct and a new international treaty should be adopted to preempt fully autonomous weapons," she writes.