In Dallas on Friday, July 8, police were confronted by a truly nightmarish situation — a sniper with military training, who killed five officers in cold blood as they helped keep order at a protest. After the assailant Micah Johnson took refuge on the second floor of a college building, he spent two hours taunting police negotiators, saying that he intended to take more lives and had planted bombs.
"He was in such a position that they could not see him," Dallas police chief David Brown explained in a CNN interview. "He was secreted around a brick corner." The only way to get a clear shot at Johnson "would be to expose officers to grave danger."
Brown told his SWAT team to use their creativity to come up with a solution. Not long afterward, a Remotec Andros Mark VA-1 robot rolled into the area where Johnson was holed up. While such remote-controlled machines have become a go-to tool for bomb disposal, this time, the robot carried a one-pound explosive charge — which police then detonated to kill Johnson. (The robot itself survived the blast.)
While the improvised solution worked, it aroused plenty of controversy. The U.S. has employed weaponized drone aircraft to assassinate suspected terrorists overseas for years, and American soldiers used robots carrying mines to kill insurgents in Iraq.
"But this is the first time that deadly force has been used by a [police] robot," says Seth Stoughton, a police officer who is now an assistant professor at the University of South Carolina's School of Law.
Up to this point, Stoughton says, police had only used the machines to deliver nonlethal force — sending a robot up to a window or door to deliver chemical agents, for instance, in order to force suspects to come out and surrender. In 2013, police used a robot to pull away the tarp on the boat where Boston Marathon bomber Dzhokhar Tsarnaev, later convicted, was hiding.
Some might fear that deploying killer robots is the first step toward some sort of techno-totalitarian dystopia. But legal experts say that equipping robots with lethal capabilities falls within the powers police already have to use handguns, rifles or other weapons against people who threaten bodily harm to officers or civilians.
"It was innovative to be sure, but I see nothing inappropriate about it," Patrick J. Solar, a former police chief and now assistant professor of criminal justice at the University of Wisconsin-Platteville, says via email. "As my training officer once told me, when the use of deadly force is justified, it really doesn't matter if you use a two-by-four or your 3000-lb. cruiser."
Stoughton, an expert on the regulation of police, says that using robots doesn't alter the legal limits of police power to use lethal force, "but it might change the underlying facts that we apply the rule to."
He cites a hypothetical scenario in which police have surrounded a suspect and set a perimeter around the hideout, complete with fortified barriers that bullets can't penetrate. If police have to shoot at the suspect, they reasonably could assume that their lives would be in danger, and would have legal justification to kill. If they can send in a robot instead, and not risk their own lives, that raises the question of whether the risk that justifies lethal force would still exist.
If the suspect poses a threat to civilians, though, it's a different matter, Stoughton explains. Say, for example, in the hypothetical scenario, the police robot enters the building and its video camera reveals to police that the suspect is pointing a rifle out a window.
"In that case, it may be entirely reasonable to infer that a suspect is aiming at someone," says Stoughton, "even if the officers might not be aware that the suspect has a particular target in mind."
Stoughton also says there's no constitutional requirement that police warn a suspect that a robot is armed with a lethal weapon, any more than they're required to issue a warning if they happen upon an armed suspect who is in the process of harming someone.
"It's a good practice, when possible, to do so, but a warning isn't always feasible," he says. "And the situation might be such that they could infer that a warning wouldn't be effective — say, for example, if a shooter has been shooting at people for 20 or 30 minutes."
It's also important to remember that the robots currently in use by police forces are remote-controlled machines, which can kill only if a human operator gives the command. While autonomous robots and artificial intelligence might make big inroads in other parts of society, neither Stoughton nor Solar expects to see a day in which police androids are patrolling the streets and confronting lawbreakers.
"So much of routine policing is redundant procedure and I can certainly see a role for automation, perhaps even artificial intelligence, in low-risk areas such as crime reporting and data collection," says Solar. "I don't see robots making arrests."
Meanwhile, other police departments also have equipped themselves with robots capable of delivering lethal force. In Cook County, Illinois, for example, the sheriff's office has two robots capable of firing a 12-gauge shotgun round. Brian White, the department's first deputy chief of police, told the Chicago Tribune that he wouldn't hesitate to order them to fire upon suspects if the situation warranted it.