
When Robots Can Kill, It's Unclear Who Will Be To Blame

Mar 21, 2014
Originally published on March 21, 2014 12:42 pm

The fast-advancing field of robotics is opening up serious questions about the military motivations behind some of the coolest tricks our machines can now be programmed to perform.

The Defense Advanced Research Projects Agency, DARPA, helped create the Internet. But these days, DARPA is probably best known for its robotics contests. Its latest robotics challenge was inspired by the Fukushima nuclear disaster, which happened three years ago.

Back then, nuclear engineers rushed to shut down reactors at the Fukushima Daiichi nuclear power plant, but fear of radiation poisoning kept utility workers at the plant from shutting off and effectively cooling the reactors sooner. Eventually, three of the plant's six reactors melted down.

"There is good evidence that if we had been able to send in some kind of robot and had that robot do relatively simple things, simple manual tasks like opening valves, opening doors, getting to control panels, a lot of the following disaster could have been averted, " says Brian Gerkey, of the Open Source Robotics Foundation.

The goal now is to build that robot — one that can open doors, move debris, turn a valve, even drive a conventional car.

In December, 16 teams of roboticists converged in Miami to compete. While the robots moved slowly and some were tripped up by seemingly trivial obstacles, the event pushed humanoid robots to do things they have never done before.

This may seem like an entirely altruistic enterprise — designing a robot for disaster response — but the event also is pushing the robotics field toward goals military planners have long sought.

"At the end of the day people need to remember what the D in DARPA stands for. It stands for Defense," says Peter Singer. Singer is a senior fellow at the Brookings Institution and author of Wired for War: The Robotics Revolution and Conflict in the 21st Century.

"Too often scientists try and kid themselves," he says. "[They] act like just because I work on this system that is not directly a weapon system I have nothing to do with war."

Singer recalls speaking to one researcher recently who was working on a project funded by the Navy.

"He was working on a Navy contract on a robot that would play baseball. 'I don't have anything to do with war.' Come on. You think the Navy is fundng this because they want a better Naval Academy baseball team?"

The point, Singer suggests, is that tracking and intercepting a fly ball is analogous to tracking and intercepting a missile.

It's hard to find a roboticist working today in academia who hasn't taken some kind of military funding. Illah Nourbakhsh is one of the few. While Nourbakhsh acknowledges the good that could come out of DARPA's recent push to build a semi-autonomous search and rescue robot, he also sees an obvious dual use.

If researchers set out to build a robot that can drive a regular car, climb a ladder and operate a jackhammer, "That means that that robot can manipulate an AK-47. That means that robot can manipulate the controls of all the conventional military machines as well," he says.

Nourbakhsh thinks DARPA is pushing roboticists to build machines that can make complex decisions quickly and independently.

"We are making our robots ever more autonomous," he says.

This research, Nourbakhsh says, is pushing us closer to the point where robots will decide when to kill. "It's a really interesting boundary to cross," he says.

Imagine a drone in flight using image recognition to match the faces it sees against a database of faces on a kill list, he suggests. If a robot like that made a mistake, who would be responsible? The programmer? The manufacturer? The military commander who launched it on its mission?

"It forces us to confront whether we really control machines," says Ryan Calo, a law professor at the University of Washington. Calo says these tensions won't just play out in the military, but will crop up whenever we are tempted to allow robots to make decisions on their own.


LINDA WERTHEIMER, HOST:

The Pentagon's research arm has been involved in the development of new technology, from the birth of the Internet to self-driving cars, to the design and deployment of robots. But some researchers worry about the Pentagon's priorities. They're concerned it may be heading toward the creation of autonomous robots designed to kill. NPR's Steve Henn reports.

STEVE HENN, BYLINE: These days, the Defense Advanced Research Projects Agency, or DARPA, is probably best known for its robotics contests.

UNIDENTIFIED MAN: And good morning, everyone, and welcome to the 2013 DARPA Robotics Challenge...

HENN: Ten years ago, a similar contest helped spur the creation of self-driving cars. DARPA's latest robotics challenge was inspired by the Fukushima nuclear disaster. Fear of radiation poisoning kept utility workers at the Fukushima plant from shutting off and effectively cooling reactors faster. Ultimately, three of the plant's six reactors melted down.

BRIAN GERKEY: And there is good evidence that if we had been able to send in some kind of robot and had that robot do relatively simple things, simple manual tasks like opening valves, opening doors, getting to control panels, that a lot of the following disaster could have been averted.

HENN: So Brian Gerkey at the Open Source Robotics Foundation says DARPA's challenge, its latest challenge, is to build that robot, one that can open doors, move debris, turn a valve, even climb into and drive a conventional car.

UNIDENTIFIED MAN: (Unintelligible) out of the driver's seat (unintelligible) significant strength and dexterity - that really is a challenge for the robots.

HENN: This past December, 16 teams of roboticists from all over the world converged on a Miami speedway to compete. And while this may seem like an entirely altruistic enterprise, designing a robot for disaster response, it's not.

PETER SINGER: At the end of the day people need to remember what the D in DARPA stands for. It stands for defense.

HENN: Peter Singer is a senior fellow at the Brookings Institution and author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century."

SINGER: Too often scientists try and kid themselves, act like, well, just because I'm working on this system that's not directly a weapons system, I have nothing to do with war. I remember speaking with a scientist who was funded by a Navy contract and he was working on a robot that would play baseball.

And he said, I don't have anything to do with war. I was like, come on, you know, you think the Navy is funding this because they, you know, want a better Naval Academy baseball team?

HENN: Or is it that tracking and intercepting a fly ball is analogous to tracking a missile? It's actually hard to find a roboticist who hasn't taken some kind of military funding. Illah Nourbakhsh is one of the few. And while Nourbakhsh acknowledges that good could come out of DARPA's push to build a search and rescue robot, he also sees an obvious dual use.

If you set out to build a robot that can drive a regular car, climb a ladder and operate a jackhammer...

ILLAH NOURBAKHSH: That means that that robot can manipulate an AK-47. That means that that robot can manipulate the controls of all the conventional military machines that we have as well.

HENN: Nourbakhsh believes DARPA is pushing roboticists to build machines that can make complex decisions quickly and independently.

NOURBAKHSH: And it's a really interesting boundary to cross.

HENN: Imagine, he says, using...

NOURBAKHSH: Image recognition, where the drone is flying in the air looking down, recognizing people's faces, matching them against a database of known faces on a kill list and then deciding on its own, autonomously, whether it's going to shoot to kill or not.

HENN: Already nations around the world are experimenting with loitering munitions. These are robots that hover over an area until they independently recognize the target they were assigned to destroy. But if a robot like that makes a mistake, who would be responsible? The programmer? The manufacturer? The military commander who launched it on its mission?

RYAN CALO: It forces us to confront whether we really control machines.

HENN: Ryan Calo is a law professor at the University of Washington. He says these kinds of tensions aren't just going to play out in the military, but are sure to crop up whenever we're tempted to allow robots to make complex decisions on their own. Steve Henn, NPR News, Silicon Valley. Transcript provided by NPR, Copyright NPR.