A Robot That Harms: When Machines Make Life Or Death Decisions | KUOW News and Information

Aug 29, 2016
Originally published on August 29, 2016 4:36 pm

Isaac Asimov inspired roboticists with his science fiction and especially his robot laws. The first one says:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Artist and roboticist Alexander Reben has designed a robot that purposefully defies that law.

"It hurts a person and it injures them," Reben says. His robot pricks fingers, hurting "in the most minimal way possible," he says.

And the robot's actions are unpredictable — but not random. "It makes a decision in a way that [I] as the creator cannot predict," Reben says. "When you put yourself near this robot, it will decide whether or not to hurt and injure you."

Though it may seem like a slightly silly experiment, Reben is making a serious point: He's trying to provoke discussion about a future where robots have the power to make choices about human life.

Reben's robot is not very elaborate. It's just a robotic arm on a platform, smaller than a human limb and shaped a bit like the arm on one of those excavators they use in construction — but instead of a shovel, the end has a pin. (And in case you were wondering, each needle is sterilized.)

"You put your hand near the robot and it senses you," Reben explains. "Then it goes through an algorithm to decide whether or not it's going to put the needle through your finger."

I put my finger beneath the arm. The waiting is the hardest part, as it swings past me several times. Then I feel a tiny sting when it finally decides to prick me.

Reben created this robot because the world is getting closer to a time when robots will make choices about when to harm a human being. Take self-driving cars. Ford recently said it plans to mass-produce autonomous cars within five years. A self-driving vehicle, then, may soon need to decide whether to crash into a tree, risking its passenger, or hit a group of pedestrians.

"The answer might be that 'Well, these machines are going to make decisions so much better than us and it's not going to be a problem,' " Reben says. "They're going to be so much more ethical than a human could ever be."

But, he wonders, what about the people who get into those cars? "If you get into a car do you have the choice to not be ethical?"

And people want to have that choice. A recent poll by the MIT Media Lab found that half of the participants said they would be likely to buy a driverless car that put the highest protection on passenger safety. But only 19 percent said they'd buy a car programmed to save the most lives.

Asimov's fiction itself ponders many of the gray areas of his laws. There are four in total; the fourth, added later, precedes the others as the Zeroth Law:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In Asimov's stories, the laws are often challenged by the emotional complexities of human behavior. In the screenplay derived from his famous I, Robot, the protagonist is a detective who distrusts robots because one saved him from a car crash but let the girl beside him die, based on a statistical calculation that she was less likely to survive.

Still, Asimov's laws are often cited by scientists in the field as a kind of inspiration and talking point as we move toward a world of increasingly sophisticated machines.

"The ability to even program these laws into a fictional robot is very difficult," Reben says, "and what they actually mean when you really try to analyze them is quite gray. It's a quite fuzzy area."

Reben says the point of making his robot was to create urgency — to put something in the world now, before machines have those powers in self-driving cars.

"If you see a video of a robot making someone bleed," he says, "all of a sudden it taps into this viral nature of things and now you really have to confront it."

Copyright 2016 NPR. To see more, visit http://www.npr.org/.

ARI SHAPIRO, HOST:

Here's a riddle for the digital age. Why did the scientist create a robot that hurts humans? Answers on this week's All Tech Considered.

(SOUNDBITE OF MUSIC)

SHAPIRO: The science fiction author Isaac Asimov famously created the Laws of Robotics. The first one is that a robot should never harm a human being. But one artist-slash-roboticist has inverted that law to provoke discussion about a future where robots may have the power to make choices about human life. NPR's Laura Sydell reports.

LAURA SYDELL, BYLINE: MIT-trained roboticist and artist Alexander Reben admits his robot has no practical purpose. It's designed to prick human fingers.

ALEXANDER REBEN: It hurts a person obviously in the most minimal way with this needle. And it makes a decision in a way that me as the creator cannot predict. So the idea is that when you put yourself near this robot, it will decide whether or not to hurt and injure you. There's no human in the loop of this decision.

SYDELL: It's not a very elaborate robot, just a robotic arm on a platform. It's smaller than a human arm but shaped a little like the arm of one of those excavators they use for construction. Instead of a shovel on the end, there's a pin.

REBEN: And you put your hand near the robot. And it senses you. Then it goes through an algorithm to decide whether or not it's then going to put the needle through your finger.

SYDELL: I dared to put my finger beneath the arm. It swings past me several times and then - oh.

The waiting is the hardest part. And in case you were wondering, each needle is sterilized. Reben says the point of this robotic sculpture is to get people to think about a world in which programmed machines like self-driving cars make the decision to hurt a human.

For example, what would happen if a self-driving vehicle must decide whether to drive you into a tree or hit a group of pedestrians?

REBEN: The answer might even be that, well, these machines are going to make decisions so much better than us, and it's not going to be a problem. They're going to be so much more ethical than a human could ever be.

SYDELL: But what about the people who actually get into those cars?

REBEN: If you get into a car, do you have the choice to not be ethical (laughter)?

SYDELL: And people want to have that choice. A recent poll by MIT Media Lab found that half of the people in the survey would buy a driverless car that put the highest protection on passenger safety, but only 19 percent said they'd buy a car programmed to save the most lives.

The popular science fiction author Isaac Asimov inspired Reben's work. Asimov even made up laws for robots. One of his stories, "I, Robot," was made into a film starring Will Smith. Smith plays an emotional cop chasing a killer robot. Here's a scene where he's speaking with a roboticist played by Bridget Moynahan.

(SOUNDBITE OF FILM, "I, ROBOT")

BRIDGET MOYNAHAN: (As Susan Calvin) A robot cannot harm a human being - the first law of robotics.

WILL SMITH: (As Del Spooner) Yeah, I know. I've seen your commercials. But doesn't the second law state that a robot has to obey any order given by a human being? What if it was given an order to kill?

SYDELL: Or hurt someone. Asimov's laws for robots are often cited by scientists in the field as a kind of inspiration and talking point as we move towards a world of increasingly sophisticated machines. And Asimov's stories often show how no matter how hard humans try to program robots not to harm people, complicated situations arise.

REBEN: The ability to even program these fictional laws into a robot is very difficult. And what they actually mean when you really try to analyze them is quite gray. It's a quite fuzzy area.

SYDELL: For example, should a programmer design a robot that will never hurt a person even if doing that would save another life? Reben says the point of making his robot is to put something in the world now before machines have those powers in, say, self-driving cars.

REBEN: If you see a video of a robot making someone bleed, all of a sudden it taps into this viral nature of things. And now you really have to confront it. It's something different.

SYDELL: And you can go online and watch Reben's robot and ponder the power that our future overlords will have over us someday. Laura Sydell, NPR News.