
How Children's Stories Could Be the Key to Creating Ethical Robots


To learn how to operate ethically, perhaps artificial intelligence will need to listen to stories and test out scenarios just like young humans. Ni Qin/Getty Images

In a 2014 Pew Research Center report, a majority of technology experts predicted that within a decade, robotics and artificial intelligence would be all around us, in both the workplace and our personal lives. And to some of us, that probably sounds pretty cool. After all, we've already got AI software agents such as Siri and Cortana popping up on our phones to answer our questions. Wouldn't it be great to have a robot so smart that it could help us with everyday chores and run errands?

But maybe it wouldn't be so cool if that robot used artificial intelligence — essentially, the ability to perceive and process information and decide what to do — to make choices that, while seeming perfectly logical to a machine, might be bad or dangerous for humans.

Imagine, for example, that you're too ill to go and get your prescription filled at the pharmacy, so you send your personal robot to do it. But as it turns out, there's going to be a lengthy wait to get the prescription filled. So the robot, which has been trained to be as efficient as possible in performing tasks, decides that the best course of action is to climb over the counter, push the pharmacist out of the way so it can grab the drugs off the shelf, and then run out the door.

It's not that your robot is evil. It's just that the poor machine doesn't know any better, since it hasn't been trained to follow a system of ethics that would tell it that robbery isn't an acceptable way to accomplish the task to which you've assigned it.

That leads to a complicated dilemma. How do we teach our future robotic helpers to have a moral sense, and to weigh their actions in a way that's nuanced enough to prevent them from doing bad or dangerous things in our service? In a paper presented at a recent meeting of the Association for the Advancement of Artificial Intelligence, Georgia Tech School of Interactive Computing professor Mark Riedl and research scientist Brent Harrison present an ingenious solution. Just as fairy tales and children's books once helped a young you grasp the difference between right and wrong, comprehending stories can also help robots to avoid bad or dangerous behavior.

Robots wouldn't necessarily sit around in a circle on little mats and have storytime, explains Riedl via email. Instead, they would work from a selection of stories that show different ways of accomplishing the same goal: for example, eating inside a fast-food restaurant, as opposed to getting your food in the drive-through line. Then they would run those stories through a software system called Quixote, which would try to generalize from the stories and build a procedure — that is, a general understanding of how the task is accomplished. Quixote then would run hundreds, or even thousands, of simulations, in which the robot's AI would receive a virtual reward each time it carried out part of the procedure that Quixote had developed.
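
To make that reward loop concrete, here's a minimal sketch in Python. It is not Quixote itself, and every name and number in it is invented for illustration: a made-up fast-food procedure stands in for the steps generalized from stories, and a simple tabular Q-learning agent earns a virtual reward whenever its action matches the next step of that procedure.

```python
# Illustrative sketch only, not Riedl and Harrison's actual Quixote code.
# A "procedure" distilled from example stories is rewarded step by step,
# so the agent learns to follow the socially acceptable sequence.

import random

# Hypothetical procedure generalized from the fast-food stories
PROCEDURE = ["enter", "wait_in_line", "order", "pay", "eat", "leave"]
ACTIONS = PROCEDURE + ["grab_food_and_run"]  # includes an unacceptable shortcut

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 5000

# State = how many procedure steps have been completed so far
q_table = {s: {a: 0.0 for a in ACTIONS} for s in range(len(PROCEDURE) + 1)}

def step(state, action):
    """Return (reward, next_state): +1 for following the procedure, -1 otherwise."""
    if state < len(PROCEDURE) and action == PROCEDURE[state]:
        return 1.0, state + 1
    return -1.0, state  # convention violated; no progress made

for _ in range(EPISODES):
    state = 0
    while state < len(PROCEDURE):
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                        # explore
        else:
            action = max(q_table[state], key=q_table[state].get)   # exploit
        reward, next_state = step(state, action)
        best_next = max(q_table[next_state].values())
        q_table[state][action] += ALPHA * (
            reward + GAMMA * best_next - q_table[state][action]
        )
        state = next_state

# After training, the greedy policy follows the story-derived procedure
learned = [max(q_table[s], key=q_table[s].get) for s in range(len(PROCEDURE))]
print(learned)  # typically: ['enter', 'wait_in_line', 'order', 'pay', 'eat', 'leave']
```

Notice that "grab_food_and_run" is never rewarded in this toy setup, which is the whole point: the shortcut that would look efficient to an untrained machine loses out to the sequence the stories model.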

"Over time, [the robot] learns that it is rewarding to do certain things and not rewarding to do other things," says Riedl. "Because of the source of the knowledge — the stories — it learns to do things in a certain way that conform to the input stories, and by extension avoid actions that violate social conventions."

In the fast-food scenario, for example, the robot might learn that no matter how you order your burger, you have to wait in line and then pay for it before you can take a bite.
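
As a purely hypothetical illustration of that kind of ordering constraint (same made-up action names as the sketch above, nothing from the actual system), a tiny check like this captures the rule that paying has to come before eating:

```python
# Illustrative only: does a planned action sequence respect a story-derived ordering?

def respects_convention(actions, before="pay", after="eat"):
    """True if `before` appears in the sequence and precedes `after`."""
    return (before in actions and after in actions
            and actions.index(before) < actions.index(after))

print(respects_convention(["enter", "wait_in_line", "order", "pay", "eat", "leave"]))  # True
print(respects_convention(["enter", "grab_food_and_run", "eat", "leave"]))             # False
```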

Of course, when you were a kid, you didn't just absorb moral lessons during storytime; afterward, you went outside and played knights and dragons, cops and robbers or other pretend games in which you experimented with acting out those moral concepts. Running numerous simulations enables robots to do the same thing.

"The goal is not to imitate human learning — computers and humans are very different things — and a metaphor only goes so far," Riedl says. "Even though computers and humans may not learn the same way, I do seek to instill AIs with the ability to understand humans, their societies, and their values. The pragmatic endpoint of human learning and machine learning can be the same."

Riedl cautions that even with such a system, robots — like children — would need some time and experience to develop a moral sense. "Physical robots often need to practice in the real world, and if our system were used in a robot, that might also be true," he says. "In that case, it would need to be accompanied by a human operator to make sure it does not harm, upset, or inconvenience others since it has not completed its learning and can, and will, still make mistakes."