
AI researcher says amoral robots pose a danger to humanity

Sharon Gaudin | March 10, 2014
With robots becoming increasingly powerful, intelligent and autonomous, a scientist at Rensselaer Polytechnic Institute says it's time to start making sure they know the difference between good and evil.

With robots becoming increasingly powerful and autonomous, RPI Professor Selmer Bringsjord says it's important that they know good from evil. These autonomous robots were part of a recent demonstration in Fort Benning, Ga. (The U.S. Army is looking at how robots can help soldiers in the field.)

Even when those needs are anticipated, any rules about right and wrong would have to be built into the machine's operating system, making it harder for a user or hacker to override them and put the robot to ill use.
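As a software sketch of that idea, one could imagine a supervisor layer that owns the only path to the robot's actuators, so user-level code has no way around the check. This is a minimal, hypothetical illustration; the class and command names below are invented, not drawn from any real robot operating system.

    # Toy sketch: ethical rules enforced below the application layer.
    # All names are hypothetical; no real robot OS is being described.
    class MotorSupervisor:
        _FORBIDDEN = frozenset({"strike", "shred"})  # baked in at build time

        def dispatch(self, command: str) -> bool:
            # Every motor command passes through here; there is no other
            # route to the hardware, so the check cannot be skipped.
            if command in self._FORBIDDEN:
                return False  # refused, no matter who issued the command
            self._send_to_hardware(command)
            return True

        def _send_to_hardware(self, command: str) -> None:
            print(f"executing: {command}")

    supervisor = MotorSupervisor()
    assert supervisor.dispatch("wash_dishes")   # permitted
    assert not supervisor.dispatch("shred")     # blocked at the lowest layer

The design choice is that the prohibition lives with the hardware interface rather than in the application that happens to be running, which is what makes it harder to strip out.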

Mark Bunger, a research director at Lux Research, said it's not crazy to think that robots without a sense of morality could cause a lot of trouble.

"This is a very immature field," said Bunger. "The whole field of ethics spends a lot of time on the conundrums, the trade-offs. Do you save your mother or a drowning girl? There's hundreds of years of philosophy looking at these questions.... We don't even know how to do it. Is there a way to do this in the operating system? Even getting robots to understand the context they're in, not to mention making a decision about it, is very difficult. How do we give a robot an understanding about what it's doing?"

Dan Olds, an analyst with The Gabriel Consulting Group, noted that robots will be most useful to us when they can act on their own. However, the more autonomous they are, the more they need a set of rules to guide their actions.

Part of the problem is that robot capabilities are advancing without nearly as much thought being given to the principles that should guide them.

"We want robots that can act on their own," said Olds. "As robots become part of our daily lives, they will have plenty of opportunities to crush and shred us. This may sound like some far off future event, but it's not as distant as some might think.

"We can't build an infant machine and let it grow up in a human environment so it can learn like a human child would learn," said Bringsjord. "We have to figure out the ethics and then figure out how to turn ethics into logical mathematical terms."

He also noted that robots need to be able to make decisions about what they should and shouldn't do — and make those decisions quickly.

Bringsjord noted, "You don't want a robot that never washes the darned dishes because it's standing there wondering if there's an ethical decision to be made."
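One way to reconcile rigor with speed is a two-tier check: rules compiled offline answer the common cases instantly, and open-ended deliberation gets a hard time budget with a fail-safe refusal when it runs out. The sketch below is an assumption-laden toy, not Bringsjord's published design; the rule sets and the prover stub are invented.

    # Toy sketch: two-tier ethical check so routine tasks never stall.
    # The rule sets and the prover stub are hypothetical illustrations.
    import time

    PRECOMPILED_SAFE = {"wash_dishes", "fold_laundry"}
    PRECOMPILED_FORBIDDEN = {"strike_human"}

    def try_prove_permissible(action: str):
        # Stub: a real system would query a theorem prover here.
        return None

    def decide(action: str, budget_s: float = 0.05) -> bool:
        # Fast path: cases settled offline cost a single set lookup.
        if action in PRECOMPILED_SAFE:
            return True
        if action in PRECOMPILED_FORBIDDEN:
            return False
        # Novel case: deliberate, but only within the time budget.
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            verdict = try_prove_permissible(action)
            if verdict is not None:
                return verdict
        return False  # fail safe: refuse when no verdict arrives in time

    print(decide("wash_dishes"))  # True, decided instantly

The dishes get washed because the common case never enters the deliberation loop; only genuinely novel actions pay for reasoning, and even then the robot stops wondering when the clock runs out.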

 
