
Are we safe from self-aware robots?

Evan Schuman | Aug. 14, 2015
A breakthrough in A.I. has been reported that suddenly makes all of those apocalyptic predictions about killer robots seem less crazy.

[Image: Ex Machina. Credit: DNA Films © 2015]

End-of-mankind predictions about artificial intelligence, which have issued from some of today's most impressive human intellects, including Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak and other notables, have generally sounded overly alarmist to me, exhibiting a bit more fear of the unknown than I would have expected from such eminences, especially the scientists. But that was before I saw reports on the self-aware robot.

The reports, such as this one from New Scientist, tell of a breakthrough in artificial intelligence. A robot was able to figure out a complex puzzle that required it to recognize its own voice and to extrapolate the implications of that realization. (Shorthand version: Three robots were told that two of them had been silenced and they needed to determine which one had not been. All three robots tried saying "I don't know," but only one could vocalize. Once that robot heard the sound of its own voice saying, "I don't know," it changed its answer and said that it was the one robot that had not been silenced.)
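The logic of that puzzle can be sketched in a few lines of code. This is a hypothetical illustration of the reasoning involved, not the software the researchers used; the class and function names, and the exact phrasing of the robot's revised answer, are invented for the example.

```python
# Hypothetical sketch of the three-robot puzzle's logic.
# Two robots have been "silenced"; each attempts to answer aloud,
# and only the robot that hears its own voice can revise its belief.

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced

    def try_to_speak(self, phrase):
        # A silenced robot produces no sound; an unsilenced one
        # hears itself say the phrase.
        return None if self.silenced else phrase

def run_test(robots):
    """Each robot tries to say 'I don't know'. The one that hears
    its own voice infers it was not silenced and changes its answer."""
    for robot in robots:
        heard = robot.try_to_speak("I don't know")
        if heard is not None:
            # Hearing its own voice, the robot updates its belief
            # and identifies itself as the unsilenced one.
            return f"{robot.name}: I was not silenced."
    return None

robots = [Robot("A", silenced=True),
          Robot("B", silenced=False),
          Robot("C", silenced=True)]
print(run_test(robots))  # the unsilenced robot identifies itself
```

The interesting part, as the article goes on to note, is not this simple loop itself but that the real robots arrived at the inference through self-learning rather than being explicitly programmed with it.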

What's noteworthy is that this same test had been given to these same robots many times before, and this was the first time that one of these self-learning robots figured it out. And it's that figuring-it-out part -- more than the self-awareness itself -- that is troubling.

The classic argument against the robot takeover of the world is that while computers can go haywire -- think the Windows operating system on almost any given day -- so can humans. That's undeniable, but society has established some extensive checks and balances that limit how much damage any one person can do. The military has a chain of command, and killers on a shooting spree are eventually stopped, either by the police or by bystanders. Consider 9/11. Although terrorists flying planes into buildings was unexpected, as soon as the nature of the attack became apparent, all U.S. aircraft were grounded.

But our reliance on computers to assist us and even take control just keeps increasing, and today machine intelligence is integral to military weapons systems, nuclear power plants, traffic signals, wireless-equipped cars, aircraft and more. One of our greatest fears now is that terrorists will gain control over any such key computer systems. But an even greater threat might be that the machines themselves gain the upper hand through artificial intelligence and wrest control from us.

It's become something of a classic science-fiction storyline: The systems calculate that they need to take a different path than we humans have envisioned. Consider this passage from that New Scientist story: "The test also shines light on what it means for humans to be conscious. What robots can never have, which humans have, is 'phenomenological consciousness: the first-hand experience of conscious thought,' as Justin Hart of the University of British Columbia in Vancouver, Canada, puts it. It represents the subtle difference between actually experiencing a sunrise and merely having visual cortex neurons firing in a way that represents a sunrise. Without it, robots are mere 'philosophical zombies,' capable of emulating consciousness but never truly possessing it."
