Robots have more potential to “do wrong” than most people realize.
Scientists like Stephen Hawking have been warning robot makers lately. Hawking (and others) don’t think roboticists fully realize that their robots could become more self-aware, becoming unexpectedly conscious and unpredictable.
The concern is that they could turn against us, perhaps using the communications networks and the power grids to attack humanity.
Maybe robots will decide we’re in the way or – worse – that we humans are trying to enslave the robot race. It’s not hard to see how a robot might react to that one.
Experts acknowledge that this is theoretically possible, but they say we have time. Most of them don’t think we’re anywhere close to self-aware “bots”; some don’t think it’s even possible.
But MisterScienceAintSoBad wonders if self-awareness is the wrong thing to be worrying about.
Who says robots have to be self-aware to be nasty?
What do we know about the inner lives of tarantulas? Or snakes? Is there a “me” in a snake? Does a snake know itself when it looks in a mirror? In fact, why should recognizing yourself (self-awareness) matter? Aren’t the most dangerous humans the ones that are the least self-aware? Does a snake have to know about itself to be dangerous?
Robots are way past the point where everything has to be hard-coded. Robot designers, like designers of other advanced software-based systems, are always going “Damn! I didn’t know it could do that!”
Google Now isn’t even close to conscious.
Both Google Now and Siri suck at facts like hungry babies. They gorge on facts. They get smarter every day.
So maybe we should be worrying about something else besides whether robots can see themselves in a mirror. Maybe that’s missing the point. Maybe we should be worrying about autonomous robots – the kind that don’t need humans.
Autonomous robots certainly aren’t science fiction. Every day, more robots “cut the umbilical” or, as they like to say when there’s nobody around but other robots, “cut the imbecile”.
Just kidding about the imbecile thing (I think).
We have drones and Mars rovers that work independently – just occasionally checking in to make sure the boss is around. If a Roomba rug cleaner bumps into a chair, it decides on its own which way to go. It doesn’t look at you for guidance. Will some future Roomba – one that’s just an ordinary robot without any self-awareness features – decide it’s more logical to push the mess makers out the back door than to perpetually clean up after them?
Are Roomba’s designers sure?
What do you think?
– – – – –
The drawing is mine.