Posts Tagged superintelligence

A Machine Smarter Than Me – Computers That Ignore

Posted on Friday, 6 March, 2015
Robot's Revenge



Sooner or later, robots will win. They will get all the jobs.

What will we do for money? Will we get weekly checks? From whom?

I wouldn’t worry. Our ever so smart political leaders are probably working out the details.

Aren’t they?

We’ll get to all that in another post.

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

Today’s article is about the possibility that robots, even the cute ones with big eyes, could muscle us out entirely. Take away our jobs? Sure. But, even worse, they could take it all.

Robots one. Humans zero.

Stephen Hawking says we could screw the pooch because we didn’t think things through when we had the chance. He says robots could pass us right by in the brains department. Once they’re smarter than us, the ungrateful little clankers won’t mind chucking us into the excess baggage bin.


Bill Gates agrees with Hawking. Elon Musk agrees with both of them. Musk says artificial intelligence is “summoning the demon”. It’s potentially worse than nuclear weapons. Others who, supposedly, know what they’re talking about – experts in artificial intelligence and such – agree too.


MISTER ScienceAintSoBad thought he’d better look into this. So he read up on it – especially stuff by Nick Bostrom (Founding Director of the Future of Humanity Institute at the Oxford Martin School). Bostrom, well respected and influential in neuroscience, technology, physics, and philosophy, has written a book: Superintelligence: Paths, Dangers, Strategies. Bostrom’s book is serious and thoughtful.

Here’s the thing.

Bostrom says we’re jaded. There’s been so much crazy talk about computers taking over that we have tuned out.

I’m not sure. He could be right. You don’t need proof that computers are getting smart, do you? Phones, robots, navigation systems, refrigerators, thermostats. It’s weird how they know what you’re thinking before you do.

They’re just contraptions. They don’t really think. That’s for sure.

Pretty sure, anyway.

Could they develop a “sense of self” and become conscious as these experts warn?

There’s room to worry because the way these things get programmed is changing. The traditional techniques have taken us a long way, but AI (artificial intelligence) researchers have caught on to the idea that you can’t program a machine to be self-aware. They’ve tried it and it hasn’t worked. If there’s any hope of truly cognitive machines, computers have to program themselves to get smarter.

That’s the corner that got turned.

That’s what scares the crap out of Hawking and Gates and Musk.

After figuring out that we don’t know what kind of instructions would get computers/robots over the consciousness hump, researchers are trying out new approaches – things that might lead to machines that learn on their own. These systems include genetic algorithms, neural nets, support vector machines, decision trees, and naive Bayes.
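To make one of those techniques less abstract: here’s a toy genetic algorithm in Python. This is just my own minimal sketch of the idea – nothing from Bostrom’s book – where a program improves candidate solutions on its own by keeping the fittest, splicing them together, and mutating the results. The “target” here is a trivial made-up goal (a string of all 1s), chosen only so the example is self-contained.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

TARGET = [1] * 20  # made-up goal: evolve a genome of all 1s

def fitness(genome):
    """Score a genome by how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover: splice two parent genomes together."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=100):
    """Keep the fitter half each generation; breed the rest from it."""
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # perfect match found
        parents = population[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")
```

Nobody wrote code telling the program which bits to flip; it discovered a good genome by selection pressure alone. That, scaled up enormously, is the flavor of “computers programming themselves” that the experts are talking about.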

Bostrom says we probably won’t know there’s been a breakthrough until it’s too late. Once computers get close to human intelligence, they aren’t likely to stay at that level very long. They will quickly pass us. The danger is that they might not turn out to be sentimental types. If they don’t see a benefit in serving the human race, they may change course and become a nuisance. Or even worse.

With computers and robots controlling so much of what we depend on, those mischievous little devils could be a very big problem. We need to figure out exactly what we need to include in those computers so that we are reasonably protected against an emerging consciousness. We need to understand our responsibilities as owners of sentient things, as well as how we can ensure that those sentient things are happy to work in our (and their) mutual interest.

This is a major undertaking as it requires worldwide cooperation – something that we aren’t very good at.

MISTER ScienceAintSoBad suggests that we get on it.


– – – – –

The drawing is mine.