Reading once more a discussion of Less Wrong, the singularitarian movement, it occurs to me that the principal mistake of the singularitarians is not actually their belief in accelerating and unbounded progress. Yes, they are wrong about that too, but their core mistake is this:
They believe that intelligence is a kind of magic pixie dust that enables the being exhibiting that intelligence to achieve, well, pretty much everything it can imagine.
That is really at the core of their fear of hostile artificial super-intelligence, and of their hope for the fruits of building a friendly artificial super-intelligence. They believe that if somebody builds a sufficiently clever supercomputer then this supercomputer can achieve anything. Immortality. Space flight. Free energy. Exterminating all of humanity. Feeding all of humanity. And a pony.
A simple thought experiment should set this straight. Imagine a small island in the middle of the ocean. It is just a bare rock, without any plants, animals, iron ore, coal or any other resources beyond the rock itself. Now plop down on this island the superest super-intelligence you can just about imagine, and suppose that it wants to leave the island.
Will it succeed? Well, no. How could it? There are no resources whatsoever, and rocks don't swim.
The same principle applies if we swap the island for our planet. It may well be that, no matter how super-intelligent a friendly intelligence is, there simply are not enough resources on this planet for it to work out a way of providing eight billion humans with a lifestyle that is both comfortable and sustainable.
It may well be that it is quite simply physically impossible for a fragile biological organism to fly to a distant star and survive the journey, full stop, and that even a super-intelligence could only concede that fact.
It may well be that immortality, even in the form of "brain-uploading", is an unachievable dream, and that even a super-intelligence could only concede that fact.
It may well be that fusion power cannot be produced economically outside of a star, and that even a super-intelligence could only concede that fact.
And it is actually pretty likely that an evil artificial super-intelligence could be stopped in its tracks by taking an axe to its power supply, just as the most intelligent human can be knocked out or shot by one of the stupidest.
Because intelligence is not magic pixie dust. It is perhaps best defined as the capability to solve problems efficiently, but it cannot solve unsolvable problems, and it does not somehow make the laws of physics go away.