I can kind of understand why Singularitarians believe what they believe, in the same sense that I can understand why people right after the discovery of radioactivity believed that in the near future everything would be nuclear, down to the heating in the living room. When something is all the rage, as computers are in the current age, critical thinking can be overwhelmed by unwarranted enthusiasm.
After all, every saturation curve starts out like an exponential curve, and thus the rapid early advances are often happily extrapolated into the future. But I would have thought that Harris, who is, after all, a scientist and otherwise a sceptic, would have been more critical of the idea of an intelligence explosion.
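To make that concrete, here is a minimal numerical sketch (the constants are arbitrary) comparing a logistic curve, as a stand-in for a generic saturation curve, with the pure exponential that matches it early on:

```python
import math

# Logistic (saturation) curve with capacity K, growth rate r, and a
# small starting value x0, versus a pure exponential with the same
# starting value and rate.
K, r, x0 = 1000.0, 0.5, 1.0

def logistic(t):
    A = (K - x0) / x0  # standard logistic solution: K / (1 + A * exp(-r*t))
    return K / (1.0 + A * math.exp(-r * t))

def exponential(t):
    return x0 * math.exp(r * t)

for t in range(0, 21, 4):
    print(f"t={t:2d}  logistic={logistic(t):8.1f}  exponential={exponential(t):10.1f}")

# Up to about t=8 the two are nearly indistinguishable; by t=20 the
# exponential has overshot the saturation level K more than twentyfold.
```

As long as you only have data from the early phase, the two curves are practically impossible to tell apart.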
Just to pick out the most curious parts:
"any future artificial general intelligence (AGI) will exceed human performance on every task for which it is considered a source of “intelligence” in the first place. Whether such a machine would necessarily be conscious is an open question. But conscious or not, an AGI might very well develop goals incompatible with our own."

Yes on the first part, because, as Harris points out, our computers are already superhuman in the areas they were specifically built for, e.g. speed of computation. I also appreciate that he scare-quotes "intelligence", drawing attention to the fact that it is hardly ever defined in a way that is distinct from a faster CPU with more memory.
But where does this "might very well develop goals" come from? This is one of the core assumptions of those Singularitarians who are afraid of evil AI. It seems to work like this: intelligence + magic happens = AI has goals that we didn't put in there. One can only assume that some reasoning by analogy is happening here: we humans are intelligent and have goals that often conflict with the goals of other humans. But it is no mystery where our goals come from; they are the product of our evolutionary history.
Similarly, a super-AI would have two possible sources for its goals: either the programmer put them in, or they evolved. But even if the AI were evolved inside a computer, which seems like an oddly inefficient way of going about things anyway, its goals would evolve towards survival within the terms of the simulation it evolved in. Exterminate All Humans is about as likely an end product there as Consume Asteroids was in our own case.
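To illustrate the point, here is a toy selection loop in the style of a simple evolutionary algorithm; the fitness target is arbitrary and merely stands in for whatever "survival" means inside the simulation:

```python
import random

# Toy evolutionary loop: an "agent" is a single number, and fitness
# is closeness to a target that the simulation happens to reward.
TARGET = 42.0

def fitness(agent):
    return -abs(agent - TARGET)

population = [random.uniform(-100.0, 100.0) for _ in range(50)]
for generation in range(200):
    # Keep the fitter half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [a + random.gauss(0.0, 1.0) for a in survivors]

print(max(population, key=fitness))  # converges towards 42.0
```

The evolved "goal" is exactly what the fitness function rewarded and nothing more; no drive emerges that was never selected for.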
"One way of glimpsing the coming risk is to imagine what might happen if we accomplished our aims and built a superhuman AGI that behaved exactly as intended. Such a machine would quickly free us from drudgery and even from the inconvenience of doing most intellectual work. What would follow under our current political order? There is no law of economics that guarantees that human beings will find jobs in the presence of every possible technological advance. Once we built the perfect labor-saving device, the cost of manufacturing new devices would approach the cost of raw materials. Absent a willingness to immediately put this new capital at the service of all humanity, a few of us would enjoy unimaginable wealth, and the rest would be free to starve."

This is really good, because it describes a rather more realistic risk than Terminator 3: Rise of the Machines. However, a few other considerations have to be weighed against it. First, Harris may overestimate the durability of such an order: if millions were impoverished next to obscene riches, they would simply trash the place, and thus such an economy would not be stable. Second, there would be no profit to be made for the owners of these labour-saving devices if nobody earned a salary from which to buy their products.
Which brings us to the final point: even today no more than a few percent of humanity are needed to produce all the real necessities (basically food, clothes and housing) for everybody else. But that has just freed many of us to produce unnecessary crap, to pursue science, to provide additional services from holidays to medicine, to care for each other, and so on. Ultimately this trend is most likely to continue.
Also, all this is assuming that we won't run into an energy crisis once fossil fuels are gone.
"And what would the Russians or the Chinese do if they learned that some company in Silicon Valley was about to develop a superintelligent AGI? This machine would, by definition, be capable of waging war—terrestrial and cyber—with unprecedented power."

This, however, is where things get really weird. To a certain extent I can get the cyber aspect of the supposed war, although cutting the cables would help. But terrestrial war with unprecedented power? Does playing a super-AI card suddenly make an additional twenty thousand tanks appear? This is not a board game we are talking about! Again the formula appears to be: intelligence + magic happens = strong military.
"Imagine, for instance, that we build a computer that is no more intelligent than the average team of researchers at Stanford or MIT—but, because it functions on a digital timescale, it runs a million times faster than the minds that built it. Set it humming for a week, and it would perform 20,000 years of human-level intellectual work."

In this case, fast thinking + magic happens = scientific progress. Again, Harris is a scientist, so he must know what would actually happen in such a situation: the computer would develop hypotheses, and then, this being science, it would have to test them, so it would turn to its builders to ask for the resources to do so: access to equipment, money and labour. And those experiments would have to run in real time (although I will grant that the computer could do the data analysis and paper writing very quickly).
That is how science works, and that is how it has to work, because it deals with empirical matters. I strongly suspect that Harris' own Ph.D. project did not merely involve him sitting in a room for a few years thinking abstract thoughts either, which is the equivalent of what the computer in his argument would be doing.
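For what it is worth, the headline number in Harris' thought experiment is simple arithmetic and does check out; the problem is everything the thought experiment leaves out:

```python
# Back-of-the-envelope check of Harris' "20,000 years" figure:
# one week of machine time at a millionfold speedup.
days = 7
speedup = 1_000_000
years = days * speedup / 365.25
print(f"{years:,.0f} years")  # ~19,165 years, i.e. roughly 20,000
```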
Finally, the part that I find most exasperating:
"We confront problems—Alzheimer’s disease, climate change, economic instability—for which superhuman intelligence could offer a solution. In fact, the only thing nearly as scary as building an AGI is the prospect of not building one."

Here the idea is that a friendly super-AI, if we could only build it, will swoop in and solve all our problems. Although tempered with a "could" in the first sentence, the second makes building the super-AI essentially a moral imperative, because it may be the only chance we have.
Now, the first thing to note is that, as an outspoken atheist, I would have expected Harris to recognise this kind of wishful thinking when he sees it. It is completely equivalent to a believer hoping that their god will solve their self-made problems for them.
But second, and perhaps more importantly, the most dangerous crises we face are actually not hard to solve. We do not need super-intelligence to figure out solutions, because the solutions are obvious and have been for decades. Overpopulation: have fewer children. Pollution: stop consuming so much, and have fewer children. Climate change: phase out fossil fuels, and have fewer children. Economic instability: reintroduce the policies and regulatory regimes that provided stability from the 1950s to the 1970s. Of course the wishful thinking demands that somebody magically come up with a solution that does not require us to sacrifice anything, but that does not appear very realistic. A super-AI would most likely think for a bit and then say, "have fewer children", etc.
We don't need more intelligence, we merely need collective and political will.
(By the way, since when did Alzheimer's, as terrible a disease as it is, count as a global crisis to be mentioned in the same breath as climate change?)
For a more critical contribution, check out the one by Daniel Dennett. He points out that what we really want and need in computers are highly specialised and thus efficient idiots savants, not general intelligences with all the downsides that we humans can easily supply ourselves.