Friday, May 5, 2017

A good read on superhuman artificial intelligence

This essay by Kevin Kelly must be the most sensible text on superhuman artificial intelligence (AI) and the allegedly imminent "singularity" that I have ever read.

Although it appears to get a bit defensive towards the end, I am in complete agreement with all of its main points. In my own words, and in no particular order, I would like to stress:

There is no evidence that AI research is even starting to show the kind of exponential progress that would be required for an "intelligence explosion".

There is no evidence that intelligence can be increased indefinitely; in fact, there are good reasons to assume that there are limits to such complexity. What is more, there will be trade-offs: to be superb in one area, an AI will have to be worse at something else, just as the fastest animal cannot at the same time be the most heavily armoured. Finally, we would not want a general-purpose AI that could be called "superhuman" anyway, even if it were physically possible. We want the cripplingly over-specialised ones, and that is what we are already building today.

Minds are most likely substrate-dependent. I do not necessarily agree with those who argue that consciousness is possible only in an animal wetware brain (not least because I am not sure that the concept of consciousness is well defined), but it seems reasonable to assume that an electronic computer would by necessity think differently from a human.

As for mind-uploading or high-speed brain simulation, Kelly points out something that had not occurred to me before, even when participating in relevant discussions. Simulations are caught in a trade-off: they can be fast because they leave out lots of detail, or they can be closer to reality but slower, because more factors have to be simulated. The point is that the only way to get a simulation of, say, a brain to be truly 1:1 correct is to simulate every little detail; but then, and this is the irony, the simulation must be slower and less efficient than the real thing.
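The trade-off is easy to demonstrate with a toy model (my own sketch, not anything from the essay): the cost of each simulated step grows with the number of components you refuse to abstract away, so a simulator built from the same kind of components as the system it models has no headroom left to run faster than real time.

```python
import time

def simulate(n_components: int, n_steps: int) -> float:
    """Toy 'simulation' whose per-step cost grows with the number of
    components modelled: more fidelity means proportionally less speed."""
    state = [0.0] * n_components
    start = time.perf_counter()
    for _ in range(n_steps):
        # every modelled component must be updated on every step
        for i in range(n_components):
            state[i] += 0.001 * (i % 7)  # stand-in for real physics
    return time.perf_counter() - start

# A crude model vs. ever more detailed models of the "same" system:
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} components: {simulate(n, 10):.3f} s for 10 steps")
```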

Now one of the first commenters under the piece asked how that can be true when emulators can simulate, 1:1, the hardware of computers from the 1980s, and obviously run the same programs much faster in that little sandbox. I think the error here is to think of the mind as a piece of software that can be copied, when really the mind is the process of the brain operating. An emulator also works at the level of the instruction set, not the physics of the original machine, and a 1980s computer is many orders of magnitude simpler than the host it runs on. Simulating all the molecules of a brain with 1:1 precision, and faster, on a system that consists of equivalent molecules following the same physical laws seems logically impossible.
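To put rough numbers on the emulator comparison (all figures below are my own back-of-envelope assumptions, not the commenter's or Kelly's): emulation only looks like a free speed-up when the guest machine is vastly simpler than the host. Once the thing being simulated is as complex as the simulator itself, the same arithmetic goes the other way.

```python
# Back-of-envelope arithmetic; all figures are rough assumptions.
# An emulator outruns a 1980s machine because of a huge complexity gap
# between guest and host, not because simulation is free.

host_ops_per_sec = 3e9   # modern CPU at ~3 GHz, roughly one op per cycle
overhead = 50            # assumed host ops needed per emulated event

c64_events_per_sec = 1e6     # 1980s home computer, ~1 MHz
speedup = host_ops_per_sec / (overhead * c64_events_per_sec)
print(f"emulated 1980s machine: {speedup:.0f}x real time")   # ~60x

brain_events_per_sec = 1e25  # wild guess at molecular events in a brain
speedup = host_ops_per_sec / (overhead * brain_events_per_sec)
print(f"molecular brain model:  {speedup:.0e}x real time")   # ~6e-18x
```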

Finally, one point that Kelly did not make concerns the idea that a superhuman AI could solve all our problems. He did discuss how more than just fast or clever thinking is needed to make progress; experiments, for example, cannot be sped up very much. What I would like to add is that, of our seemingly intractable problems, the really important and global ones are political in nature. We already know the solutions; it is just that most people don't like them, so they don't get implemented. A superhuman AI would merely restate the blatantly obvious solutions that human scientists came up with in the 1980s or so, e.g. "reduce your resource consumption to sustainable levels" or perhaps "get the world population below three billion people and keep it there". And then what?
