Monday, April 19, 2010

On the Coming Singularity

Much has been said in the last few years about an approaching technological Singularity, beyond which humans or their descendants will be so far removed from anything we understand today that comparison would be meaningless. I do not believe that the Singularity is imminent.

What do people mean when they speak of the Singularity? There are a variety of such notions, but most versions focus on self-improving artificial intelligences. The central idea is that humans will not only construct functioning artificial intelligences, but that such AIs will be smarter than humans. Given such entities, technological progress will accelerate as the AIs make discoveries and inventions that humans would not. This effect will be self-reinforcing, as each successive improvement makes the AIs smarter. There are variations on this idea: other Singularity proponents, generally described as Transhumanists, emphasize genetic engineering of humans or direct interfaces between the human brain and computers. I am skeptical of a Singularity occurring in the near future.

Certainly Singularitarianism is seductive. Variations of it make for great science fiction (Charlie Stross' Eschaton is an excellent example), and some versions of the Singularity, especially those that involve humans being uploaded into immortal computers or the like, are appealing. Singularitarianism may sometimes border on a religion, but it has the virtue of a minimally plausible eschatology, one that doesn't require the intervention of tribal deities, just optimistic estimates of technological and scientific progress. And to be sure, there are some very smart people, such as Eliezer Yudkowsky, who take the Singularity very seriously.

The most common criticism of Singularitarianism is that we will not develop effective AIs. This argument is unpersuasive. There's no intrinsic physical law against developing AIs; we are making slow but steady progress; and we know that intelligences are already possible under the laws of the universe. We're an example.

While I reject most of the common criticisms of a coming Singularity, I am nevertheless skeptical of the idea for two reasons. First, while human understanding of science and technology has been improving over the last few hundred years, the resources it takes today to produce the same increase in understanding have grown dramatically. For example, in the mid 19th century a few scientists could work out major theories about nature, such as the basics of evolution and electromagnetism. Now, however, most major scientific fields have thousands of people working in them, and yet the progress is slow and incremental. There seems to be a meta-pattern: as we learn more, we require ever more resources to make comparable levels of progress. Thus, even if we develop smart AIs, they may not lead to sudden technological progress.

Second, we may simply be close to optimizing our understanding of the laws of physics for technological purposes. Many of the technologies we hope to develop may be intrinsically impractical or outright impossible. There may be no room-temperature superconductors. There may be no way to make a practical fusion reactor. As Matt Springer suggested (here and here), we might activate our supersmart AI only for it to say, "You guys seem to have thought things through pretty well. I don't have much to add." This points to a common problem with Singularity proponents: the argument that essentially all challenges can be solved by sufficient intelligence. I've personally seen this argument made multiple times by Singularitarians discussing faster-than-light travel. But if something isn't allowed by the laws of physics, then there's nothing we can do. If in a chess game white can force a checkmate in three moves, it doesn't matter how smart black is; black will still lose. No matter how smart we are, if the laws of physics don't allow something, then we won't be able to do that thing, any more than black will be able to prevent a checkmate by white.

There's a third problem with Singularitarianism beyond issues of plausibility: It doesn't tell us what to do today. Even if no one had ever come up with the Singularity, we'd still be investigating AI, brain-computer interfaces, and genetic engineering. They are all interesting technologies that potentially have major applications and may help us answer fundamental questions about human nature. So in that regard, the Singularity as a concept is unhelpful: It might happen. It might not happen. But it tells us very little about what we should do now.

20 comments:

summortus said...
This comment has been removed by the author.
summortus said...

I am very interested in the final argument of the critique, insofar as it is interesting to compare the belief in the singularity with William James's "meliorism" (a very limited pragmatic religious faith--believing the universe has a purpose and has some sort of ill-defined order to it, because the difference that such a belief makes is (James thinks) reducible to the motivation to act in ways that are good).

While I'd imagine you are not inclined to be sympathetic to pragmatist defenses of religious faith--which I've summarized, I must stress, rather weakly--I do wonder whether you think such defenses are adoptable for the singularity (i.e., better to believe that technological progress ends, Asimov style, in the reversal of entropy (let there be light) than in, say, nuclear annihilation).

Anonymous said...

http://www.lef.org/magazine/mag2005/sep2005_report_kurzweil_01.htm

It tells some to take lots of vitamins.

Khagan Din said...

It's an interesting post. I agree with Josh that intelligence may not be of unlimited use if the laws of physics have firm limits, but I would like to see a more in-depth defense of the claim that the laws of physics have finite complexity.

Chess is a complicated game, but it is also a game with perfect information, no possibility of cooperation, and countably few tactical options. It is not clear that any of these limitations apply to the 'game' of outwitting the laws of physics.

See also http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=1%23696 for sociological reasons why the Singularity might not happen, even if the laws of physics allow for it.

Joshua said...

Raphael,

My attitude towards pragmatic belief in religion varies daily, although you are correct that it is generally negative.

To address that issue briefly, I'm not in general convinced by claims that religion is pragmatically useful. For example, there's evidence that even just thinking about religion increases outgroup v. ingroup awareness. See "Priming Christian Religious Concepts Increases Racial Prejudice" http://spp.sagepub.com/cgi/content/abstract/1/2/119

Generally, I'm not convinced that in the long run any deliberately counterfactual belief can be pragmatically useful. Eventually, reality is going to bite you. Moreover, I personally would rather have truth in hell than lies in heaven. We should value truth.

Moreover, trying to believe in a counterfactual statement preps our minds for bad thinking. For example, there's a strong correlation between belief in 9/11 conspiracies and belief that the moon landings were a hoax. Prepping one's mind to evaluate evidence poorly cannot be easily compartmentalized.

Note also that there's something patronizing and paternalistic about promoting beliefs for a pragmatic reason. Unless one can brainwash oneself knowing that one is brainwashing oneself (some people seem to be able to do so, but I don't fully understand how this works; they may just be convinced that they have brainwashed themselves, which is not the same thing), this means that one will essentially be part of an elite which promotes belief among the masses.

How does all this apply to Singularitarianism? Well, one must ask: is it any more acceptable to assign a higher probability to a belief than is justified by the evidence? I'm not sure. If one is a reasonable individual, then all beliefs are essentially probabilistic at some level, so there isn't much distinction between going out of one's way to believe a low-probability claim and going out of one's way to believe a very low-probability claim. (This argument may break down. There still seems to be something inherently different between pragmatic belief in God and pragmatic belief in, say, a flat earth. So I'm not sure I totally buy this argument.) Even if Singularitarianism is of a higher probability than the existence of a deity or of some sort of melioristic order to the universe, that difference may still not be relevant. Singularitarianism, especially in forms that have optimistic views about friendly AI, may in fact be a form of meliorism. If that is the case, then meliorism in the general case must be at least as likely as Singularitarianism, since the latter is a subset of the former. But if one makes a definitional exclusion of Singularitarianism from meliorism, then I consider Singularitarianism more likely than meliorism.
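
(To spell out the subset step explicitly, since it carries the argument: if Singularitarianism S is a subset of meliorism M, then P(S) ≤ P(M) simply by monotonicity of probability, so unrestricted meliorism must receive at least as much credence as its Singularitarian special case.)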

I'm also not sure that a specific belief that we won't end in something like nuclear war is a good thing. Humans are far too optimistic. Spending time worrying about the negative possibilities helps make those possibilities less likely. Ignoring them, or kidding ourselves about their probabilities, does not in general reduce those probabilities.

Also, belief in or promotion of the Singularity may distract from other technologies. If one disagrees with my assessment that the technologies Singularitarians want us to research are technologies we should be researching anyway, then one might think that Singularity hype is directing resources into suboptimal areas (maybe more money is going into AI that should instead go into superconductors, space elevators, quantum computers, or cancer cures). (This may be a weak argument given the possibility of some fields being near saturation for the resources we put into them; there may be issues of diminishing marginal returns. However, this is probably not the case for all interesting areas of research.)

The upshot is that I'd rather not have people trying to pragmatically believe in a Singularity. But I don't think that such belief is as bad as pragmatic religious belief.

Joshua said...

Kurt, yes, Kurzweil's ideas about medicine are one of the most embarrassing things about the Singularity/transhumanist movement. They border on classical medical quackery. Frankly, I find Kurzweil to be a sloppy thinker in general. An earlier draft of this blog post said "And to be sure, there are some very smart people such as Eliezer Yudkowsky and Ray Kurzweil who take the Singularity very seriously." In drafting I removed Kurzweil. I could possibly spend a few hours just going through everything I found highly questionable in Kurzweil's "The Singularity is Near," but that wouldn't be a useful way of spending my time. Yudkowsky, by contrast, is a much more careful thinker (although it has been a week since he last updated Harry Potter and the Methods of Rationality, which makes me annoyed).

KD,

I'm not sure any of those conditions are relevant. I can without too much effort construct games without perfect information, with the possibility of cooperation, and with uncountably many tactical options, and still with no possibility of winning. It is actually more relevant that I can do so even with perfect-information games meeting the other conditions. Note that, at the most generous, if everyone is cooperating, we can regard them in some sense as a single player for the purpose of deciding the upper limits of what is possible.

I find sociological arguments against the Singularity deeply unpersuasive (as I think you and I have discussed before). Sociological reasons are arguments against a near Singularity, or arguments that it is a low-probability event at any given time. They are not an argument against the Singularity itself.

Khagan Din said...

"I can without too much effort construct games without perfect information, possibility of cooperation, and uncountably many tactical options and still no possibility of winning."

Would you, please? Even just a sketch would be very interesting.

Khagan Din said...

"Sociological reasons are arguments against a near Singularity, or against it being a low probability event at any given time. They are not an argument against the Singularity itself."

Sure, that makes sense. I mostly agree. If what you mean is that technological progress, in theory, could accelerate to a point beyond which we would be unable to meaningfully predict its consequences, then sociological counter-arguments are not so much unpersuasive as irrelevant. My point is simply that sociology reduces the probability that we will actually *reach* the Singularity in the next 100 years or so. For all I know, the Singularity still exists.

Joshua said...

KD,

Let's take a game that you and I have played before, Pandemic. We'll add a much larger set of locations, but those locations don't have associated cards, just outbreak potential from diseases. Moreover, they won't create any paths that are shorter. And we will allow that when you remove a disease block in one of the normal locations, you can also remove partial disease blocks from these extra locations. You can remove this extra disease up to a total of no more than one full disease block across these locations (but you can pick separate real-number amounts, so you might, say, remove 1/3 of a block in one of the new locations and 2/3 in another). Obviously, there are uncountably many possible moves as long as there is some disease in the new locations (so perhaps start with some disease in those locations). It is also clear that winning this game is at least as hard as a normal game of Pandemic. That is, a non-winnable Pandemic game, when modified this way, will still be non-winnable. So if one arranges the cards into a non-winnable situation, this is an example.
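
For a stripped-down toy version of the same idea (just an illustration of mine, not Pandemic itself), here's a sketch in Python of a game where the player has uncountably many legal moves every turn yet no strategy wins:

# A toy game with uncountably many legal moves per turn but no winning strategy.
# Each turn the player may remove any real amount r with 0 < r <= 1 from a side
# pool of "disease", but the outbreak track advances by 1 regardless, and the
# player loses once it hits the limit. No choice of r makes any difference.

OUTBREAK_LIMIT = 3

def play(strategy):
    """strategy: a function from turn number to a real number in (0, 1]."""
    side_pool = 10.0   # disease in the extra locations
    outbreaks = 0
    turn = 0
    while outbreaks < OUTBREAK_LIMIT:
        r = strategy(turn)
        assert 0 < r <= 1, "illegal move"
        side_pool = max(0.0, side_pool - r)  # uncountably many options here...
        outbreaks += 1                       # ...but the losing condition advances anyway
        turn += 1
    return "player loses"  # every strategy ends here

# Any strategy, however clever, loses:
print(play(lambda turn: 0.5))
print(play(lambda turn: 1 / (turn + 2)))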

Anonymous said...

Your second point in this post seems simply irrelevant to me. A lot of people have suggested that we may be getting close to the physical limits on microchip size, for instance, but the singularity notion is fundamentally software-based. Sure, a working "theory of everything" is unlikely to improve things much, but it's clear by comparison to, well, ourselves that our ability to create or manipulate complex systems to do what we want is nowhere near what is actually possible, regardless of the physics. What you suggest doesn't seem to rule out significant improvements to the "software" of existing systems (e.g. biology) even if no significant improvements to the hardware are developed.

Joshua said...

One problem with discussing a notion like that of the Singularity is that there seem to be many different versions floating around. For example, Kurzweil places a lot of emphasis on technological improvement in his writing.

Yudkowsky places much less emphasis on that aspect and much more on the software. But ultimately, there are still limits to software. Even if we can create smart general AI, if they can't find ways to improve the underlying physical technology then they are left only with algorithmic improvements. We don't, a priori, have any reason to believe that substantial algorithmic improvements are possible. For example, if P != NP in a strong sense, then there may be fundamental limits to how efficient algorithms can be. Limits on physical computation capability and limits on algorithmic computation ability amount to similar issues.
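
To make the "limits on algorithms" point concrete with a standard textbook example (my illustration, nothing specific to AI): no comparison-based sorting algorithm, no matter who or what designs it, can beat roughly log2(n!) comparisons in the worst case, because each comparison yields at most one bit of information and there are n! possible orderings.

# Worst-case lower bound on comparisons for sorting n distinct items:
# ceil(log2(n!)). Intelligence can't push a comparison sort below this.
import math

def comparison_lower_bound(n):
    # log2(n!) = lgamma(n + 1) / ln(2), computed without huge integers
    return math.ceil(math.lgamma(n + 1) / math.log(2))

for n in (10, 1000, 10**6):
    print(n, comparison_lower_bound(n))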

Note also that the scale of improvement matters. I suspect, for example, that within 20 years we will have genetic engineering to make people smarter, and that within 30 years that engineering will include genes that aren't normally in the human gene pool. I also suspect that we will develop fairly smart general AI, although I don't have any idea on what timeline that will occur. But none of these force a Singularity.

Anonymous said...

Ah yes, I meant to ask which notion of singularity you were using. Have you seen Yudkowsky's breakdown of them?

Joshua said...

No, I have not seen that breakdown before. Once again, Yudkowsky seems to be one of the clearer thinkers on these matters.

Objection 1 (regarding increased resources for the same level of progress) is at some level an objection to all three types of Singularity. I would note that Kurzweil's version of the Singularity seems to be a combination of Accelerating Change and Intelligence Explosion (at least the core claims of both). At least in his book The Singularity is Near, he emphasizes the major direct improvements smart general AIs would be able to make to their hardware. The second critique is largely directed at that Kurzweil form, although it also becomes relevant when discussing a Singularity of the third type, insofar as instead of physical limits one would likely run into mathematical limits.

I wish I had seen this breakdown before I wrote this piece. Yudkowsky has once again done a very good job of clarifying a number of issues.

Joshua said...

Also, I think that the issue of the increasing resources needed to make the same breakthroughs to some extent goes directly against the core claim of the intelligence explosion model.

Anonymous said...

So you're suggesting something like perhaps AI will not result in a singularity, but in the aversion of an anti-singularity? :)

Anonymous said...

And no, don't ask me what I actually mean by that, because that was more a joke than a well-thought-out statement.

Khagan Din said...

Thanks for the Pandemic game. I'll chew on that.

I think the three Singularity theories, if edited for hubris, are just increasingly specific subsets of each other. Right: the Accelerating Change theory, like Moore's Law, roughly predicts that there will be an exponential-like increase in technology. The Event Horizon theory basically adopts the exponential curve, but further predicts that beyond a certain point on this curve we might get some strange results, because superhuman intelligences are hard to predict. The Intelligence Explosion theory basically adopts both the curve and a fuzzy region near the end of the curve, but further predicts that one thing that will happen in the fuzzy region is that intelligence will very quickly increase by several orders of magnitude.

The EH theory is still predicting increasingly rapid growth in intelligence after superhuman intelligence is created, as required by the first theory; it's just pointing out that we can't guess now what sorts of things will be *done* with that intelligence.

The IE theory is still ignorant about *most* of what happens after superhuman intelligences are created, as required by the EH theory...it just adds one narrow prediction.

Thus we should not be surprised if Joshua's objections apply to all three of these theories. Anything that interferes with Moore's Law will interfere with all three theories.

Mitchell said...

"There's a third problem with Singularitism beyond issues of plausibility: It doesn't tell us what to do today."

In fact it does. If greater-than-human intelligence is coming, and if (all else equal) greater intelligence defeats lesser intelligence in any conflict of aims, and if the values of an AI are quite contingent... then we should be trying to ensure that the values of the first transhuman AI are human-friendly, or we are liable to be steamrolled. Thus, the quest for "Friendly AI".

Joshua said...

Mitchell,

Ok. It might be more accurate to say that most general forms of the Singularity don't tell us much. Even then, the claim you've made isn't intrinsically connected to the Singularity. For example, a general AI that's not that bright (say, as smart as me) but thinks a hundred times as fast as I do is going to pose a lot of risk (roughly speaking; processing speed isn't everything. As someone once noted, a dog that thinks a million times as fast just takes one millionth as much time to decide to sniff your crotch). I don't need Singularity ideas to tell me about that.

Moreover, the idea that we're going to get smart AI quickly with no intermediaries is unwarranted. It is extremely unlikely that the first general AIs will be smart enough to self-improve (a general AI that roughly approximates a normal human, for example, simply won't know enough about itself to self-improve).

But overall, a reasonably smart AI poses a serious risk whether or not it can engage in massive self-improvement. (To use an example I was discussing with Sniffnoy earlier, consider an AI that has internet access and finds a practical algorithm for fast factoring.)
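
To make that last example a bit more concrete (a toy sketch with tiny numbers, not anything anyone has built): fast factoring would turn every RSA public key on the internet into its own private key.

# Toy illustration: given a fast way to factor n, the RSA private key follows
# immediately from the public key (n, e). Tiny numbers for readability only.

def egcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Public key (n, e); suppose our hypothetical AI factors n = p * q quickly.
p, q = 61, 53
n, e = p * q, 17

phi = (p - 1) * (q - 1)   # only computable if you know the factors
d = modinv(e, phi)        # the private exponent falls out immediately

message = 42
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))  # prints 42 -- the encryption is broken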

Anonymous said...
This comment has been removed by a blog administrator.