Comments on Religion, Sets, and Politics: "On the Coming Singularity" (20 comments, newest first)

Anonymous (2010-06-17 09:49):
This comment has been removed by a blog administrator.

Joshua (2010-05-01 23:11):
Mitchell,

Ok. It might be more accurate to say that most general forms of the Singularity don't tell us much. Even then, the claim you've made isn't intrinsically connected to the Singularity. For example, a general AI that's not that bright (say, as smart as me) but thinks a hundred times as fast as I do is going to pose a lot of risk. (Roughly speaking; processing speed isn't everything. As someone once noted, a dog that thinks a million times as fast just takes one millionth as much time to decide to sniff your crotch.) I don't need Singularity ideas to tell me about that.

Moreover, the idea that we're going to get smart AI quickly with no intermediaries is uncalled for. It is extremely unlikely that the first general AIs will be smart enough to self-improve (a general AI that roughly approximates a normal human, for example, simply won't know enough about itself to self-improve).

But overall, a reasonably smart AI poses a serious risk whether or not it can engage in massive self-improvement. (To use an example I was discussing with Sniffnoy earlier, consider an AI that has internet access and finds a practical algorithm for fast factoring.)

Mitchell (2010-04-30 20:49):
"There's a third problem with Singularitism beyond issues of plausibility: It doesn't tell us what to do today."

In fact it does. If greater-than-human intelligence is coming, and if (all else equal) greater intelligence defeats lesser intelligence in any conflict of aims, and if the values of an AI are quite contingent... then we should be trying to ensure that the values of the first transhuman AI are human-friendly, or we are liable to be steamrolled. Thus, the quest for "Friendly AI".

Khagan Din (2010-04-22 09:52):
Thanks for the Pandemic game. I'll chew on that.

I think the three Singularity theories, if edited for hubris, are just increasingly specific subsets of each other. Right: like Moore's Law, the Accelerating Change theory predicts, roughly, that there will be an exponential-like increase in technology. The Event Horizon theory basically adopts the exponential curve, but further predicts that beyond a certain point on this curve we might get some strange results, because superhuman intelligences are hard to predict. The Intelligence Explosion theory basically adopts both the curve and a fuzzy region near the end of the curve, but further predicts that one thing that will happen in the fuzzy region is that intelligence will very quickly increase by several orders of magnitude.

The EH theory is still predicting increasingly rapid growth in intelligence after superhuman intelligence is created, as required by the first theory; it's just pointing out that we can't guess now what sorts of things will be *done* with that intelligence.

The IE theory is still ignorant about *most* of what happens after superhuman intelligences are created, as required by the EH theory; it just adds one narrow prediction.

Thus we should not be surprised if Joshua's objections apply to all three of these theories. Anything that interferes with Moore's Law will interfere with all three theories.

Anonymous (2010-04-21 19:25):
And no, don't ask me what I actually mean by that, because that was more a joke than a well-thought-out statement.

Anonymous (2010-04-21 19:23):
So you're suggesting something like: perhaps AI will not result in a singularity, but in the aversion of an anti-singularity? :)

Joshua (2010-04-21 15:26):
Also, I think that the issue of the increasing resources needed to make the same breakthroughs to some extent directly goes against the core claim of the intelligence explosion model.

Joshua (2010-04-21 15:24):
No, I have not seen that breakdown before. Once again, Yudkowsky seems to be one of the clearer thinkers on these matters.

Objection 1 (regarding increased resources for the same level of progress) is at some level an objection to all three types of Singularities. I would note that Kurzweil's version of the Singularity seems to be a combination of Accelerating Change and Intelligence Explosion (at least both core claims). At least in his book The Singularity Is Near, he emphasizes the major direct improvements smart general AIs would be able to make to their hardware. The second critique is largely directed at that Kurzweil form, although it also becomes relevant when discussing a Singularity of the third type, insofar as instead of physical limits one would likely have mathematical limits.

I wish I had seen this breakdown before I wrote this piece. Yudkowsky has once again done a very good job of clarifying a number of issues.

Anonymous (2010-04-21 15:07):
Ah yes, I meant to ask which notion of singularity you were using. Have you seen Yudkowsky's breakdown of them (http://yudkowsky.net/singularity/schools)?

Joshua (2010-04-21 14:14):
One problem with discussing a notion like that of the Singularity is that there seem to be many different versions floating around. For example, Kurzweil places a lot of emphasis on technological improvement in his writing.

Yudkowsky places much less emphasis on that aspect and much more on the software. But ultimately, there are still limits to software. Even if we can create smart general AIs, if they can't find ways to improve the underlying physical technology then they are left only with algorithmic improvements. We don't, a priori, have any reason to believe that substantial algorithmic improvements are possible. For example, if P != NP in a strong sense then there may be fundamental limits to how efficient algorithms can be. Limits on physical computation capability and limits on algorithmic efficiency amount to similar issues.

Note also that the scale of improvement matters. I suspect, for example, that within 20 years we will have genetic engineering to make people smarter, and that within 30 years that engineering will include genes that aren't normally in the human gene pool. I also suspect that we will develop fairly smart general AI, although I don't have any idea under what timeline that will occur. But none of these force a Singularity.

Anonymous (2010-04-21 13:59):
Your second point in this post seems simply irrelevant to me. A lot of people have suggested that we may be getting close to the physical limits on microchip size, for instance, but the singularity notion is fundamentally software-based. Sure, a working "theory of everything" is unlikely to improve things much, but it's clear by comparison to, well, ourselves that our ability to create or manipulate complex systems to do what we want is nowhere near what is actually possible, regardless of the physics. What you suggest doesn't seem to rule out significant improvements to the "software" of existing systems (e.g. biology) even if no significant improvements to the hardware are developed.

Joshua (2010-04-21 09:46):
KD,

Let's take a game that you and I have played before: Pandemic. We'll add a much larger set of locations, but those locations won't have associated cards, just outbreak potential from diseases. Moreover, they won't make any paths any shorter. And we will allow that when you remove a disease block in one of the normal locations, you can also remove partial disease blocks from these extra locations, up to a total of no more than one full disease block across them (you can pick separate real numbers, so you might remove 1/3 of a block in one of the new locations and 2/3 in another). Obviously, there are uncountably many possible moves as long as there is some disease in the new locations (so start with some disease in those locations). It is also clear that winning this game is at least as hard as winning a normal game of Pandemic; that is, a non-winnable Pandemic game modified this way will still be non-winnable. So if one has the cards arranged into a non-winnable situation, this is an example.

Khagan Din (2010-04-21 01:31):
"Sociological reasons are arguments against a near Singularity, or for it being a low-probability event at any given time. They are not an argument against the Singularity itself."

Sure, that makes sense. I mostly agree. If what you mean is that technological progress, in theory, could accelerate to a point beyond which we would be unable to meaningfully predict its consequences, then sociological counter-arguments are not so much unpersuasive as irrelevant. My point is simply that sociology reduces the probability that we will actually *reach* the Singularity in the next 100 years or so. For all I know, the Singularity still exists.

Khagan Din (2010-04-21 01:27):
"I can without too much effort construct games with imperfect information, the possibility of cooperation, and uncountably many tactical options, and still no possibility of winning."

Would you, please? Even just a sketch would be very interesting.

Joshua (2010-04-20 19:57):
Kurt, yes, Kurzweil's ideas about medicine are one of the most embarrassing things about the Singularity/transhumanist movement. They border on classical medical quackery. Frankly, I find Kurzweil to be a sloppy thinker in general. An earlier draft of this blog post said "And to be sure, there are some very smart people such as Eliezer Yudkowsky and Ray Kurzweil who take the Singularity very seriously." In drafting I removed Kurzweil. I could possibly spend a few hours just going through everything I found highly questionable in Kurzweil's "The Singularity Is Near", but that wouldn't be a useful way of spending my time. Yudkowsky, by contrast, is a much more careful thinker (although it has been a week since he last updated Harry Potter and the Methods of Rationality, which makes me annoyed).

KD,

I'm not sure any of those conditions are relevant. I can without too much effort construct games with imperfect information, the possibility of cooperation, and uncountably many tactical options, and still no possibility of winning. It is actually more relevant that I can do so with perfect-information games that satisfy the other conditions. Note that, most generously, if everyone is cooperating, we can regard them in some sense as a single player for deciding the upper limits of what is possible.

I find sociological reasons against the Singularity deeply unpersuasive (as I think you and I have discussed before). Sociological reasons are arguments against a near Singularity, or for it being a low-probability event at any given time. They are not an argument against the Singularity itself.

Joshua (2010-04-20 19:45):
Raphael,

My attitude towards pragmatic belief in religion varies daily, although you are correct that it is generally negative.

To address that issue briefly: I'm not in general convinced by claims that religion is pragmatically useful. For example, there's evidence that even just thinking about religion increases outgroup vs. ingroup awareness. See "Priming Christian Religious Concepts Increases Racial Prejudice", http://spp.sagepub.com/cgi/content/abstract/1/2/119

Generally, I'm not convinced that in the long run any deliberately counterfactual belief can be pragmatically useful. Eventually, reality is going to bite you. Moreover, I personally would rather have truth in hell than lies in heaven. We should value truth.

Moreover, trying to believe a counterfactual statement preps our minds for bad thinking. For example, there's a heavy correlation between belief in 9/11 conspiracies and belief that the moon landings were a hoax. Prepping one's mind to not evaluate evidence well cannot be easily compartmentalized.

Note also that there's something patronizing and paternalistic about promoting beliefs for pragmatic reasons. Unless one can brainwash oneself knowing that one is brainwashing oneself (some people seem to be able to do so, but I don't fully understand how this works; they may just be convinced that they have brainwashed themselves, which is not the same thing), this means that one will essentially be part of an elite which promotes belief among the masses.

How does all this apply to Singularitism? Well, one must ask: is it more acceptable to assign a higher probability to a belief than is justified by the evidence? I'm not sure. If one is a reasonable individual, then all beliefs are essentially probabilistic at some level. So there isn't much distinction between going out of one's way to believe a low-probability claim and going out of one's way to believe a very low-probability claim. (This argument may break down. There still seems to be something inherently different between pragmatic belief in God and pragmatic belief in, say, a flat earth. So I'm not sure I totally buy into this argument.) Even if Singularitism is of a higher probability than the existence of a deity or of some sort of melioristic order to the universe, that difference may still not be relevant. Singularitism, especially in forms that have optimistic views about friendly AI, may in fact be a form of meliorism. If that is the case, then meliorism in the general case must be more likely than Singularitism, since one is a subset of the other. But if one makes a definitional exclusion of Singularitism from meliorism, then I consider Singularitism more likely than meliorism.

I'm also not sure that a specific belief that we won't end in something like nuclear war is a good thing. Humans are far too optimistic. Spending time worrying about the negative possibilities helps make those possibilities less likely. Ignoring them or kidding ourselves about their probabilities does not in general reduce the probabilities.

Also, belief in or promotion of the Singularity may distract from other technologies. If one disagrees with my assessment that the techs that Singularitarians want us to research are techs that we should be researching, then one might think that Singularity hype is resulting in more resources going into suboptimal areas (maybe more money is going into AI that should instead go into superconductors, space elevators, quantum computers, or cancer cures). This may be a weak argument given the possibility of some fields being near saturation for the resources we put into them; there may be issues due to diminishing marginal returns. However, this is probably not the case for all interesting areas of research.

The upshot is that I'd rather not have people trying to pragmatically believe in a Singularity. But I don't think that such belief is as bad as pragmatic religious belief.

Khagan Din (2010-04-20 14:35):
It's an interesting post. I agree with Josh that intelligence may not be of unlimited use if the laws of physics have firm limits, but I would like to see a more in-depth defense of the claim that the laws of physics have finite complexity.

Chess is a complicated game, but it is also a game with perfect information, no possibility of cooperation, and countably many tactical options. It is not clear that any of these limitations apply to the 'game' of outwitting the laws of physics.

See also http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=1%23696 for sociological reasons why the Singularity might not happen, even if the laws of physics allow for it.

Anonymous (2010-04-20 10:03):
http://www.lef.org/magazine/mag2005/sep2005_report_kurzweil_01.htm

It tells people to take lots of vitamins.

summortus (2010-04-19 18:31):
I am very interested in the final argument of the critique, insofar as it is interesting to compare belief in the Singularity with William James's "meliorism" (a very limited pragmatic religious faith: believing the universe has a purpose and has some sort of ill-defined order to it, because the difference that such a belief makes is, James thinks, reducible to the motivation to act in ways that are good).

While I'd imagine you are not inclined to be sympathetic to pragmatist defenses of religious faith (which I've summarized, I must stress, rather weakly), I do wonder whether you think such defenses are adoptable for the Singularity (i.e., is it better to believe that technological progress ends, Asimov-style, in the reversal of entropy ("Let there be light") than in, say, nuclear annihilation?).

summortus (2010-04-19 18:30):
This comment has been removed by the author.