
Sunday, May 29, 2011

Harry Potter and the Methods of Rationality

I've mentioned Harry Potter and the Methods of Rationality before. It is a Harry Potter fanfiction written by Eliezer Yudkowsky. The central premise of the work is that Harry, instead of having abusive step-parents, has loving step-parents, and his step-father is a scientist. Young Harry grows up learning all about the scientific method, critical thinking, and cognitive biases. HPMR has its positives and negatives: it is hilarious, but there are times when Harry is didactic, and Yudkowsky has clear difficulty making his characters sound like eleven-year-olds. Still, it is worth reading.

I am recommending the fiction now for two reasons. First, it has recently become the most reviewed fanfiction on fanfiction.net. Second, for the last Vericon masquerade a friend and I cosplayed as the versions of Harry and Hermione from HPMR. Pictures can be found at her blog. Note that the costumes were not made entirely by us; the badges were made by Ellen Dimiduk, who does excellent costuming work. Now, Yudkowsky has a policy that people who make cool artwork about the story get cameos in the story. So the latest chapter of HPMR mentions two Hogwarts students, Katarina and Joshua, who helped make costumes for Hogwarts students. Of course people need to go read it now, since I'm a character! If you aren't reading it yet, go and read.

Monday, April 19, 2010

On the Coming Singularity

Much has been said in the last few years about an approaching technological Singularity, past which humans or humans' descendants will be so far beyond anything we understand today that comparisons would be meaningless. I do not believe that the Singularity is imminent.

What do people mean when they speak of the Singularity? There are a variety of such notions, but most versions of the Singularity focus on self-improving artificial intelligences. The central idea is that humans will not only construct functioning artificial intelligences, but that such AIs will be smarter than humans. Given such entities, technological progress will increase rapidly as the AIs make discoveries and inventions that humans would not. This effect will be self-reinforcing as each successive improvement makes the AIs smarter. There are variations on this idea: other Singularity proponents, generally described as Transhumanists, emphasize genetic engineering of humans or direct interfaces between the human brain and computers. I am skeptical of a Singularity occurring in the near future.

Certainly Singularitarianism is seductive. Variations of it make for great science fiction (Charlie Stross' Eschaton is an excellent example), and some versions of the Singularity, especially those that involve humans being uploaded into immortal computers or the like, are appealing. Singularitarianism may sometimes border on a religion, but it has the virtue of a minimally plausible eschatology, one that doesn't require the intervention of tribal deities, just optimistic estimates of technological and scientific progress. And to be sure, there are some very smart people, such as Eliezer Yudkowsky, who take the Singularity very seriously.

The most common criticism of Singularitarianism is that we will not develop effective AIs. This argument is unpersuasive. There's no intrinsic physical law against developing AIs; we are making slow but steady progress; and we know that intelligences are already possible under the laws of the universe. We're an example.

While I reject most of the common criticisms of a coming Singularity, I am nevertheless skeptical of the idea for two reasons. First, while human understanding of science and technology has been improving over the last few hundred years, the resources required today to produce the same increase in understanding have grown dramatically. For example, in the mid-19th century a few scientists could work out major theories about nature, such as the basics of evolution and electromagnetism. Now, however, most major scientific fields have thousands of people working in them, and yet progress is slow and incremental. There seems to be a meta-pattern: as we learn more, we require ever more resources to make comparable progress. Thus, even if we develop smart AIs, they may not lead to sudden technological progress.

Second, we may simply be close to optimizing our understanding of the laws of physics for technological purposes. Many of the technologies we hope to develop may be intrinsically impractical or outright impossible. There may be no room-temperature superconductors. There may be no way to make a practical fusion reactor. As Matt Springer suggested (here and here), we might activate our supersmart AI only to have it say, "You guys seem to have thought things through pretty well. I don't have much to add." This is a recurring problem with Singularity proponents: it is a common argument among Singularitarians that essentially all challenges can be solved by sufficient intelligence. I've personally seen this argument made multiple times by Singularitarians discussing faster-than-light travel. But if something isn't allowed by the laws of physics, then there's nothing we can do. If in a chess game white can force a checkmate in three moves, it doesn't matter how smart black is; black will still lose. No matter how smart we are, if the laws of physics don't allow something, then we won't be able to do that thing, any more than black will be able to prevent the checkmate by white.

There's a third problem with Singularitarianism beyond issues of plausibility: It doesn't tell us what to do today. Even if no one had ever come up with the idea of the Singularity, we'd still be investigating AI, brain-computer interfaces, and genetic engineering. They are all interesting technologies that potentially have major applications and could help us answer fundamental questions about human nature. So in that regard, the Singularity as a concept is unhelpful: It might happen. It might not happen. But it tells us very little about what we should do now.