Much has been said in the last few years about an approaching technological Singularity, beyond which humans or humans' descendants will be so far beyond anything we understand today that comparisons would be meaningless. I do not believe that the Singularity is imminent.
What do people mean when they speak of the Singularity? There are a variety of such notions, but most versions of the Singularity focus on self-improving artificial intelligences. The central idea is that humans will not only construct functioning artificial intelligences, but that such AIs will be smarter than humans. Given such entities, technological progress will increase rapidly as the AIs make discoveries and inventions that humans would not. This effect will be self-reinforcing as each successive improvement makes the AIs smarter. There are variations of this idea: other Singularity proponents, generally described as Transhumanists, emphasize genetic engineering of humans or direct interfaces between the human brain and computers. I am skeptical of a Singularity occurring in the near future.
Certainly Singularitarianism is seductive. Variations of it make for great science fiction (Charlie Stross' Eschaton is an excellent example), and some versions of the Singularity, especially those that involve humans being uploaded into immortal computers or the like, are appealing. Singularitarianism may sometimes border on a religion, but it has the virtue of a minimally plausible eschatology, one that doesn't require the intervention of tribal deities, just optimistic estimates of technological and scientific progress. And to be sure, there are some very smart people, such as Eliezer Yudkowsky, who take the Singularity very seriously.
The most common criticism of Singularitarianism is that we will not develop effective AIs. This argument is unpersuasive. There's no intrinsic physical law against developing AIs; we are making slow but steady progress; and we know that intelligences are already possible under the laws of the universe. We're an example.
While I reject most of the common criticisms of a coming Singularity, I am nevertheless skeptical of the idea for two reasons. First, while human understanding of science and technology has been improving over the last few hundred years, the level of resources it takes to produce the same increase in understanding has grown dramatically. For example, in the mid-19th century a few scientists could work out major theories about nature, such as the basics of evolution and electromagnetism. Now most major scientific fields have thousands of people working in them, and yet progress is slow and incremental. There seems to be a meta-pattern: as we learn more, we require ever more resources to make comparable levels of progress. Thus, even if we develop smart AIs, they may not lead to sudden technological progress.
Second, we may simply be close to exploiting the laws of physics as fully as is practical for technological purposes. Many of the technologies we hope to develop may be intrinsically impractical or outright impossible. There may be no room-temperature superconductors. There may be no way to make a practical fusion reactor. As Matt Springer suggested (here and here), we might activate our supersmart AI, only to have it say, "You guys seem to have thought things through pretty well. I don't have much to add." This points to a common problem with Singularity proponents: they often argue that essentially all challenges can be solved by sufficient intelligence. I've personally seen this argument made multiple times by Singularitarians discussing faster-than-light travel. But if something isn't allowed by the laws of physics, then there's nothing we can do. If in a chess game white can force checkmate in three moves, it doesn't matter how smart black is; black will still lose. No matter how smart we are, if the laws of physics don't allow something, then we won't be able to do it, any more than black can prevent white's forced checkmate.
There's a third problem with Singularitarianism beyond issues of plausibility: it doesn't tell us what to do today. Even if no one had ever come up with the Singularity, we'd still be investigating AI, brain-computer interfaces, and genetic engineering. They are all interesting technologies that potentially have major applications and may help us answer fundamental questions about human nature. So in that regard, the Singularity as a concept is unhelpful: it might happen. It might not happen. But it tells us very little about what we should do now.