The Fermi Paradox is a classic puzzle posed by Enrico Fermi. Fermi observed that if one makes the sort of back-of-the-envelope calculation for which he was famous, one expects to see abundant evidence of intelligent life out in space. Moreover, a society doesn't need to be much more advanced than our own before we would likely see direct evidence of its existence. So where is everyone?
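To give a feel for the kind of calculation Fermi had in mind, here is a minimal Drake-equation-style sketch in Python. Every parameter value below is an illustrative assumption rather than a measurement; the point is only that even modestly optimistic guesses produce a galaxy with thousands of detectable civilizations.

```python
# A rough Drake-equation-style estimate of detectable civilizations in
# the galaxy. All parameter values are illustrative guesses, not
# measurements; the point is how quickly plausible inputs yield a
# large number.

R_star = 2.0   # new stars formed per year in the Milky Way (assumed)
f_p = 0.5      # fraction of stars with planets (assumed)
n_e = 1.0      # habitable planets per planetary system (assumed)
f_l = 0.5      # fraction of habitable planets where life arises (assumed)
f_i = 0.1      # fraction of those that develop intelligence (assumed)
f_c = 0.1      # fraction of those that become detectable (assumed)
L = 1e6        # years a detectable civilization lasts (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated detectable civilizations: {N:,.0f}")  # ~5,000 with these guesses
```

Of course, plugging a pessimistic value into any one factor collapses the estimate, and that uncertainty is exactly what the Great Filter argument exploits.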
One proposal to explain this apparent paradox is Robin Hanson's suggestion that there is some "Great Filter" which culls species before they can reach the degree of civilization necessary to spread out to the stars on a large scale. Various roadblocks and events can act as filters. For example, severe asteroid impacts every few million years set life back, but that seems to be a rare and weak filtration effect. One obvious candidate is the origin of life itself: life arising may be much more difficult than we expect, and thus life may be comparatively rare. But life arose fairly early in this planet's history, which makes that explanation less likely.
The most disturbing possibility, and the one on which both Robin Hanson and Nick Bostrom have focused, is that, for us, most of the filter lies not in our past but in our future. This is scary. Events that would result in the complete destruction of humanity are described as existential risks. If such events lie in our future, they are not likely to come from natural causes such as asteroid impacts and gamma-ray bursts, since such events are rare. Existential risk to us is more likely to come from dangerous technologies. In a similar vein, during the Cold War, Carl Sagan worried that the apparent absence of life in the universe might be due to every advanced society having nuked itself. In a post-Cold War world, that particular worry seems less severe. However, Hanson and others have focused on other technologies, especially nanotechnology and rogue AI.
I am not that worried by the Great Filter. I suspect that the vast majority of the Great Filter is behind us. One of the most obvious filtration points is the set of steps from a species being smart to that species having a civilization capable of sustained technological progress. On Earth, there are many species that are almost as smart as humans. Lots of people know that other primates are smart and will name dolphins and elephants as other very intelligent species. But there are many others as well, especially birds: keas, African Grey Parrots, and ravens are only three of many examples. Almost every species of corvid is extremely bright and is capable of puzzle solving that rivals that of human children. However, the steps from there to sustained civilization are clearly large. Only a single species developed language, and even after that point we stagnated for hundreds of thousands of years before developing writing, which is when things really started to take off. So it seems to me that we can plausibly point to a large filtration step just before the development of civilization.
There are other points that have been proposed as filtration steps in the development of life as well. One common argument is the Rare Earth Hypothesis, which posits that the existence and success of life on Earth required a large variety of special conditions. For example, Earth has an unusually large moon, which stabilizes the planet's axial tilt and climate. For most of the features frequently cited as part of Earth's rare nature, we don't seem to have enough data at this point to reasonably judge how common such features are, or how necessary they are for complex life. However, even neglecting Rare Earth filtration effects, the pre-civilization filtration still seems large.
Moreover, many of the exotic anthropogenic events can be safely ruled out as major aspects of the Great Filter. The most plausible anthropogenic candidates are rogue AIs, false vacuum collapse, bad nanotech, and severe environmental damage with accompanying loss of natural resources.
Rogue AIs are an unlikely scenario because it is unlikely that an AI would be dangerous enough to wipe out its creating species and yet not quickly take large-scale control over much of the surrounding space.[1] Thus, if societies were being destroyed by rogue AIs, we should be able to see it. Moreover, we should expect our own solar system to have long since come under the sway of such an AI. Thus, we can safely rule out rogue AI as a major part of the filter.
Similarly, some physicists have proposed that space as we know it is a "false vacuum". While the technical details are complicated, the essential worry is that a sufficiently advanced particle accelerator or similar device could cause space as we know it to be replaced by space that behaves fundamentally differently from what we are used to. The bubble of new vacuum would expand at the speed of light.
We don't need to worry much about civilizations probing the nature of space and causing a collapse of the false vacuum: if a lot of civilizations were doing this, we wouldn't be here to notice. It is remotely plausible that the new vacuum would expand slower than the speed of light. If, for example, the new vacuum expanded at a millionth of the speed of light, that would be enough to quickly destroy any single-planet civilization that triggered such an event, but slow enough to take a very long time to spread before being noticed by other civilizations. However, our current understanding of the laws of physics makes it hard to see how a vacuum collapse could propagate at less than the speed of light, so we can rule this out as a major part of the Great Filter.
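To see why such a slow bubble could destroy its home system quickly yet go unnoticed elsewhere for ages, here is a quick back-of-the-envelope sketch. The one-in-a-million expansion speed is the figure assumed above; the planet size and stellar distance are just representative round numbers.

```python
# Purely illustrative arithmetic for a vacuum bubble expanding at a
# millionth of the speed of light (the figure assumed in the text).

c_m_per_s = 3.0e8                  # speed of light in meters per second
bubble_speed = 1e-6 * c_m_per_s    # assumed expansion speed: about 300 m/s

earth_diameter_m = 1.27e7          # rough diameter of an Earth-like planet
nearest_star_ly = 4.2              # distance to a Proxima-like neighbor, in light years

# Time for the bubble to engulf the home planet, in hours.
planet_hours = earth_diameter_m / bubble_speed / 3600

# Time for the bubble front to reach a neighboring star, in years:
# at 1e-6 c, each light year takes a million years to cross.
star_years = nearest_star_ly / 1e-6

print(f"Engulfs the home planet in about {planet_hours:.0f} hours")
print(f"Reaches a neighboring star after about {star_years:,.0f} years")
```

With these assumptions the bubble swallows its home planet in roughly half a day but takes about four million years to reach even the nearest star, which is why such an event would be so hard for outside observers to notice.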
Nanotechnology is one of the most plausible candidates for a section of the Great Filter in front of us, for the simple reason that severe nanotech disasters don't produce effects that would destroy or visibly alter nearby stars or the like. While there are a variety of nanotech disaster scenarios, they essentially revolve around some form of out-of-control replicator consuming resources that humans need to survive, or disrupting the ecosystem so much that we cannot survive. If a nearby solar system had a severe nanotech disaster, we wouldn't be able to tell. This situation is similar to Sagan's nuclear war scenario in that it allows civilizations to frequently wipe themselves out in a way that we can't easily observe.
Environmental damage and overconsumption of resources are another possible problem. It is possible that species exhaust their basic resources before they become technologically advanced. If, for example, humanity ran out of all fossil fuels without adequate replacements, this could prevent further expansion. However, this seems an unlikely explanation for Fermi's paradox: even extreme resource consumption and environmental damage are unlikely to result in the complete destruction of an intelligent species. This possibility is the modern equivalent of Sagan's concern about nuclear war, a possibility which gets undue attention due to the current political climate.
So it seems likely that most of the Great Filter is behind us. However, this is not a cause for complacency. First, the argument that the Great Filter is behind us is a weak one: as long as our sample consists of a single civilization, our own, we cannot do more than make very rough estimates. Moreover, even if most of the Great Filter is behind us, that doesn't imply that we are paying enough attention to existential risk. Even back-of-the-envelope calculations suggest that we aren't putting enough resources into dealing with existential threats, whether natural or caused by humans.
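As a rough illustration of what such a back-of-the-envelope calculation looks like, here is a toy expected-value sketch. Every number in it, the baseline risk, the achievable reduction, and the dollar value assigned to a statistical life, is a placeholder assumption chosen only to show the shape of the argument, not an estimate anyone should rely on.

```python
# A toy expected-value calculation of the sort gestured at above.
# Every number is a placeholder assumption used only to show the
# shape of the argument.

extinction_prob_per_century = 0.01  # assumed baseline risk of extinction this century
achievable_reduction = 0.10         # assumed fraction of that risk we could remove
world_population = 7e9              # roughly the number of people alive today (assumed)
value_per_life = 5e6                # a conventional statistical-life figure, in dollars (assumed)

# Expected lives saved just among people alive now, ignoring all future
# generations (which would make the number astronomically larger).
expected_lives_saved = extinction_prob_per_century * achievable_reduction * world_population
expected_value = expected_lives_saved * value_per_life

print(f"Expected lives saved: {expected_lives_saved:,.0f}")
print(f"Implied value of that risk reduction: ${expected_value:,.0f}")
```

Even with these deliberately modest placeholder inputs, the implied value runs to tens of trillions of dollars, orders of magnitude more than anything currently spent on the problem.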
What needs to be done? First, we need to get a better idea of where the filtration steps actually lie. The most obvious way to do that is to look for life on other planets. If we don't find any life on other bodies in the solar system, that increases the chance that a large part of the filtration comes from the difficulty of life arising in the first place, and we can breathe more easily. If, however, we find life elsewhere, especially complex life, we have more reason to think that the filter lies ahead of us.
Second, we need to put more resources into dealing with existential risks. One excellent recent step was NASA's WISE mission, which looked for asteroids likely to impact the Earth. We're now tracking many more of the near-Earth asteroids and are probably tracking all of the asteroids that are both large and likely to intersect Earth's orbit. At present, though, we're paying very little attention to human-caused catastrophic risks. Catastrophic AI seems unlikely, but it is clear that little attention is being paid to the issue. Similar observations apply to nanotech and other concerns. More resources should be devoted to examining these dangers before the technologies become fully developed, by which time it may be too late.
Unfortunately, there's a tendency to dismiss risks that appear in popular science fiction precisely because they appear in such works. This is just as bad as using fictional works as a reason to eschew a technology. Moreover, humans have a lot of trouble thinking about large scale problems, and the scale of a problem doesn't get much larger than the complete destruction of humanity.
So overall, the Great Filter doesn't worry me too much. But even without the threat of the Great Filter, we still aren't doing enough to deal with the big risks to our existence. If most of the Great Filter is behind us, it would be all the more tragic if humanity were destroyed now, when we are but a few generations away from spreading beyond our planet.
[1] I thought that this point might be original to me, but while writing this blog entry I found that it has been made before. See, e.g., Katja Grace's remarks here.