Monday, November 7, 2011
A quick note on log odds
Probabilities are usually represented as numbers ranging from 0 to 1. However, this standard representation is not always optimal. There is another mapping of probability that ranges from negative infinity to infinity. This system, called "log odds," has a number of advantages. In the standard log odds approach, one maps the probability x of an event to the quantity log(x/(1-x)). Brian Lee and Jacob Sanders wrote a good summary (pdf) of this system which discusses its advantages and disadvantages. As they observe, log odds makes it immediately apparent that a change in probability from 51% to 52% isn't that big, whereas the change from 98% to 99% is much larger, in the sense that the chance of the event not happening has now halved. Log odds makes this sort of intuition immediately obvious from the numbers. Brian and Jacob discuss the advantages and disadvantages of log odds in detail, and show how it is particularly useful for doing Bayesian updates. I strongly recommend reading their piece.
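To make the scale concrete, here's a minimal Python sketch (my own illustration, not from Lee and Sanders' write-up) that computes log odds and compares the two changes mentioned above:

```python
import math

def log_odds(p):
    """Map a probability p in (0, 1) to its log odds, log(p / (1 - p))."""
    return math.log(p / (1 - p))

# The move from 98% to 99% is a much larger step in log odds
# than the move from 51% to 52%.
for a, b in [(0.51, 0.52), (0.98, 0.99)]:
    print(f"{a:.2f} -> {b:.2f}: change in log odds = {log_odds(b) - log_odds(a):.3f}")
```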
Wednesday, October 12, 2011
Occupy Wall Street, Boundaries, and Gratuitous Promotion of Family Members
My sister has a piece up at the Huffington Post discussing exactly how Occupy Wall Street lost her sympathy. She correctly points out actual problems with the Wall Street protesters, but I don't agree with what was, for her, the final straw. She objects to the protesters' decision to protest at the homes of major executives, saying that the executives have a right to keep their private and public lives separate. I find this argument deeply unconvincing. Someone prominent enough to be running a major corporation has less of a right to privacy than a random individual. There might be an argument here if the protests were directed at the homes of mid-level or upper-level management, but that argument doesn't apply to the CEOs of billion-dollar corporations.
This is not a defense of the Occupy protesters in general. They aren't very coherent, and those who have tried to state specific goals have offered goals that are unconstitutional, immoral, unethical, hopelessly naive, or just bad ideas. In that regard, they are essentially the left-wing equivalent of the Tea Party. It is possible that they will turn into something which does deal with the serious problems this country has, especially the massive income inequality which has become worse in the last few years, but right now I'm not optimistic. The main thing that needs to be done right now is getting the generic lower-middle-class voter to understand that people like Herman Cain have economic interests that conflict with their own. There seems to be a certain class of economically badly-off voters who somehow identify with the economic interests of people whose incomes are often an order of magnitude or more higher than their own.
As long as I'm pontificating about Occupy Wall Street, there are a few other things to note. First, whether or not one agrees with the protesters, the treatment of the protests by the police has in some cases been unacceptable. The mass arrest of protesters in Boston is a good example. Moreover, mistreating protesters is an easy way for people to build sympathy with a movement and come to agree with it, whether or not the movement's points have any coherence or validity. Second, using protesters' behavior as evidence about economic policies is bad epistemology. This has led to inane pieces like this one, where various economic policies (some good, some bad) are justified simply by the existence of protesters. Protesters in this context are evidence that people are unhappy with their current economic situation. Assuming that these people have any idea what to do about economic policy, or that their existence can be easily traced to specific policies, is unjustified.
In any event, my sister's piece is worth reading. She's not in the one percent, but she's not in the low percentiles either. If OWS is going to succeed at anything, it will need people with average or moderately high incomes, like my sister. Right now, it isn't winning them over.
Monday, September 12, 2011
A brief note on non-transitive dice
I've talked before about non-transitive dice. Given a pair of dice X and Y, we say that X beats Y if, when the pair is rolled, X shows a larger number than Y more than half the time. It turns out that one can construct dice A, B, and C such that A beats B, B beats C, and yet C beats A. This is a neat and weird property.
During a recent discussion in which I used non-transitive dice as an example of a counterintuitive aspect of mathematics, I was pointed to an even weirder variant. Consider the following set of dice: A has sides (5,5,5,2,2,2), B has sides (4,4,4,4,4,1), and C has sides (6,3,3,3,3,3).
Here A beats B, B beats C, and C beats A. But here's the really cool part: suppose I roll two copies of A, two copies of B, or two copies of C and compare the sums. Now things actually reverse! That is, a pair of Bs beats a pair of As, a pair of As beats a pair of Cs, and a pair of Cs beats a pair of Bs.
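Here is a short Python sketch (mine, not from the discussion that pointed me to these dice) that verifies both the single-die cycle and the reversal for pairs by exact enumeration of all outcomes:

```python
from itertools import product

A = (5, 5, 5, 2, 2, 2)
B = (4, 4, 4, 4, 4, 1)
C = (6, 3, 3, 3, 3, 3)

def beats(x, y, copies=1):
    """P(sum of `copies` rolls of die x exceeds sum of `copies` rolls of die y),
    computed by enumerating every equally likely outcome."""
    wins = total = 0
    for xs in product(x, repeat=copies):
        for ys in product(y, repeat=copies):
            total += 1
            wins += sum(xs) > sum(ys)
    return wins / total

# Single dice: A beats B, B beats C, C beats A (each probability > 1/2).
print(beats(A, B), beats(B, C), beats(C, A))
# Pairs: the cycle reverses -- B beats A, A beats C, C beats B.
print(beats(B, A, 2), beats(A, C, 2), beats(C, B, 2))
```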
This is a much more sensitive property than plain non-transitivity; most sets of non-transitive dice do not have it. We can describe this sensitivity more rigorously. Suppose we have a strictly increasing function f(x), that is, a function such that f(x) is greater than f(y) whenever x is greater than y. Now suppose we take a set of non-transitive dice and relabel each value x with f(x). The dice will still be non-transitive. But, given a set of non-transitive, reversible dice, reversibility is not necessarily preserved by the f mapping. This reflects the much more sensitive nature of the reversible dice.
Here's a question I have so far been unable to answer: is it possible to make a set of dice which do an additional reversal? That is, is there a set of dice such that rolling three copies of each die reverses the direction of the cycle yet again?
Thursday, September 1, 2011
Voluntary Taxation and Gratuitous Promotion of Family Members
It looks like the last of my siblings has now entered the blogosphere. My sister Jacoba has a piece up at the Huffington Post discussing the reaction to Warren Buffett's statements that the rich are not being taxed enough. She points out that people who think they aren't being taxed enough can always just write a check to the US Treasury.
Her point is an interesting one, but I think it is misguided. Very few rich people think like Buffett does. Almost everyone who is in favor of higher taxes thinks that the group who should be paying them is the people with incomes slightly above their own. Moreover, even if a large number of rich people agreed with Buffett, those who gave much more of their money to the federal government while others in their income bracket did not would suffer a relative loss of income. There's a fair bit of evidence that people's sense of status and wealth is a function of the people around them, so if most of the rich aren't paying more, it is quite understandable that the others would not want to. It is thus reasonable for some high-income people to call for higher taxes even as they don't make voluntary payments. In any event, the idea is an interesting one and the piece is worth reading.
Tuesday, July 5, 2011
The Fermi Paradox, The Great Filter, and Existential Risk
The Fermi Paradox is a classic puzzle proposed by Enrico Fermi. Fermi observed that if one makes back-of-the-envelope calculations of the sort for which he was famous, one would expect to see a great deal of intelligent life out in space. Moreover, it doesn't take a society much more advanced than our own before one would likely see direct evidence of its existence. So where is everyone?
One proposal to explain this apparent paradox is Robin Hanson's suggestion that there is some "Great Filter" which culls species before they can reach the degree of civilization necessary to spread out to the stars on a large scale. Various roadblocks and events can act as filters. For example, severe asteroid impacts every few million years set life back, but that seems to be a rare and weak filtration effect. One obvious roadblock is the arrival of life itself: life arising may be much more difficult than we expect, and thus life may be comparatively rare. But life arose fairly early in this planet's history, which makes this explanation unlikely.
The most disturbing possibility, and the one on which both Robin Hanson and Nick Bostrom have focused, is that, for us, most of the filter lies not in our past but in our future. This is scary. Events which would result in the complete destruction of humanity are described as existential risks. If such events lie in our future, they are not likely to come from natural causes such as asteroid impacts or gamma-ray bursts, since such events are rare; existential risk to us is more likely the result of dangerous technologies. In a similar vein, during the Cold War, Carl Sagan worried that the apparent absence of life in the universe might be due to every advanced society having nuked itself. In a post-Cold War world, that particular worry seems less severe. However, Hanson and others have focused on other technologies, especially those arising from nanotechnology and rogue AI.
I am not that worried by the Great Filter. I suspect that the vast majority of the Great Filter is behind us. One of the most obvious filtration points is the set of steps from a species being smart to that species having a civilization capable of sustained technological progress. On Earth, there are many extremely smart species that are almost as smart as humans. Lots of people know that other primates are smart and will name dolphins and elephants as other very smart species, but there are many others as well, especially birds. Keas, African Grey Parrots, and ravens are only three of the many examples; almost every species of corvid is extremely bright and is capable of puzzle solving that rivals that of human children. However, the steps from there to sustained civilization are clearly large. Only a single species developed language, and even after that point, we stagnated for hundreds of thousands of years before developing writing, which is when things really started to take off. So it seems to me that we can plausibly point to a large filtration step just before the development of civilization.
There are other points which have been proposed as filtration points in the development of life as well. One common argument is the Rare Earth Hypothesis which posits that the existence and success of life on Earth required a large variety of different conditions. For example, Earth has a large moon which helps protect the planet from asteroid strikes. For most of the features frequently cited as part of Earth's rare nature we don't seem to have enough data at this point to reasonably judge how common such features are or how necessary they are for complex life. However, even neglecting the Rare Earth filtration effects, the pre-civilization filtration still seems large.
Moreover, many exotic anthropogenic events can be safely ruled out as major aspects of the Great Filter. The most plausible anthropogenic candidates are rogue AIs, false vacuum collapse, bad nanotech, and severe environmental damage with accompanying loss of natural resources.
Rogue AIs are an unlikely scenario because it is unlikely that an AI would be bad enough to wipe out its creating species and then not quickly take large-scale control over much of the surrounding space.[1] Thus, if societies were being destroyed by rogue AIs, we should be able to see this. Moreover, we should expect our own solar system to have long since come under the sway of such an AI. Thus, we can safely rule out rogue AI as a major part of the filter.
Similarly, some physicists have proposed that space as we know it is a "false vacuum". While the technical details are complicated, the essential worry is that a sufficiently advanced particle accelerator or similar device could cause space as we know it to be replaced by space that behaves fundamentally differently than what we are used to. The new space would expand at the speed of light.
We don't need to worry about civilizations probing the nature of space and causing a collapse of the false vacuum; if there were a lot of civilizations doing this, we wouldn't be here to notice. It is remotely plausible that a new vacuum could expand slower than the speed of light. If, for example, the new type of vacuum expanded at a millionth of the speed of light, that would be enough to quickly destroy any single-planet civilization that triggered such an event, but slow enough to take a very long time to spread before being noticed by other civilizations. However, our current understanding of the laws of physics makes it hard to see how a vacuum collapse could occur at less than the speed of light. So we can rule this out as a major part of the Great Filter.
Nanotechnology is one of the most plausible options for a section of the Great Filter in front of us for the simple reason that severe nanotech events don't create results that will destroy or alter nearby stars or the like. While there are a variety of nanotech disaster scenarios, they essentially revolve around some form of out of control replicator consuming resources that humans need to survive or disrupting the ecosystem so much that we cannot survive. If a nearby solar system had a severe nanotech disaster, we wouldn't be able to tell. This situation is similar to Sagan's nuclear war scenario in that it allows civilizations to frequently wipe themselves out in a way that we can't easily observe.
Environmental damage and overconsumption of resources are another possible problem. It is possible that species exhaust their basic resources before they become technologically advanced enough to expand. If, for example, humanity ran out of all fossil fuels without adequate replacements, this could prevent further expansion. However, this seems to be an unlikely explanation for Fermi's paradox: even extreme resource consumption and environmental damage are unlikely to result in the complete destruction of an intelligent species. This possibility is the modern equivalent of Sagan's concern about nuclear war, a possibility which gets undue attention due to the current political climate.
So, it seems likely that most of the Great Filter is behind us. However, this is not a cause for complacency. First, the argument that the Great Filter is behind us is a weak one. As long as our sample of civilizations remains a single civilization, we cannot do more than make very rough estimates. Moreover, even if most of the Great Filter is behind us, that doesn't imply that we are necessarily paying enough attention to existential risk. Even back of the envelope calculations suggest that we aren't putting enough resources into dealing with existential risk threats, whether natural or caused by humans.
What needs to be done? First, we need to get a better idea of where the filtration steps actually lie. The most obvious way to do that is to look for life on other planets. If we don't find any life on other bodies in the solar system, that increases the chance that a large part of the filtration comes from life arising in the first place, and we can breathe more easily. If, however, we find life elsewhere, especially complex life, we have increased reason to think that the filter lies ahead of us.
Second, we need to put more resources into dealing with existential risks. One excellent recent step was NASA's WISE mission which looked for asteroids likely to impact the Earth. We're now tracking a lot more of the near Earth asteroids and are probably tracking all of the asteroids that are both large and likely to intersect Earth orbit. At present, we're paying very little attention to human-caused catastrophic risk events. Catastrophic AI seems unlikely, but it is clear that little attention is being paid to the issue. Similar observations apply to nanotech and other concerns. More resources should be devoted to examining these dangers before the technologies become fully developed by which time it may be too late.
Unfortunately, there's a tendency to dismiss risks that appear in popular science fiction precisely because they appear in such works. This is just as bad as using fictional works as a reason to eschew a technology. Moreover, humans have a lot of trouble thinking about large scale problems, and the scale of a problem doesn't get much larger than the complete destruction of humanity.
So overall, the Great Filter doesn't worry me too much. But even without the threat of the Great Filter, we still aren't doing enough to deal with the big risks to our existence. If most of the Great Filter is behind us, it would be all the more tragic if humanity were destroyed now, when we are but a few generations away from spreading beyond our planet.
[1] I thought that this point might be original to me, but while writing this blog entry I found that it has been made before. See, e.g. Katja Grace's remarks here.
Labels: Enrico Fermi, Fermi Paradox, Nick Bostrom, politics, Robin Hanson, science
Wednesday, June 29, 2011
Why I am a feminist.
I self-identify as a feminist. I think human males in general should self-identify as feminists. If we want to live in a society that is as technologically and scientifically advanced as it can be, we must support feminism.
By "feminist," I don't mean that someone who believes that males and females are identical. Obviously they aren't. And I don't mean that that there aren't innate biological differences, some of which will result in statistically significant differences in the general population. Such differences indeed exist.
By feminist I mean someone who supports identical rights for people regardless of gender. I also believe that we should encourage females to pursue whatever professions, occupations, and hobbies they want and are capable of, and that we should discourage negative stereotyping about lack of ability.
I don't self-identify as a feminist out of some deep ideological grounding. I don't have any strong ideological affinity with most of the feminist movement. Sure, equality in the abstract is nice. However, it is clear that stereotypes about females negatively impact everyone. And so I support feminism, not out of some deep belief, but out of simple self-interest.
How many women became housewives or secretaries who might otherwise have been the next Barbara McClintock or Emmy Noether, but for the fact that they were mistreated and told that math and science were for men? I don't know. But I do know that, for each woman who was discouraged, there is at least one more interesting theorem, one more cool biological fact, one more interesting astronomical phenomenon that society missed out on. And some of those women would have gone on not just to make interesting discoveries but to make practical, helpful ones. I have trouble keeping count of how many friends and relatives I've lost to cancer and other illnesses. How many of them would still be alive today if the right little girl hadn't been told that she couldn't do math or that science was for the boys? I don't know, but I can guess that it is probably more than one.
I'm a feminist not because I'm a good person who cares about equality, but because I'm a self-interested person who wants to learn and benefit from everyone I can. I want to live in the best, most technologically advanced society that I can. Therefore, I am a feminist.
By "feminist," I don't mean that someone who believes that males and females are identical. Obviously they aren't. And I don't mean that that there aren't innate biological differences, some of which will result in statistically significant differences in the general population. Such differences indeed exist.
By feminist I mean someone who supports identical rights for people regardless of gender. I also believe that we should encourage females to pursue whatever professions, occupations and hobbies that they want and of which they are capable and that we should discourage negative stereotyping about lack of ability.
I don't self-identify as a feminist out of some deep ideological grounding. I don't have any strong ideological affinity with most of the feminist movement. Sure, equality in the abstract is nice. However, it is clear that stereotypes about females negatively impact on everyone. And so, I support feminism, not out of some deep belief, but out of simple self-interest.
How many women have become housewives or secretaries who might otherwise have been the next Barbara McClintock or Emmy Noether but for the fact that they were mistreated and told that math and science were for men? I don't know. But I do know that, for each of the women who were discouraged, there is at least one more interesting theorem, one more cool biological fact, one more interesting astronomical phenomenon that society missed out on. And some of those would have gone on, not just make interesting discoveries but to make practical, helpful discoveries. I have trouble keeping count how many friends and relatives I've lost due to cancer and other illnesses. How many of them would still be alive today if the right little girl hadn't been told that she couldn't do math or that science was for the boys? I don't know, but I can guess that it is probably more than one.
I'm a feminist not because I'm a good person who cares about equality, but because I'm a self-interested person who wants to learn and benefit from everyone I can. I want to live in the best, most technologically advanced society that I can. Therefore, I am a feminist.
Tuesday, June 28, 2011
What is P ?= NP and why should you care?
After my last blog post, readers remarked that they had no idea what "P ?= NP" was and that I had completely failed to explain what I was talking about. They are correct. This entry is intended to remedy that problem.
First, an analogy: suppose you have a jigsaw puzzle. Assembling the puzzle correctly is difficult. But if someone has completed the puzzle, one can generally tell at a glance that it has been done correctly. (This isn't strictly true if you are dealing with someone like me, who as a little kid would sometimes shove pieces in where they didn't belong. If one does this to sections that depict sky and clouds, it can be quite hard to tell that anything is wrong.) Yet it seems very difficult to tell, from the jumbled pieces alone, how to assemble the puzzle. Moreover, it isn't even clear whether one has all the pieces until one is nearly done, and it can be a very frustrating experience to be nearly done with a puzzle and then realize that pieces are missing. What if there were a way to tell, just from looking at the jumbled pieces in the box, whether one had all the pieces? Whether P equals NP is essentially asking if this sort of shortcut is possible. The general consensus among computer scientists is that P is not equal to NP, which means that puzzle-solvers are out of luck.
Let's break this down further. P is essentially the set of problems that can be solved quickly. For example, "Is a given integer even or odd?" is a problem that lies in P: given an integer, one can just look at its last digit and see whether that digit is even or odd. (I'm assuming that our integers are written in base 10.) Similarly, the problem "given integers a and b, does a divide b?" is in P; all one needs to do is divide b by a and see whether there is a remainder. But not everything is obviously in P. For example, the question "is a given integer composite?" is not obviously in P, since the naive check requires successively searching for divisors.
But there is a larger class of problems, NP: problems for which, if the answer is "yes," one can quickly convince someone else of that answer given the right supporting information. In this sense, "is a given integer composite?" is in NP: if I know a non-trivial divisor of the integer and want to convince you that the integer is composite, I can just tell you that divisor and you can check it yourself. To make this concrete, if I gave you the number 1517 and asked whether it were prime or composite, you would have to do a lot of unpleasant arithmetic. But if I told you that 1517 is 37 times 41, you'd be able to check this easily.
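A small Python sketch (my own, just for illustration) makes the contrast explicit: finding a divisor of 1517 takes a search, while verifying a claimed divisor takes a single division.

```python
def find_divisor(n):
    """Search for a non-trivial divisor of n by trial division (the slow part)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no divisor found: n is prime

def verify_divisor(n, d):
    """Check a claimed divisor of n: quick once someone hands you d."""
    return 1 < d < n and n % d == 0

print(find_divisor(1517))        # 37, found only after trying 2, 3, 4, ...
print(verify_divisor(1517, 37))  # True: one division convinces you
```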
Now, it turns out that whether a number is prime or composite can actually be checked quickly, using something called the AKS algorithm. This result was a very big deal, proven by Manindra Agrawal and two other mathematicians in 2002. I remember when this happened: when the result was announced, I was a high school student at PROMYS, a summer math program at Boston University, and people were distributing and poring over copies of the preprint. The algorithm itself is straightforward, but proving that it really does what is claimed required deeper mathematical results from the area known as sieve theory.
Let's try to make these notions of P and NP more mathematically rigorous. To do that, we need a rigorous notion of what it means to calculate quickly. We want the time it takes to run our algorithms not to grow very fast as the input gets longer. When mathematicians want something that doesn't grow too quickly, they often look at polynomials (that is, functions like n^2, or n^3, or n^4 + 10n + 2, as opposed to, say, exponentials like 2^n or 3^n). So we define an algorithm to be quick if the amount of time it takes to run is bounded by a polynomial in the length of the input. In our earlier example of deciding whether an integer is prime or composite, the length of the input is the number of digits in the integer. Agrawal and his coauthors constructed an algorithm which, when given an integer, always tells you whether that integer is prime or composite, and they showed that there is a constant K such that the algorithm terminates in at most Kn^12 steps, where n is the number of digits of the number being tested.
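To see why the number of digits is the right measure, consider trial division again. The sketch below (mine, not from the post) counts the candidate divisors examined: the count grows roughly like the square root of the integer, which is exponential in the number of digits, so trial division does not qualify as quick in this sense.

```python
def trial_division_steps(n):
    """Count the candidate divisors examined by naive trial division."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return steps  # found a divisor early; n is composite
        d += 1
    return steps

# For primes the count grows like sqrt(n) -- about 10^(digits/2),
# i.e. exponentially in the number of digits.
for p in [101, 10007, 1000003, 100000007]:
    print(len(str(p)), "digits:", trial_division_steps(p), "steps")
```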
Whether P = NP is one of the great unsolved problems of modern mathematics. P ?= NP is one of the seven Clay Millennium Problems, each of which carries a million-dollar prize. Moreover, there are many practical applications: many practical problems appear in some form to be in NP but have no known quick algorithm, including problems in protein folding, circuit design, and other areas.
Readers have likely also benefited more directly from practical issues related to this problem without even realizing it. I've discussed the Diffie-Hellman algorithm on this blog before. It is one example of many cryptographic systems used by modern computer networks, most often without users even realizing it. These all rest on certain problems being easy to solve if one has extra information, but very difficult otherwise. Thus, if P = NP, then all these systems become insecure. This would be bad. Unfortunately, the converse does not follow: there is a common misconception that if P is not equal to NP, then modern encryption is safe. Roughly speaking, the claim that such encryption works implies that P is not equal to NP, but the reverse implication doesn't hold. It is conceivable (although unlikely) that P and NP are distinct and encryption still collapses. But figuring out whether P = NP would be a major step toward understanding whether encryption is really safe.
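For readers who haven't seen the earlier post, here is a toy Python sketch of the Diffie-Hellman idea with deliberately tiny, insecure parameters (my own illustration): each step is quick, but recovering the secret exponents from what an eavesdropper sees is believed to be hard.

```python
# Toy Diffie-Hellman exchange with tiny, insecure parameters (illustration only).
p, g = 23, 5            # public prime modulus and generator

a = 6                   # Alice's secret exponent
b = 15                  # Bob's secret exponent

A = pow(g, a, p)        # Alice publishes g^a mod p
B = pow(g, b, p)        # Bob publishes g^b mod p

# Each side combines the other's public value with its own secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both sides arrive at the same shared secret

# An eavesdropper sees only p, g, A, B; recovering a or b from them
# (the discrete logarithm problem) is believed to be hard for large p.
print(A, B, shared_alice)
```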
Those worried about the encryption protecting their bank accounts can rest easy: most experts in the field think that P and NP are distinct, which means your encryption is most likely secure. On the other hand, it also means that a lot of practical problems are genuinely tough. One can't have it both ways: either one gets lots of helpful fast algorithms, or one gets good encryption. Right now, it looks like one gets good encryption.
So, how close are we to actually answering whether P = NP? There are a few methods that look somewhat promising, but many of the best current results essentially show that certain techniques cannot answer the question. So right now, a resolution looks pretty far off.
"P ?= NP" was and that I had completely failed to explain what I was talking about. They are correct. This entry is to remedy that problem.
First an analogy: Suppose you have a jigsaw puzzle. Assembling the puzzle correctly is difficult. But, if someone has the puzzle completed, one can generally tell at a glance that the puzzle is correct. (This isn't strictly true if you are dealing with someone like me who as a little kid would sometimes shove the pieces to fit in where they didn't belong. If one does this to sections that depict sky and clouds, it can be quite hard to tell that it is wrong.) But it seems that it is very difficult to tell how to assemble a puzzle. Moreover, it isn't even clear if one has all the pieces of the puzzle until one is nearly done with the puzzle. It can be a very frustrating experience to be nearly done with a puzzle and then realize that pieces are missing. What if there were a way to tell just from looking at the jumbled pieces in the box if one had the all pieces? Whether P is equal to NP is essentially asking if this is possible. The general consensus among computer scientists is that P is not equal to NP which means that puzzle-solvers are out of luck.
Let's break this down further. P is the set of types of puzzles that can be essentially solved quickly. So for example, "Is a given integer even or odd?" is a problem that lies in P. Given an integer, one can just look at the leading digit and see if it is even or odd. (I'm assuming that our integers are written in base 10). Similarly, the problem "given integers a and b, does a divide b?" is in P,;all one needs to do is divide a into b and see if there is a remainder. But not everything is necessarily in P. For example, the question, "is a given integer composite?" is not obviously in P, since checking requires successive search for divisors.
But, there is a larger class of problems, NP, which are problems which, if they have an answer of "yes," one can quickly convince someone else of the answer if one has the right information. In this context, "is an integer composite?" is in NP, since if I know a non-trivial divisor of the integer, and I want to convince you that it is composite. I can just tell you that number and you can check yourself. To make this more concrete, if I gave you the number 1517 and asked you if it were prime or composite, you would have to do a lot of unpleasant arithmetic. But, if told you 1517 is 37 times 41, you'd be able to check this easily.
Now, it actually turns out that whether a number is prime composite can actually be checked quickly using something called the AKS algorithm . This result was a very big deal, proven by Manindra Agrawal and two other mathematicians in 2002 . I remember when this happened. When the result was announced, I was a high school student who was then at PROMYS, a summer math program at Boston University, and people were distributing and looking over copies of the preprint. The algorithm they used was straightforward, but proving that algorithm really did what was claimed required deeper mathematical results from the area known as sieve theory.
Let's try to make these notions of P and NP more mathematically rigorous. To do that, we need a rigorous notion of what it means to calculate quickly. We want the length of time it takes to run our algorithims to solve our problems to not grow very fast. When mathematicians want something that doesn't grow too quickly, they often look at polynomials (that is, functions like n^2, or n^3, or n^4 +10n +2, as opposed to say exponentials that look like 2^n or 3^n). So, we will define an algorithm to be quick if the amount of time it takes to run is bounded by a polynomial of the length of whatever we inputted. So ,for example, in our earlier case of looking at whether an integer is prime or composite, the length of the input would be the number of digits in the integer. Thus, Agrawal constructed an algorithm that ran would when given an integer always tell you if the integer was prime or composite. Agrawal showed that there is a constant K such that his algorithm terminates in at most at most Kn^12 steps, where n is the number of digits of of the number to be test.
Whether P = NP is one of the great unsolved problems of modern mathematics. P ?= NP is one the seven Clay Millenium Problems, each of which has a million dollar prize. Moreover, there are many practical applications. There are many practical problems which appear in some form to be in NP, but have no known quick algorithm to solve them. These include problems in protein folding, circuit design, and others areas.
Readers have likely also benefited more directly from practical issues related to this problem without even realizing it. I've discussed on this blog before the Diffie-Hellman algorithm. This is one example of many cryptographic systems that are used by modern computer networks, most often without users even realizing that they are being used. These all rest on certain problems being easy to solve if one has extra information, but very difficult otherwise. Thus, if P = NP, then all these algorithms will become insecure. This would be bad. Unfortunately, the converse does not follow: There is a common misconception that if P is not equal to NP, then modern encryption is actually safe. This doesn't follow. It turns out that claims that encryption works implies that P is not equal to NP, but the reverse doesn't follow. It is conceivable (although unlikely) that P and NP are distinct and encryption still collapses. But, figuring out whether P = NP would be a major step in understanding whether or not encryption is really safe.
Those worried about the encryption for their bank accounts can rest easy. Most experts in the field seem to think that P and NP are distinct. This means your encryption is secure. On the other hand, this also means that a lot of practical problems are genuinely tough. One can't have it both ways. Either one gets lots of helpful fast algorithms or one gets good encryption. Right now, it looks like one gets good encryption.
So, how close are we to actually answering if P = NP? This looks pretty far off right now. There are a few methods that look somewhat promising, but a lot of the best current results are results that essentially show that certain techniques cannot answer the question. So right now resolving the question looks pretty far off.
Sunday, June 26, 2011
Gasarch P = NP Poll
A decade ago, Bill Gasarch conducted an informal poll asking computer scientists and mathematicians various questions related to whether or not P is equal to NP. He asked when people thought the problem would be resolved, which way it would be resolved, and what techniques they thought would be used. One striking feature of Gasarch's data is that a surprisingly large fraction of serious computer scientists seem to think that P = NP. Moreover, while Gasarch notes that some of those individuals explicitly said they were answering that way for the sake of being contrary, a large fraction of respondents were simply unwilling to guess which way it would be resolved.
Now, Gasarch is conducting such a poll again. I am noting this here because he is accepting emails not just from computer scientists but from anyone, although he asks people to note their academic background. Also, please note that he wants replies by email; apparently some people have already missed this and have posted their answers as comments on his announcement post.
My own opinion (which I will submit via email shortly), is that P != NP, and I'm about 95% confident in this regard. I assign low probability to the weirder possible results involving undecidability in ZFC. I have no idea when the problem will be resolved, and I have no idea what techniques will be used to resolve it, although the techniques used by both Mulmuley and Ryan Williams look interesting. Obviously, I'm not a computer scientist, so my opinions on these matters should be taken with a large dose of salt.
Gasarch's poll also has an open-ended question allowing one to pontificate on related issues. Unfortunately, I really don't have any strong opinion on any related issues other than something like "derandomization is nice, yay?" The obvious related question of whether P = BPP seems tough. A lot of people are convinced that this is the case, and there's been a lot of success in the last few years with derandomizing algorithms, but my impression is that there's very little in the way of general techniques of how to do this systematically.
I'm also curious what readers of this blog think, so feel free to leave your speculations in the comments.
Monday, June 20, 2011
Planar Divisibility Graphs and The Bible
I've talked about graph theory here before in the context of Ramsey theory. However, there are many other interesting graph theory problems.
A graph is said to be planar if one can draw it in the plane with none of the edges crossing. So, for example, K3, the graph formed by three vertices each connected to the others, is planar, but the analogous graph K5, formed by five mutually connected vertices, is not.
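A quick illustration using the networkx library's planarity test (my own sketch; the original post contains no code):

```python
import networkx as nx

# K3 (a triangle) can be drawn with no crossing edges; K5 cannot.
for n in (3, 5):
    is_planar, _ = nx.check_planarity(nx.complete_graph(n))
    print(f"K{n} planar: {is_planar}")
```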
We can also change our notion of graph by allowing edges to have directions, representing them with arrows rather than plain lines. For example, if one wanted a graph to represent who knows whose name within a group of people, a directed graph would be quite natural.
Over the last few days I have been thinking about divisibility graphs of sets. These graphs arise when one takes some set of positive integers, assigns each to a vertex, and then draws an arrow from one vertex to another when the first integer divides the second. (So, for example, if our set were {1,2,4}, then 1 would have arrows going to 2 and 4, and 2 would have an arrow going to 4.) For convenience, I am ignoring the arrows that vertices would have going to themselves.
Now, assume one has a set of positive integers A and we know that the corresponding divisibility graph is planar. What can we say about how large the set can be? That is, if we let A(x) be the number of elements in A which are at most x, how fast can A(x) grow? It is not difficult to see that one can get A(x) to grow at least about as fast as x/log x: take A to be the set of prime numbers. The resulting graph is certainly planar since it has no edges at all, and the prime number theorem does the rest. With a small amount of tweaking, one can get growth of about 2x/log x, since one can also include all the numbers of the form 2p and still get a planar graph. I suspect that the actual best possible growth is on the order of x/log x, but I'm not sure. One possible approach to making a large planar divisibility graph is to use the greedy algorithm: throw 1 into the set and then go through the integers in order, adding the next integer whenever the graph remains planar. If one calls this set G, then the first number not in G is 18. At first it seems that G grows quickly, and G includes every prime number. But most large integers are in fact not in G, a consequence of the fact that most large integers have a lot of prime factors. For example, every multiple of 6 other than 6 and 12 is not in G.
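Here is a rough Python sketch of that greedy construction, under my reading of it (1 is included and every divisibility relation becomes an edge), using networkx's planarity test; it is not optimized, since planarity is re-checked from scratch at every step.

```python
import networkx as nx

def greedy_planar_divisibility_set(limit):
    """Greedily build a set of integers whose divisibility graph stays planar."""
    chosen = []
    graph = nx.Graph()              # planarity does not depend on edge direction
    for n in range(1, limit + 1):
        candidate = graph.copy()
        candidate.add_node(n)
        for m in chosen:
            if n % m == 0:          # m divides n, so add the edge m -- n
                candidate.add_edge(m, n)
        if nx.check_planarity(candidate)[0]:
            chosen.append(n)
            graph = candidate
    return chosen

G = greedy_planar_divisibility_set(40)
print(G)
print([n for n in range(1, 41) if n not in G])   # the integers the construction leaves out
```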
Now, you may be thinking, "Josh, this is an interesting math problem, but the title mentioned the Bible. What does that have to do with anything?" The truth is that the connection is tenuous. The problem about planar divisibility graphs occurred to me when I was tutoring a professor's young kid in graph theory, and we discussed divisibility graphs. The professor's family is Orthodox, and so another graph we talked about was one taking different Biblical figures and representing who had met whom. The major graph had three large components: one corresponding to the patriarchal time period (with Abraham, Isaac, and Jacob as the most connected points), one to the time around the Exodus (with Moses at the center), and one to the early monarchy, with David, Samuel, and Solomon as the main points. However, an issue came up. My young student wanted to add Eli, the high priest during much of Samuel's early life, to the graph. This raised a question to which neither he nor I knew the answer: did Eli ever encounter David? The text does not mention such an event, but the chronology seems tentatively to allow such a meeting. I'm also unaware of any midrashim claiming that they met. I'm therefore mentioning this here for two reasons: one, can any more knowledgeable readers point me to anything in the text itself which deals with this; and two, can any of my more midrashically inclined readers point me to any midrashim that address whether they met?
However, there are many other interesting graph theory problems.
A graph is said to be planar if one can draw it on a plane with none of the edges intersecting. So for example, K3, the graph formed by three vertices all of which connect to each other is planar, but the similar graph K5 formed by five vertices all of which connect is not planar.
We can also change our notion of graph by allowing edges to have directions, representing them with arrows rather than plain line segments. For example, if one wanted to use a graph to represent who knows whose name among a group of people, a directed graph would be quite natural.
Over the last few days I have been thinking about divisibility graphs of sets. These graphs arise when one takes some set of positive integers, assigns each to a vertex, and then draws an arrow from one integer to another whenever the first divides the second. (So, for example, if our set were {1,2,4}, then 1 would have arrows going to 2 and 4, and 2 would have an arrow going to 4.) For convenience, I am ignoring the arrows that vertices would have going to themselves.
Now, assume one has a set of positive integers A and we know that the corresponding divisibility graph is planar. What can we say about how large the set is? That is, if we let A(x) be the number of elements in A which are at most x, how fast can A(x) grow? It is not difficult to see that one can get A(x) to grow at least about as fast as x/log x: take A to be the set of prime numbers. The resulting graph is certainly planar, since it has no edges at all, and the prime number theorem does the rest. With a small amount of tweaking, one can get growth of about 2x/log x, since one can also include all the numbers of the form 2p and still have a planar graph. I suspect that the actual best possible growth is on the order of x/log x, but I'm not sure. One possible approach to making a large planar divisibility graph is to use the greedy algorithm: throw 1 into the set and then go through the integers in order, throwing in the next integer if the graph remains planar. If one calls this set G, then the first number not in G is 18. It seems at first that G grows quickly, and G includes every prime number. But most large integers are in fact not in G, a consequence of the fact that most large integers have a lot of prime factors. For example, every multiple of 6 other than 6 and 12 is not in G.
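For anyone who wants to experiment with this, here is a small Python sketch of the greedy construction. It uses the networkx library's planarity test on the underlying undirected graph (a directed graph is planar exactly when its underlying undirected graph is); the cutoff and the function name are my own choices, so treat this as an illustration of the procedure rather than an optimized implementation.

```python
import networkx as nx

def greedy_planar_divisibility(limit):
    """Greedily build a set G of integers whose divisibility graph stays planar."""
    graph = nx.Graph()   # underlying undirected graph; direction does not affect planarity
    members = []
    for n in range(1, limit + 1):
        trial = graph.copy()
        trial.add_node(n)
        for m in members:
            if n % m == 0:       # m divides n, so the divisibility graph has an edge m -> n
                trial.add_edge(m, n)
        planar, _ = nx.check_planarity(trial)
        if planar:               # keep n only if the graph is still planar
            graph = trial
            members.append(n)
    return members

print(greedy_planar_divisibility(40))
```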
Now, you may be thinking, "Josh, this is an interesting math problem, but the title mentioned the Bible. What does that have to do with anything?" The truth is that the connection is tenuous. The problem about planar divisibility graphs occurred to me when I was tutoring a professor's young kid in graph theory, and we discussed divisibility graphs. The professor's family is Orthodox, and so another graph we talked about was one built from Biblical figures, with edges representing who had met whom. The major graph had three large components: one corresponding to the patriarchal period (with Abraham, Isaac and Jacob as the most connected points), one to the time around the Exodus (with Moses at the center), and one to the early monarchy, with David, Samuel and Solomon as the main points. However, an issue came up. My young student wanted to add Eli, the high priest during Samuel's youth, to the graph. This raised a question to which neither he nor I knew the answer: did Eli ever encounter David? The text does not mention such a meeting, but the chronology seems tentatively to allow one. I'm also unaware of any midrashim claiming that they met. So I'm mentioning this here with two questions: can any more knowledgeable readers point me to anything in the text itself which deals with this, and can any of my more midrashically inclined readers point me to midrashim that address whether they met?
Thursday, June 16, 2011
Abusing Statistics
A friend writing under a pseudonym has a new blog about the use and abuse of statistics in the media. He's a good writer. Since he came to statistics (and the mathy end of everything) somewhat late compared to most math and science bloggers, I expect that his take will be quite interesting. He will likely also do a good job explaining some of the relevant ideas to less mathematical people. The latest entry is on how not to conduct surveys. The entry is worth reading.
Monday, June 6, 2011
Dungeons, Dragons and Halacha
In an earlier entry I discussed whether under halachah (Orthodox Jewish law) it would be acceptable to make a horcrux or become a lich if either were possible in real life. That entry was largely an excuse for bad wordplay related to the word "phylactery," which has a variety of meanings. A phylactery is, in the most general sense, an object which contains something of religious or ritual significance. In the most common context, the word is used in the plural, "phylacteries," as an English translation of tefillin, the small boxes worn by some Jews at morning prayers. Another use of the term is in Dungeons and Dragons, where it describes the object that a lich, a type of undead wizard, uses to store its soul.
However, I recently came across yet another meaning of this term in a Dungeons and Dragons context. There is a spell, described in the D&D book "Player's Guide to Faerun," called "Spell Phylactery," which allows one to store a spell on a scroll that "must be bound to your arm or forehead (usually rolled tightly or placed in a small box for this purpose)". This form seems more directly inspired by the phylacteries of the Jewish tradition. Unfortunately, even if D&D magic were real, it would not be halachically acceptable to make a three-way phylactery, since the Spell Phylactery spell can only be cast by a worshipper of the goddess Mystra, which would not be allowed under halacha. Too bad. I really wanted phylacteries that functioned both as a phylactery and as a spell phylactery.
Sunday, May 29, 2011
Harry Potter and the Methods of Rationality
I've mentioned Harry Potter and the Methods of Rationality before. It is a Harry Potter fanfiction written by Eliezer Yudkowsky. The central premise of the work is that Harry, instead of having abusive adoptive parents, has loving ones, and his adoptive father is a scientist. Young Harry grows up learning all about the scientific method, critical thinking, and cognitive biases. HPMR has its positives and negatives. Overall it is hilarious, but there are times when Harry is didactic, and Yudkowsky has clear difficulty making his characters sound like eleven year olds. But overall, it is worth reading. I am recommending the fiction now for two reasons. First, it has recently become the most reviewed fanfiction on fanfiction.net. Second, for the last Vericon masquerade a friend and I cosplayed as the versions of Harry and Hermione from HPMR. Pictures can be found at her blog. Note that the costumes were not made entirely by us: the badges were made by Ellen Dimiduk, who does excellent costuming work. Now, Yudkowsky has a policy that people who make cool artwork about the story get cameos in it. So the latest chapter of HPMR mentions two Hogwarts students, Katarina and Joshua, who helped make costumes for Hogwarts students. Of course everyone needs to go read it now, since I'm a character! So if you aren't reading it yet, go and read.
Friday, May 20, 2011
A brief note on the Rapture
Michael Hartell of the Sentinel and Enterprise interviewed me in my capacity as a spokesentity for the Boston Skeptics about the Rapture. Hartell's article focuses on Harold Camping's prediction that the Rapture will take place tomorrow. The article is worth reading, although there are a few things that didn't make it into the final piece that I think are worth mentioning. First, the entire "Rapture" doctrine as it exists in modern times is only a few centuries old and only became at all popular due to the preaching of John Darby in the early part of the 19th century. Second, this is a good example of the sort of serious damage that erroneous beliefs can create. The New York Times article on the same subject focuses on the Haddad family, where the parents believe the Rapture will occur tomorrow and the children do not. According to that article, the Haddads have stopped saving for college for their children because they believe that it will never happen. The children will suffer when the Rapture doesn't take place and they then can't afford good educations. These parents are not risking their children's lives in the way that parents who refuse to vaccinate are actively endangering theirs, but the basic problem is the same: hideously inaccurate beliefs about reality are hurting bystanders.
Tuesday, May 17, 2011
Multiplicative functions and almost division
A function f(n) with range in the integers is said to be multiplicative if f(ab)=f(a)f(b) whenever a and b are relatively prime. A function is said to be completely multiplicative if this applies for all a and b whether or not a and b are relatively prime. Some of my prior posts have discussed multiplicative functions (e.g. this post on Mersenne primes and perfect numbers). An important thing to note is that a multiplicative function is determined by its values at prime powers, while a completely multiplicative function is determined by its values at primes.
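As a concrete illustration of how a multiplicative function is pinned down by its values at prime powers, here is a small Python sketch; it uses sympy's factorint for factorization, and the helper names are my own.

```python
from sympy import factorint

def from_prime_powers(h, n):
    """Evaluate the multiplicative function whose value at a prime power p**e is h(p, e)."""
    value = 1
    for p, e in factorint(n).items():
        value *= h(p, e)
    return value

# tau counts divisors: tau(p**e) = e + 1.
# sigma sums divisors: sigma(p**e) = 1 + p + ... + p**e.
tau   = lambda n: from_prime_powers(lambda p, e: e + 1, n)
sigma = lambda n: from_prime_powers(lambda p, e: (p**(e + 1) - 1) // (p - 1), n)

print(tau(12), sigma(12))   # 12 = 2^2 * 3, so tau = 3*2 = 6 and sigma = 7*4 = 28
```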
Let f and g be two functions from the positive integers to the integers. Let K(f,g)(x) be the number of n less than x such that f(n) does not divide g(n). We say that f almost always divides g (for short, f almost divides g) if f(n) divides g(n) for almost all n, that is, if K(f,g)(x) = o(x) (or, to put it another way, if the set of exceptions has density zero). Obviously, almost divisibility is weaker than divisibility. Almost divisibility shares many of the properties of divisibility; for example, it is a transitive relation. One important difference is that, among positive functions, divisibility is an anti-symmetric relation (that is, if f divides g and g divides f, then f=g). This is not true for almost divisibility in general, but it is true if we restrict our attention to multiplicative functions.
Are there interesting examples of non-trivial almost divisibility where one doesn't have divisibility? Yes. Let σ(n) be the sum of the positive divisors of n, and let τ(n) be the number of positive divisors of n. Erdos showed that τ almost divides σ and showed through similar logic that τ almost divides the Euler φ function.
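Erdos's result is easy to probe numerically. The sketch below counts by brute force how often τ(n) fails to divide σ(n) up to a bound; the bound is arbitrary, and the divisor computation is plain trial division so that no libraries are needed.

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    ds = []
    d = 1
    while d * d <= n:
        if n % d == 0:
            ds.append(d)
            if d != n // d:
                ds.append(n // d)
        d += 1
    return ds

N = 100_000
exceptions = sum(1 for n in range(1, N + 1)
                 if sum(divisors(n)) % len(divisors(n)) != 0)
print(f"tau(n) fails to divide sigma(n) for {exceptions} of the first {N} integers "
      f"({exceptions / N:.2%})")
```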
Why am I thinking about these functions? I noticed that some multiplicative functions divide a positive completely multiplicative function everywhere. Thus, for example, φ divides the completely multiplicative f such that for any prime p, f(p)=p(p-1). However, not every multiplicative function has such a multiple: it isn't too hard to show that there is no such completely multiplicative function for τ or σ. But the situation changes drastically in the case of almost divisibility. In fact, for any function f from the natural numbers to the natural numbers, there is a completely multiplicative function g such that f almost divides g. We shall call such a g a "completely multiplicative almost multiple" (or CMAM). Moreover, we can construct such a function explicitly.
Proposition: Let f(n) be a function from N to N, and let g(n) be the completely multiplicative function defined on primes p by letting g(p) be the least common multiple of f(1), f(2), f(3), ..., f(2^p). Then g is a completely multiplicative almost multiple of f.
In order to prove this result, we need another notion, that of normal order. Often, to describe the behavior of a poorly behaved function, number theorists like to show that it is asymptotic to a well-behaved function, in the sense that the limit of their ratios is 1. Thus, for example, the prime number theorem says that the poorly behaved function π(n), which counts the number of primes that are at most n, is asymptotic to n/log n. But for many functions we care about, this sort of statement is much too strong. Let ω(n) be the number of distinct prime divisors of n, and let Ω(n) be the total number of prime factors counted with multiplicity. These functions hit 1 infinitely often and so are clearly not asymptotic to any nice functions. However, ω and Ω possess a normal order, in the sense that for any ε > 0, excepting a set of density 0, we have (1-ε) log log n < ω(n) < (1+ε) log log n, and the same holds for Ω(n). (More refined estimates exist, but we don't need them for these remarks.) An important heuristic that stems from this result is that most numbers look very close to squarefree: ω(n) and Ω(n) are usually about the same size, so very few prime factors occur more than once.
Now, to prove our result, observe that excepting a set of density zero we have Ω(n) < 2 log log n. So, ignoring a set of density zero, the largest prime factor of n, call it p, is at least n^(1/Ω(n)) ≥ n^(1/(2 log log n)), which for sufficiently large n exceeds log_2 n. Hence n < 2^p, so f(n) appears as one of the terms in the least common multiple defining g(p). Finally, since p divides n and g is completely multiplicative, g(p) divides g(n), and therefore f(n) divides g(n) for all n outside a set of density zero.
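For anyone who wants the divisibility chain written out in symbols, here is the same argument restated (nothing beyond the paragraph above):

```latex
% For n outside a set of density zero, let p be the largest prime factor of n. Then
\[
  p \;\ge\; n^{1/\Omega(n)} \;\ge\; n^{1/(2\log\log n)} \;>\; \log_2 n
  \qquad\text{for all sufficiently large } n,
\]
\[
  \text{so } n < 2^{p}
  \;\Longrightarrow\;
  f(n) \mid \operatorname{lcm}\bigl(f(1), f(2), \dots, f(2^{p})\bigr) = g(p)
  \mid g(p)\,g(n/p) = g(n),
\]
% where the last divisibility uses that p divides n and g is completely multiplicative.
```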
Obviously, for any f, the CMAM constructed this way grows very fast. However, a CMAM does not need to grow that fast, and it might be that in specific contexts one can substantially reduce the growth rate. So this leaves me with three questions, one broad and two specific:
1) How much can the growth rate of a CMAM for a given function f be reduced? There are obvious improvements to the construction above, but they still produce a function that grows very quickly.
2) Let g(n) be a CMAM for σ. Does it follow that g(n) is not O(n^k) for any k? I strongly suspect this is true but don't see how to prove it.
3) One can, with more careful work, show that there is a CMAM for τ that is O(x). What is the best possible growth rate for such a function?
While I've seen individual papers that discuss specific functions almost dividing each other, I'm not aware of any substantial work on the general structure of almost divisibility. I don't know whether this is due to my ignorance of the literature, to almost divisibility being too weak a property to be interesting, or to some other cause.
Monday, May 2, 2011
Osama bin Laden's death, Barack Obama, and the Judeo-Christian heritage of the United States: A response to my brother
In the last 24 hours, everyone has been talking about Osama bin Laden's death. Last night, after the President's speech, people around the country celebrated. Here in Boston, this apparently became another excuse for college students to get drunk: well into the night I heard screams of "USA! USA!" and, at one point, an attempted rendition of "America, Fuck Yeah!" from Team America.
There were more measured responses including a piece in the Huffington Post by my brother Nathaniel. Unfortunately, many of the more measured responses, including this one, are misguided.
Nathaniel listed five aspects of the President's remarks which in his view stood out:
1) Obama emphasized that America and al Qaeda are "at war." This is an important shift from the president who wanted to try Khalid Shaikh Mohammed in a civilian court in New York, and who vowed to shut down Gitmo during the 2008 campaign. The tone here tonight was clear: The terrorists who plot against the United States are (illegal) combatants who deserve the full front of our military fury, not our legal rights.
2) The president gave a nod to his predecessor, in an acknowledgment that America has never been, nor ever will be, at war with Islam. This took class and grace, and Obama merits credit for it.
3) This speech was traditional. From the inclusion of "under God" in his closing remarks, to the references to retributive "justice," Obama channeled the Judeo-Christian values that still define our nation -- again, a welcome shift from the president who went out of his way to give a nod to "non-believers" in his inaugural address.
4) Somehow, Obama managed to take this moment to combat feelings of American declinism. The memo: We can do anything we set out to do. Compare this simple yet effective message to his recent flop of a State of the Union speech, in which the example of our greatness was the fact that "America is the nation that built the transcontinental railroad." This moment disproves those who sing the song of the "fall of American Empire," resolve, and spirit.
5) Most importantly, Obama tonight reaffirmed America's role as a force for good in the world, a force that extends beyond our borders. After U.S. troops took a backseat in NATO operations against Muammar Gaddafi, many (including me) worried that our will to "oppose any foe" in the defense of liberty played second fiddle to the whims of the UN, EU, and the Arab League. Thankfully and surprisingly, Obama reaffirmed our commitment to be a "shining beacon on a hill" to light the world.
I don't have any significant problems with the second or fifth points, but the other three are problematic.
In his first point, Nathaniel portrays as a positive something that isn't. He also conflates a variety of different issues. Targeting high-ranking terrorists and killing them is distinct from whether people such as Khalid Shaikh Mohammed should get civilian trials once they are in our custody. We can direct our military against targets at the same time that we use civilian trials for those who are captured. The first World Trade Center bombers and the Oklahoma City bombers were both tried successfully in civilian courts. Being at war does not mean we need to ignore due process.
Nathaniel's third point is deeply wrong. The Judeo-Christian heritage of this country is greatly exaggerated. The US Constitution bears almost no signs of Judeo-Christian values. There are only three signs of such religious influence in the Constitution, and all are comparatively minor. First, a bill presented to the President becomes law after ten days, Sundays excepted, if he does not return it. Second, the treason clause requires either two witnesses to an overt act or a confession in open court to convict; this requirement echoes the Old Testament rule that conviction for severe crimes requires the testimony of two witnesses. Third, the Constitution is dated "in the year of our Lord," a conventional phrase at the time.
Moreover, Nathaniel's statement portrays the Judeo-Christian heritage in the worst light possible. I'm proud of my Jewish heritage. But there's something deeply wrong when that heritage's primary lesson is an endorsement of retributive justice. It is noteworthy that the most substantial impact of the Bible on the text of the Constitution is to make conviction and punishment more difficult, not to endorse retribution. Indeed, it is a common theme in that heritage that we understand that even our enemies are people who can suffer. At the Passover Seder, even as the deliverance of the Israelites is celebrated, we remove a drop of wine from the cup for each of the Ten Plagues, remembering the Egyptian suffering.
Most troubling of all is the notion that mentioning God constitutes a "welcome shift from the president who went out of his way to give a nod to 'non-believers' in his inaugural address." Approximately 10% of the people in the United States self-identify as having no religion, and about 2% of the U.S. population identifies as either atheist or agnostic. Non-believers in the US have ranged from Carl Sagan to George Clooney, from Neil deGrasse Tyson to Bill Gates. Non-believers are an important part of the United States, intertwined with everything that makes America a great nation. All citizens deserve the same respect, whether they differ by skin color, politics, or religious beliefs.
Nathaniel's fourth point is also misguided. The building of the transcontinental railroad was a triumph of the American spirit. And yes, America really is in decline when our go-to example of "we can do anything" is killing our enemies. We are the only nation that has ever sent people to the moon, yet no one has walked on the moon in nearly forty years. The shuttle will soon no longer be operational, and the US will need to rely on Russia for space flight. We are in a decline. No speech can hide that, and pretending otherwise is not a good thing. We must fight that decline, but we cannot fight it if we do not acknowledge the threat.
Nathaniel ended his piece by saying that he was proud of the troops and proud of the President. I can understand being proud of the troops. They risked their lives, and we should be thankful to those soldiers who put their lives on the line to protect what we hold dear. But this is not a good reason to be proud of the President: nothing he did substantially affected this result. It is possible that actual policy changes by Obama somehow led to these events by making it easier to track down Osama, but I've seen no indication of that. Let's not give him credit he isn't due. It is likely that in the 2012 election I will vote to reelect Obama, but that has almost nothing to do with these events, and it shouldn't. Instead of responding to these recent events, we should all vote for whichever candidate we think will be the most competent President with the best policies.
Saturday, April 2, 2011
Political Affiliation and Scientific Knowledge Levels
Previously on this blog, I've talked about the differing intelligence and knowledge levels of different political groups. I've also decried the extreme anti-science and anti-intellectual views articulated by conservative spokespersons in the United States. Thus, I was interested in some recent work which suggests that by some metrics the political right is more pro-science than the left. Audacious Epigone used GSS data to show that, on average, Republicans are more pro-science and more scientifically literate than Democrats. Epigone made no effort to control for variables such as income, education and race.
Epigone's statement prompted Razib Khan to do a similar, more detailed analysis focusing on science knowledge and attitudes. Khan organized the data by political self-identification along the conservative-liberal continuum rather than by party affiliation. His analysis suggests that conservatives and liberals are almost indistinguishable in overall knowledge level. However, when one removes the questions that touch on specific pet issues of the modern right wing (i.e. those related to evolution and the age of the Earth), conservatives arguably pull slightly ahead in scientific knowledge. At the same time, the data show that moderates are less scientifically literate and less science-friendly than both conservatives and liberals. On none of the 19 variables that Razib examined do political moderates come out on top. That is, for each question, sometimes conservatives perform best and sometimes liberals do, but never moderates. This is consistent with other results showing that, in general, moderates are less intelligent and less educated than other groups; for example, moderates have lower vocabulary scores than the general population. Razib performed additional analysis to try to control for other variables, and his piece is worth reading.
This data suggests that there is a significant and underappreciated disconnect between right-wing leaders and self-identifying conservatives. If individuals on the right aren't statistically distinguishable from the left when it comes to science issues, why do so many conservative politicians go out of their way to make anti-science remarks? There are a variety of possible explanations, but none of them are satisfactory.
First, many of these anti-science comments have been directed towards biology and matters related to biology (e.g. John McCain's remarks about bear DNA and Sarah Palin's remark about fruit fly research). It is possible that the religious right's negative attitude towards evolution is carrying over to biology as a whole. However, this doesn't explain the remarks about other scientific areas (such as Bobby Jindal's remark about volcano monitoring). Moreover, although the human evolution question is by far the one with the most extreme difference between liberals and conservatives (the percentages accepting human evolution according to the GSS are 69% for liberals, 52% for moderates and 39% for conservatives), the other questions suggest that conservative attitudes about evolution have not spread to other areas of biology. For example, when asked whether the statement "Antibiotics kill viruses as well as bacteria" is true or false, 60% of liberals and 63% of conservatives answered correctly. (The statement is false.)
Second, right-wing leaders may understand that most conservatives are not anti-science but think that the more active conservative base is heavily anti-science. Without more data, it is hard to test whether the active conservative base is substantially more anti-science than rank-and-file conservatives. Even if it is, there would seem to be more effective ways of energizing the base than anti-science rhetoric.
Third, right-wing politicians may have erroneously bought into the false stereotypes about their own constituents. Given the prevalence of such stereotypes, this seems most likely. This hypothesis is also difficult to test, because politicians aren't going to admit that they've been pandering to rubes. Unfortunately, a belief among right-wing leaders that conservatives are anti-science could easily act as a self-fulfilling prophecy if it causes pro-science conservatives to stop being conservative, or causes some conservatives to become more anti-science to fit their tribal allegiance. However, this possibility does have a bright side: if this explanation is correct, then right-wing politicians are likely to be more pro-science in practice than they appear to be in public. Moreover, conservative politicians may act more pro-science if they can be convinced that their constituents really aren't as anti-science as the politicians believe them to be.
Labels: Bobby Jindal, McCain, politics, Sarah Palin, science
Wednesday, March 9, 2011
Illinois and the Death Penalty
Governor Pat Quinn of Illinois just signed a law which abolishes the death penalty in that state. This legislation is to some extent symbolic, in that Illinois has had a moratorium on the death penalty for almost a decade. With this legislation, Illinois joins New Mexico and New Jersey as states that have recently abolished the death penalty.
I have discussed here before my attitude towards the death penalty, especially in regard to specific cases such as the ongoing case of Hank Skinner. I'm not intrinsically against the death penalty: societies have the right, in general, to execute those who have violated the social contract in particularly heinous fashion if doing so will assist the public good. However, the death penalty as practiced in the United States is capricious and disorderly. Prosecutors push for death sentences when it is politically convenient, and there are huge racial disparities in who is executed.
It is also clear that innocent people have been wrongly convicted, and that in at least some cases, such as that of Cameron Todd Willingham, innocent people have been executed. Evidentiary standards that allow junk science and superficially persuasive eyewitness testimony are leaving blood on all our hands. For a long time, I have found striking a certain section in the Biblical book of my namesake, Joshua: the Israelites as a whole are called thieves when one man steals at Jericho. How much worse are we as a society when, as a democracy, we repeatedly elect and reelect officials who kill innocent people in our name?
In the particular case of Illinois, I have an additional personal reason to be interested in this legislation. My uncle Seymour Simon was a justice on the Illinois Supreme Court. He was a staunch opponent of the death penalty and argued forcefully and unceasingly that the death penalty as implemented in Illinois was unconscionable and unconstitutionally capricious. He died in 2006, and so did not live to see this legislation. I suspect I know how he would have responded if he had: Illinois is down. Only thirty-five more states to go.
Thursday, February 24, 2011
Space Shuttle Discovery launch
I'm currently in Florida with friends watching the launch of Discovery. This will be the second-to-last shuttle launch (there's a small chance it will be the third-to-last if funding comes through for one additional flight). Tyrol5 and I wrote an article for Wikinews about the launch, which includes a picture of the launch that I took. There are additional photos on my Facebook page.
Monday, February 14, 2011
On rotations of spheres
Consider a sphere living in some number of dimensions. The sphere we are used to is the 2-sphere, denoted S2. Mathematicians call it this because, even though it lives in 3 dimensions, it is itself a 2-dimensional object (in the sense that small sections of it look like the 2-dimensional plane). One can similarly talk about the n-sphere, which lives in n+1 dimensions. Thus, for example, the 1-sphere, S1, can be thought of as all points on a plane that are at distance one from the origin.
When one has a geometric object one of the most obvious things to do is to ask what rigid movements of the object will take it to itself. Thus, for example, for a sphere, a rotation about some axis through the sphere's center rigidly moves the sphere to itself.
Rotations seem simple, but they can be surprisingly tricky. For the 1-sphere, if one does a rotation and then another rotation, one is left with a rotation. This is an important and helpful property: it says essentially that the rotations form what mathematicians call a subgroup of the group of all rigid motions. It turns out that this is still true for the 2-sphere, S2, even when one uses different axes for the two rotations. Now, you might find it surprising that for S3, the sphere of three dimensions living in four dimensions, this breaks down: the composition of rotations is not necessarily a rotation. In some sense the non-obvious fact is not why this breaks down in higher dimensions, but why it still holds true in two dimensions.
The above is well known to mathematicians and to many people who have played around a bit with geometric objects. But I recently learned a related fact that I found startling. Call a rotation "periodic" if repeating it eventually brings every point back to where it started. For example, if one repeats a 90 degree rotation (π/2 for those using radians) four times, every point returns to where it started. Now, it turns out that for rotations of S2, even though the composition of rotations is a rotation, the composition of periodic rotations is not necessarily periodic. Once one knows this is true, it is easy to construct examples: consider two rotations about perpendicular axes, each by 30 degrees (π/6 in radians). It isn't difficult to show that their composition, although still a rotation, is not periodic. This is a good example of how even basic geometry can surprise us when we think we understand it.
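Here is a quick numerical look at the 30-degree example, sketched in Python with numpy (the helper names are mine). The composite rotation's angle θ satisfies trace(R) = 1 + 2 cos θ; the script prints θ as a fraction of a full turn, and the rotation is periodic exactly when that fraction is rational.

```python
import numpy as np

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

theta = np.pi / 6                    # 30 degrees about each of two perpendicular axes
R = rot_z(theta) @ rot_x(theta)      # the composition is still a rotation of the 2-sphere

angle = np.arccos((np.trace(R) - 1) / 2)   # rotation angle of the composite
print("composite rotation angle as a fraction of a full turn:", angle / (2 * np.pi))
```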
Wednesday, February 2, 2011
Obama's State of the Union Address: A Response to Nathaniel
President Obama delivered his State of the Union address last week. My younger brother has a piece in the Yale Daily News which extensively criticizes the speech. Nathaniel complains that the speech started with an inspirational message but "soon fell into a quagmire of policy." He criticizes the speech for focusing too much on policy minutiae and argues that, while referencing Kennedy, Obama fails to understand that inspiration requires large goals, not policy details. Overall, Nathaniel is correct that this was a disappointing speech. However, it was not disappointing because it went into detail: it was disappointing because the details were unimpressive.
While Nathaniel is correct that the speech was not very inspirational, he misses a broader point: We need policy expertise, and a President who is willing to present a State of the Union that discusses real policy issues is a good thing. After 8 years of George W Bush's flag-wrapping jingoism, after 8 years of Bill Clinton's contentless addresses, the American people should be happy that a President is willing to discuss serious policy issues. And if the American people don't like that, that's a problem with the American people, not a problem with their President.
Nathaniel has two other criticisms of the speech: that Obama was unwilling to take a stand on anything remotely controversial, and that Obama called for large-scale spending that the recent elections show is not desired by the American people. These criticisms seem contradictory. One cannot in one paragraph complain that Obama was a coward and then in the next paragraph complain that Obama is too willing to engage in spending that many people don't want.
Some parts of Nathaniel's column do have merit. Nathaniel correctly identifies that Obama's comments about school reform were close to toothless. And Nathaniel correctly points out that Obama did not emphasize the degree to which the proposed initiatives will place heavy burdens on the American people. It also seems disingenuous for Obama to claim to be willing to fund the "Apollo Projects of our time" while calling for a freeze in domestic spending.
While I disagree with much of Nathaniel's criticism, I am far from happy with Obama's speech. It certainly had its good points, but it had many failings. Obama failed badly in his discussion of energy policy. He gets points for mentioning nuclear power, something he has in the past downplayed. However, when discussing biofuels, he made no mention of the fact that the most prominent attempt at biofuel, corn based ethanol, is inefficient, environmentally damaging, and raises food prices which hurts the poor here as well as people in the developing world.
Obama's emphasis on electric cars was similarly unpersuasive. The President did not discuss the fact that electric cars' power has to come from the general grid. While electric cars do, overall, pollute less and use less energy, this stems from economies of scale more than anything else, and the impact is not large. And while Obama did discuss the energy of the future, he proposed no funding for research into genuinely new energy sources, such as fusion power. (In the particular case of fusion, the US is putting resources into ITER, the international tokamak reactor, but is putting no money into other approaches to fusion, such as stellarators.)
Obama's discussion of school systems used metrics which are less than ideal. In particular, the fraction of a population which is going to college is an awful metric for success of students. Students who go to college are often unprepared and often come out not much better prepared than they went in. Moreover, many jobs, even high-tech jobs, don't require college degrees. We have serious problems with people going to college on loans they are unable to repay.
Obama was correct to talk about policy proposals in his address. The problem is not the discussion but the content of those policies.
While Nathaniel is correct that the speech was not very inspirational, he misses a broader point: We need policy expertise, and a President who is willing to present a State of the Union that discusses real policy issues is a good thing. After 8 years of George W Bush's flag-wrapping jingoism, after 8 years of Bill Clinton's contentless addresses, the American people should be happy that a President is willing to discuss serious policy issues. And if the American people don't like that, that's a problem with the American people, not a problem with their President.
Nathaniel has two other criticisms of the speech: that Obama was unwilling to take a stand on anything remotely controversial, and that Obama called for largescale spending that the recent elections show is not desired by the American people. These criticisms seem contradictory. One cannot in one paragraph complain that Obama was a coward and then in the next paragraph complain that Obama is too willing to engage in spending that many people don't want.
Some parts of Nathaniel's column do have merit. Nathaniel correctly identifies that Obama's comments about school reform were close to toothless. And Nathaniel correctly points out that Obama did not emphasize the degree to which the proposed initiative will place heavy burdens on the American people. And it seems disingenuous for Obama to claim to be willing to fund the " Apollo Projects of our time" while calling for a freeze in domestic spending.
While I disagree with much of Nathaniel's criticism, I am far from happy with Obama's speech. It certainly had its good points, but it had many failings. Obama failed badly in his discussion of energy policy. He gets points for mentioning nuclear power, something he has in the past downplayed. However, when discussing biofuels, he made no mention of the fact that the most prominent attempt at biofuel, corn based ethanol, is inefficient, environmentally damaging, and raises food prices which hurts the poor here as well as people in the developing world.
Similarly, Obama's emphasis on electric cars was similarly unpersuasive. The President did not discuss that electric cars’ power has to come from the general grid. While electric cars do overall pollute less and use less energy, this stems from economies of scale more than anything else. And the impact is not high. While Obama did discuss the energy of the future, he proposed no funding for research into genuinely new energy sources, such as fusion power. (In the particular case of fusion power, the US is putting resources into ITER, the international tokamak reactor, but the US is putting no money into other forms of fusion such as stellarators.)
Obama's discussion of school systems used metrics which are less than ideal. In particular, the fraction of a population which is going to college is an awful metric for success of students. Students who go to college are often unprepared and often come out not much better prepared than they went in. Moreover, many jobs, even high-tech jobs, don't require college degrees. We have serious problems with people going to college on loans they are unable to repay.
Obama was correct to talk about policy proposals in his address. The problem is not the discussion but the content of those policies.
Thursday, January 6, 2011
Review of Jason Rosenhouse's "The Monty Hall Problem"
I've had the recent pleasure of reading Jason Rosenhouse's "The Monty Hall Problem." Rosenhouse's book is a comprehensive investigation into the eponymous Monty Hall problem, variations of the problem, and the larger implications of the problem.
The original Monty Hall problem is named after a game played on the old television game show "Let's Make a Deal" with host Monty Hall. Rosenhouse describes the problem in essentially these terms: a car is hidden behind one of three doors, with goats behind the other two; you pick a door; Monty, who knows where the car is, then opens one of the other doors to reveal a goat and offers you the chance to switch your choice to the remaining closed door.
(Presumably you are attempting to maximize your chance of winning the car.) Most people conclude that there is no benefit to switching. The usual logic against switching is that after one door has been eliminated there are two doors remaining, so each should now have a 1/2 chance of hiding the car.
This logic is incorrect: switching wins the car 2/3 of the time. Many people find this claim extremely counterintuitive. To see quickly why it is correct, note that if one always switches, then one ends up at the door with the car exactly when one's original door was not the car door, which happens 2/3 of the time.
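For the skeptical, the claim is also easy to check by simulation. Here is a minimal Python sketch; the trial count is an arbitrary choice.

```python
import random

def play(switch, trials=100_000):
    """Estimate the win probability of always switching or always staying."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a door that hides a goat and is not the contestant's pick
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print("always stay:  ", play(switch=False))   # close to 1/3
print("always switch:", play(switch=True))    # close to 2/3
```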
Many people have great difficulty accepting the correct solution to the Monty Hall problem. This includes not just laypeople but also professional mathematicians, most famously Paul Erdős, who initially did not accept the answer. The problem and its variants not only raise interesting questions about probability but also give insight into how humans think about probability.
Rosenhouse's book is very well done. He looks not just at the math but also at the history of the problem and its philosophical and psychological implications. For example, he discusses studies which show that, cross-culturally, the vast majority of people given the problem will not switch. I was unaware until I read this book how much cross-disciplinary work there had been surrounding the Monty Hall problem. Not all of this work has been that impressive, and Rosenhouse correctly points out where much of the philosophical argumentation over the problem simply breaks down. Along the way, Rosenhouse explains such important concepts as Bayes' Theorem (where he uses the simple discrete case) and the different approaches to what probabilities mean (classical, frequentist, and Bayesian), along with their philosophical implications. The book could easily be used as supplementary reading for an undergraduate course in probability or as reading for an interested high school student.
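To give a flavor of the Bayesian calculation (my own quick sketch, not a passage from the book): suppose you pick door 1 and Monty then opens door 3. Monty opens door 3 with probability 1/2 when the car is behind door 1, with probability 1 when it is behind door 2, and with probability 0 when it is behind door 3. Bayes' Theorem then gives P(car behind door 2 | Monty opens door 3) = (1 × 1/3) / (1/2 × 1/3 + 1 × 1/3 + 0 × 1/3) = 2/3, which matches the counting argument above.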
By far the most interesting parts of the book were the chapters focusing on the psychological aspects of the problem. Systematic investigation of the common failure to correctly analyze the Monty Hall problem has led to much insight about how humans reason about probability. This work strongly suggests that humans use a variety of heuristics which generally work well in the circumstances humans usually run into but break down in extreme cases. In a short blog post I can't do justice to the clever, sophisticated experimental set-ups used to test the nature and extent of these heuristics, so I'll simply recommend that people read the book.
For my own part, I'd like to use this as an opportunity to propose two continuous versions of the Monty Hall problem that, to my knowledge, have not been previously discussed. Consider a circle of circumference 1. A point on the circle is randomly picked as the target point (and not revealed to you). You then pick a random interval of length 1/3 on the circle. Monty knows where the target point is. If your interval contains the target point, Monty picks uniformly at random an interval of length 1/3 that doesn't overlap your interval and reveals that interval as not containing the target point. If your interval does not contain the target point, Monty instead picks uniformly an interval of length 1/3 that doesn't include the target point and doesn't overlap your interval. At the end of this process, with probability 1 there are three possible intervals that might contain the target point: your original interval and the two intervals on either side of Monty's revealed interval. You are given the option to switch to one of these new intervals. Should you switch, and if so, to which interval?
I'm pretty sure that the answer in this modified form is also to switch, in this case switching to the larger of the two new intervals.
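For anyone who wants to poke at this numerically, here is a rough Monte Carlo sketch in Python (again my own, with the placement rules following my reading of the setup above) that estimates how often the target lands in the original interval versus the larger and smaller leftover intervals:

```python
import random

def contains(start, length, point):
    # True if point lies in the arc [start, start + length) on a circle of circumference 1
    return (point - start) % 1.0 < length

def trial():
    target = random.random()   # hidden target point
    mine = random.random()     # start of my interval of length 1/3
    # Monty's 1/3 interval must not overlap mine, so its start lies in
    # [mine + 1/3, mine + 2/3); reject placements that would cover the target
    while True:
        monty = (mine + 1/3 + random.random() / 3) % 1.0
        if not contains(monty, 1/3, target):
            break
    # The two leftover arcs sit between my interval and Monty's, one on each side
    gap1 = ((mine + 1/3) % 1.0, (monty - mine - 1/3) % 1.0)   # (start, length)
    gap2 = ((monty + 1/3) % 1.0, (mine - monty - 1/3) % 1.0)
    smaller, larger = sorted([gap1, gap2], key=lambda g: g[1])
    return (contains(mine, 1/3, target),
            contains(larger[0], larger[1], target),
            contains(smaller[0], smaller[1], target))

n = 200_000
results = [trial() for _ in range(n)]
for name, i in [("stay with original", 0),
                ("switch to larger gap", 1),
                ("switch to smaller gap", 2)]:
    print(name, sum(r[i] for r in results) / n)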
However, the situation becomes trickier if we modify it a bit. Consider the following situation, identical to the above, except that instead of cutting out a single interval of length 1/3, Monty picks k intervals, each of length 1/(3k) (so the initial case above is k = 1). Monty places these intervals one at a time, each placement chosen uniformly among the positions still valid, and reveals the locations of all of his intervals at the end. The remaining choices are your original interval or any of the smaller intervals created in between Monty's choices. You get an option to stay or to switch to one of these intervals. It seems clear that even for k = 2 you should sometimes switch and sometimes stay, depending on the locations of Monty's intervals. However, it isn't clear to me when to stay and when to switch. Thoughts are welcome.