When we look at finite sets, how can we tell whether they are the same size without counting? Imagine a primitive tribe that determines social status by who owns more goats, but no one in the tribe can count past 5. If two members of the tribe each own more than 5 goats, how can the tribe compare the total number of goats the two people own? One solution: take out goats in pairs, one goat from each owner at a time. Whichever owner runs out of goats first has fewer goats. If they run out at the same time, they have the same number of goats. (This goat analogy is a good one, but it is not, as far as I am aware, frequently used to motivate what we are doing here. I'm not sure where I saw this analogy. If someone can point out where it comes from, I'd appreciate it.)
We wish to do the same thing with infinite sets. We are like the primitive tribe: we can only count finite numbers. So we say that two sets are the same size if there is a function which maps elements from one to the other with no overlap and no element missed. That is, two sets A and B are the same size if there is a function f from A to B such that for any b in B there is some a in A with f(a)=b, and for any x and y in A, f(x)=f(y) if and only if x=y. A mathematician would call such a function a bijection, or say that it is one-to-one and onto. Thus, for example, the sets {a, b, c} and {cow, potato, glockenspiel} are the same size, and we could define the function f by f(a)=cow, f(b)=potato and f(c)=glockenspiel. This notion applies to sets that are not necessarily finite, and we call it "cardinality". We say that |A|=|B| if there is such a function between A and B. We similarly say that |A|<|B| if there is no such function, but there is a subset C of B such that |A|=|C|. It is not hard to see that cardinality obeys the properties one would expect a notion of equal size to obey. For example, if |A|=|B| then |B|=|A|. Similarly, if |A|=|B| and |B|=|C| then |A|=|C|.
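For finite sets we can actually check the two conditions by machine. Here is a minimal sketch (the helper name `is_bijection` is mine, purely for illustration) that tests whether a mapping is one-to-one and onto:

```python
def is_bijection(f, domain, codomain):
    """Return True if the dict f maps domain onto codomain with no overlap."""
    image = [f[a] for a in domain]
    one_to_one = len(set(image)) == len(image)  # f(x) = f(y) only when x = y
    onto = set(image) == set(codomain)          # every b in B is hit by some a
    return one_to_one and onto

f = {"a": "cow", "b": "potato", "c": "glockenspiel"}
print(is_bijection(f, {"a", "b", "c"}, {"cow", "potato", "glockenspiel"}))  # True
```

Of course this brute-force check only works for finite sets; for infinite sets we have to exhibit the function and argue about it directly, which is exactly what the rest of the post does.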
At first glance, cardinality leads to some strange results. For example, the positive even integers have the same cardinality as the positive integers: we can see this by setting f(n)=2n. So a set can have the same cardinality as a proper subset of itself. That's not intuitive. And if one plays around a bit, one will see that any infinite subset of the natural numbers has the same cardinality as the set of natural numbers.
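We can see concretely why f(n)=2n misses nothing and doubles up on nothing: its inverse n = k/2 recovers each n from the even number it is paired with. A small sketch:

```python
# f pairs each positive integer with a positive even integer;
# g is its inverse, so no even number is missed or hit twice.
f = lambda n: 2 * n
g = lambda k: k // 2

evens = [f(n) for n in range(1, 11)]
print(evens)  # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print(all(g(f(n)) == n for n in range(1, 11)))  # True: g undoes f
```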
One might tentatively conclude that all infinite sets have the same cardinality. In the 1870s, Cantor systematically examined infinite sets and made the notion of cardinality precise. He first showed that the natural numbers have the same cardinality as the rational numbers.
This is even more counterintuitive than the earlier results since the rationals are dense on the real line, but this result fits with our tentative conclusion that all infinite sets are the same cardinality.
There are a variety of proofs of Cantor's result. One method is to show that N, the set of natural numbers, has the same cardinality as N x N, the set of ordered pairs of natural numbers. We can use the map (m,n) -> k = 2^(m-1)(2n-1). Thus, for any positive natural number k, m determines the highest power of 2 that divides k and n determines the largest odd divisor. It is easy to see that m and n uniquely determine k, and that every positive integer arises from exactly one pair. From that result it is not hard to show that the rationals and the natural numbers have the same cardinality.
Then Cantor hit upon a surprising result. He showed that there is no one-to-one and onto function from the natural numbers to the real numbers. Not all infinite sets are the same size. We shall discuss this and related results in a follow-up blog entry.
7 comments:
Is there a way to make Cantor's argument work if we insist on representing real numbers with binary instead of decimal digits?
There's probably already something along those lines in crank.net's "Cantor was wrong" section. :)
It is actually a curious problem that binary doesn't work for this argument. I have seen cranks before make an issue of the fact that this is base dependent.
The problem for binary arises essentially because of the issue of rollover (just as in decimal we have to avoid the problem of .019999... becoming .0200...). There is a kludge solution to this: assume you have such a 1-1 onto function and the diagonalization gives you 1s after some sufficiently large n. You can show that there is some permutation of the list which does not have that problem. And then you can diagonalize that and reach the same conclusion. If I have time I'll discuss that in the follow-up entry on this topic.
I remember the goat analogy being mentioned at PROMYS 2003 at a guest lecture (though I forget whose). Maybe it's from there? (I've been using it since, too; it really helps people understand the concept).
Tom, that's possible but I seem to remember seeing the analogy in a book before that. So not sure.
...are comments just not working for me, or were my past 2 comments just moderated for being unhelpful? :-/
FWIW I also definitely remember seeing it in some book before PROMYS, I thought it was common...
Harry, I don't see any unmoderated comments from you and I don't recall not approving any. Could you maybe resubmit them? Something must have eaten them.
Well, it was one comment, submitted twice, because the first one never appeared. It was just saying, yeah, I thought it was common, it was what I always used, hey, I have some of those - what might you call them, "popular mathematics"? - books by Eli Maor lying around, let me see if he uses it in "To Infinity and Beyond"... no, he doesn't.
Also I like to prove 2^|N| > |N| and |R|=2^|N| separately....