(note: this post has been closed to comments; comments about it on other pages will be deleted!)
Okay, so I'm still on vacation, but it's clearly time for me to step back into this. Thanks to the people at scienceblogs, namely Goodmath/Badmath and the EvolutionBlog, for taking up my cause. And thanks very much to the people in the comments of my other posts for doing their best to explain. Some of my explanations here use some of their ideas.
Let's do this by classifying the problems people have with my posts, from the most ridiculous to the least (roughly):
You must be a public school teacher, and I fear for your students. You don't know enough math to teach it. Stop filling their heads with nonsense.
Wow. I actually teach at a private school. Some of my former students have gone on to get perfect scores on the SATs, study advanced math at top-level universities, and place well at the national level in math competitions. I have a B.A. in math from a top-20 school. All the information online (you can start here and here) agrees with me. In addition, while I know that there are public high school teachers who are not very good with math, all of the ones I know who teach at, say, the pre-calculus level or higher also agree with me. Please don't insult the entire profession just to discredit me. If you're that worried, go to graduate school, study math, and teach it yourself.
.999... is clearly less than 1, but mathematics isn't advanced enough to handle infinity, so you can't prove it. My intuition isn't flawed, math is.
This is another one that's so wrong that I barely know where to start. Historically speaking, this debate is quite old. In the 19th century, this apparent paradox (.999... is clearly less than 1 vs. .999... equals 1) was addressed by some great mathematicians (Cauchy and Dedekind, among others). In particular, they formalized the notion that the real numbers are infinitely finely divisible, and in their formulations, all arithmetic operations worked the way they seemed like they should. Using their formulations, all proofs result in .999...=1. Those formulations are discussed more at the Wikipedia site linked above. Mathematics has been handling infinity well (using definitions that don't even require the use of the word 'infinity') for at least 100 years now. Until you've studied that work, you might want to be careful about saying that math can't handle infinity.
Variations on: 1 - .666... = .444..., but .999... - .666... = .333..., so 1 and .999... can't be equal.
You'd think that people trying to argue with mathematicians would at least check their work:
1 - .666... = .444...? If that were true, then to check it I should be able to add .666... and .444... and get exactly 1. But .6 + .4 is already 1, and .666... and .444... are each bigger than .6 and .4, so their sum has to be more than 1. The check fails.
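If you'd rather see that check done with fractions, here's one way to write it out (using the facts that .666... is 2/3 and .444... is 4/9):

2/3 + 4/9 = 6/9 + 4/9 = 10/9 = 1.111...

That's more than 1, so the proposed subtraction can't be right. The subtraction that does check out is 1 - 2/3 = 1/3, or in decimals, 1 - .666... = .333....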
Variations on:
10x = 9.5
- x = .5
9x = 9
So x = 1 and also x = .5. SEE!! I can use your stupid method to prove that 1 = .5!
If x is .5, then 10x is 5, not 9.5. Your two equations aren't both true of any single x, so of course you can "prove" something false from them.
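For contrast, here's the argument that parody is imitating, written out in full. In this one, both equations really are about the same number x:

x = .999...
10x = 9.999...
10x - x = 9.999... - .999...
9x = 9
x = 1

The subtraction step is legitimate here precisely because both lines being subtracted are true of the same x. That's the part the parody throws away.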
(A lot of birds with one stone):
10 * .999... isn't 9.999..., it's 9.999...0
1 - .999... is .000...1
2 * .999... is 1.999...8
This is a common mistake among my students. You're mistaking notation for mathematics. Notation is not mathematics. Mathematics is the study of ideas about patterns and numbers, and we have to invent notations to communicate those ideas. Just because you can write something that looks mathematical, that doesn't imply that what you wrote has meaning. The number written as .999... has meaning:
9*(1/10) + 9*(1/100) + 9*(1/1000) + ...
Every 9 in the list means 9*(1/something). But .000...1, for example, is an abuse of notation. It doesn't correspond to any meaning, so it doesn't communicate anything. If you think it means something (and putting a 1 at the END of an ENDLESS list of zeros shouldn't mean anything), you're going to have to tell me what's in the denominator of the fraction represented by that decimal place. If you can't tell me that denominator, you're not using the notation right. If you tell me the denominator is 'infinity', then see the next entry.
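Incidentally, that meaning is already enough to dispose of the 10 * .999... objection above. Here's the multiplication carried out on the sum itself, just as an illustration:

10 * .999... = 10 * (9*(1/10) + 9*(1/100) + 9*(1/1000) + ...)
= 9 + 9*(1/10) + 9*(1/100) + ...
= 9.999...

Multiplying by 10 just shifts every term one place to the left. No term anywhere in the endless list turns into a 0, so there's nothing that could be written as 9.999...0.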
1 - .999... is 1/infinity, which is a number bigger than 0.
Mathematicians don't really use the noun 'infinity' very much, and when they do, it's usually as a shorthand for an idea that is relatively easy to define without the concept of infinity. They do use the adjective 'infinite' to describe lists and sets, and the adverb 'infinitely' to describe how the real numbers are divisible. While some intuitive ideas can be captured by using the idea of 'infinity' as a kind of number, you have to be very careful with it, and standard arithmetic doesn't usually work. But as an intuitive idea, anything that might be written as 1/infinity never behaves differently from the number 0. I can't prove that, however, since 1/infinity doesn't really mean anything. Using infinity as a number creates fallacies that even the doubters of .999... = 1 would disagree with.
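Here's one classic example of the kind of fallacy I mean (my own example, not one any commenter actually made). Suppose 'infinity' were a number that obeyed ordinary arithmetic. Then:

infinity + 1 = infinity
subtract infinity from both sides: 1 = 0

Nobody on either side of the .999... debate believes that 1 = 0, and that's exactly why you can't treat 'infinity' as a number and keep the usual rules of arithmetic.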
.999... effectively equals 1, but it doesn't actually equal 1. It rounds off to 1. You can never really have .999... going on forever because you can't live long enough to write it.
These are all really arguments that claim that .999... isn't really a number, and that you therefore have to stop writing it or round it off at some point. Look, either you allow the possibility that there could be infinitely many (note the use as an adverb!) decimal places or you don't. If you don't allow it, you'll have a lot of trouble with numbers like pi or the square root of 2, which provably can't be written with finitely many decimal places. If you do allow it, you have to be prepared to discuss what happens when all of those decimal places are 9 at the same time, and you have to discuss it without rounding and without talking about when they end.
.999... = 1 if you allow limits, but not if you're just talking about numbers. The limit of the series isn't the same as adding up the numbers.
The EvolutionBlog linked at the beginning of this post has an excellent discussion of this. The upshot is that once you admit that there's an infinite geometric series here (which you have admitted as soon as you merely write .999...), there is no difference between the limit and what the thing actually equals. They have to be the same by any definition that is internally consistent.
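If you want to see why, here's a sketch of the standard reasoning (my summary, not a quote from that post). "Adding up" infinitely many numbers has to mean something, and the only consistent meaning is the limit of the partial sums:

.9 = 1 - 1/10
.99 = 1 - 1/100
.999 = 1 - 1/1000
...
(N nines) = 1 - 1/10^N

The partial sums close in on 1, and 1 is the only number they get arbitrarily close to, so by any internally consistent definition the whole series adds up to 1.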
Your fraction argument only works if I admit that 1/3 equals .333.... I don't think it does, so I don't think your argument works. 1/3 can't be precisely expressed with decimal numbers.
Well, at least the people who argue this are not abusing notation, and they're not attacking me personally, and they understand that the assumptions have to be correct in order for the argument to be correct. So I'm giving some credit to this one. But unfortunately, .333... really does equal 1/3. If you think 1/3 equals some other decimal, you're going to have to tell me what it is. If you think that you can't express it with decimals, then remember that the very word 'decimal' itself comes from our base 10 number system, and that's a biological coincidence due to our 10 fingers. In a different base, 1/3 might be no problem, but 1/2 might be. Any problem results from notation, not from the concept of 1/3. Remember, notation is not math, notation just communicates math.
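To make that concrete, here's how it plays out in base 3 (my example):

in base 3: 1/3 = 0.1 (exactly, one digit)
in base 3: 1/2 = 0.111... (endless), since 1/3 + 1/9 + 1/27 + ... = 1/2

The two fractions swap roles. Which ones "can't be precisely expressed" depends entirely on the base you happen to write in, which is exactly what you'd expect if the limitation lives in the notation and not in the numbers themselves.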
Okay, I really have to go now...I'M ON VACATION, PEOPLE!!!