*(note: this post has been closed to comments; comments about it on other pages will be deleted!)*

Okay, so I'm still on vacation, but it's clearly time for me to step back into this. Thanks to the people at scienceblogs, namely Goodmath/Badmath and the EvolutionBlog for taking up my cause. And thanks very much to the people in the comments of my other posts for doing their best to explain. Some of my explanations here use some of their ideas.

Let's do this by classifying the problems people have with my posts, from the most ridiculous to the least (roughly):

You must be a public school teacher, and I fear for your students. You don't know enough math to teach it. Stop filling their heads with nonsense.

Wow. I actually teach at a private school. Some of my former students have gone on to get perfect scores on the SATs, study advanced math at top-level universities, and place well at the national level in math competitions. I have a B.A. in math from a top-20 school. All the information online (you can start here and here) agrees with me. In addition, while I know that there are public high school teachers who are not very good with math, all of the ones I know who teach at, say, the pre-calculus level or higher also agree with me. Please don't insult the entire profession just to discredit me. If you're that worried, go to graduate school, study math, and teach it yourself.

.999... is clearly less than 1, but mathematics isn't advanced enough to handle infinity, so you can't prove it. My intuition isn't flawed, math is.

This is another one that's so wrong that I barely know where to start. Historically speaking, this debate is quite old. In the 19th century, this apparent paradox (.999... is clearly less than 1 vs. .999... equals 1) was addressed by some great mathematicians (Cauchy and Dedekind, among others). In particular, they formalized the notion that the real numbers are infinitely finely divisible, and in their formulations, all arithmetic operations worked the way they seemed like they should. Using their formulations, all proofs result in .999...=1. Those formulations are discussed more at the wikipedia site linked above. Mathematics has been handling infinity well (using definitions that don't even require the use of the word 'infinity') for at least 100 years now. Until you've studied that work, you might want to be careful about saying that math can't handle infinity.

Variations on: 1 - .666... = .444..., but .999... - .666... = .333..., so 1 and .999... can't be equal.

You'd think that people trying to argue with mathematicians would at least check their work:

1 - .666... = .444...? If that were true, then to check it I should be able to add .666... and .444... and get 1. But .6 + .4 is already 1, and .666... and .444... are each larger than .6 and .4, so their sum must be more than 1. The check fails.
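If you'd rather not trust a pencil-and-paper check, exact rational arithmetic makes the same point. Here's a minimal sketch using Python's standard `fractions` module (recall that .666... means exactly 2/3 and .444... means exactly 4/9):

```python
from fractions import Fraction

two_thirds = Fraction(2, 3)   # the exact value of .666...
four_ninths = Fraction(4, 9)  # the exact value of .444...

# The claimed difference fails the addition check:
print(two_thirds + four_ninths)   # 10/9 -- already more than 1
# The actual difference is 1/3, i.e. .333...:
print(Fraction(1) - two_thirds)   # 1/3
```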

Variations on:

10x = 9.5

- x = .5

9x = 9

So x = 1 and also x = .5. SEE!! I can use your stupid method to prove that 1 = .5!

If x is .5, then 10x is 5, not 9.5. Your equations are unrelated, so *of course* you can prove something false.

(A lot of birds with one stone):

10 * .999... isn't 9.999..., it's 9.999...0

1 - .999... is .000...1

2 * .999... is 1.999...8

This is a common mistake among my students. You're mistaking notation for mathematics. Notation is *not* mathematics. Mathematics is the study of ideas about patterns and numbers, and we have to invent notations to communicate those ideas. Just because you *can write* something that looks mathematical, that doesn't imply that what you wrote *has meaning*. The number written as .999... has meaning:

9*(1/10) + 9*(1/100) + 9*(1/1000) + ...

Every 9 in the list means 9*(1/*something*). But .000...1, for example, is an abuse of notation. It doesn't correspond with any meaning, so it doesn't communicate anything. If you think it means something (and putting 1 at the END of an ENDLESS list of zeros shouldn't mean anything), you're going to have to tell me what's in the denominator of the fraction represented by that decimal place. If you can't tell me that denominator, you're not using the notation right. If you tell me the denominator is 'infinity', then see the next entry.
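To make that meaning concrete: chopping the list off after n nines gives a partial sum of exactly 1 - 1/10^n, so the gap below 1 shrinks past any bound. A quick sketch (the cutoff of 20 terms is arbitrary):

```python
from fractions import Fraction

# Partial sums of 9*(1/10) + 9*(1/100) + 9*(1/1000) + ...
partial = Fraction(0)
for k in range(1, 21):
    partial += Fraction(9, 10**k)  # add the 9*(1/10^k) term

# After 20 terms the sum is exactly 1 - 1/10^20:
print(Fraction(1) - partial)  # 1/100000000000000000000
```

Every denominator in the sum is a definite power of 10, which is exactly what ".000...1" can't supply.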

1 - .999... is 1/infinity, which is a number bigger than 0.

Mathematicians don't really use the noun 'infinity' very much, and when they do, it's usually as a shorthand for an idea that is relatively easy to define without the concept of infinity. They do use the adjective 'infinite' to describe lists and sets, and the adverb 'infinitely' to describe how the real numbers are divisible. While some intuitive ideas can be captured by using the idea of 'infinity' as a kind of number, you have to be very careful with it, and standard arithmetic doesn't usually work. But as an intuitive idea, anything that might be written as 1/infinity never behaves differently from the number 0. I can't prove that, however, since 1/infinity doesn't really mean anything. Using infinity as a number creates fallacies that even the doubters of .999... = 1 would disagree with.

.999... *effectively* equals 1, but it doesn't *actually* equal 1. It rounds off to 1. You can never really write .999... going on forever because you can't live long enough to write it.

These are all really arguments that claim that .999... isn't really a number, and that you therefore have to stop writing it or round it off at some point. Look, either you allow the possibility that there could be infinitely many (note the use as an adverb!) decimal places or you don't. If you don't allow it, you'll have a lot of trouble with proofs that pi or square-root-of-2 can't be written using a number that has finitely many decimal places. If you do allow it, you have to be prepared to discuss what happens if they all equal 9 at the same time, and you have to discuss it without rounding or talking about when they end.

.999... = 1 if you allow limits, but not if you're just talking about numbers. The limit of the series isn't the same as adding up the numbers.

The evolution blog linked at the beginning of this post has an excellent discussion of this. The upshot is that once you admit that there's an infinite geometric series here (which you have admitted as soon as you merely *write* .999...), there is no difference between the limit and what the thing actually equals. They have to be the same by any definition that is internally consistent.
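For readers who want the computation behind that: the standard geometric-series fact is that a series with first term a and ratio r (with |r| < 1) sums to a/(1 - r), and here a = 9/10 and r = 1/10. A sketch in exact arithmetic:

```python
from fractions import Fraction

a = Fraction(9, 10)  # first term: 9/10
r = Fraction(1, 10)  # each later term is the previous one divided by 10

total = a / (1 - r)  # closed form for the sum of the infinite series
print(total)         # 1
```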

Your fraction argument only works if I admit that 1/3 equals .333.... I don't think it does, so I don't think your argument works. 1/3 can't be precisely expressed with decimal numbers.

Well, at least the people who argue this are not abusing notation, and they're not attacking me personally, and they understand that the assumptions have to be correct in order for the argument to be correct. So I'm giving some credit to this one. But unfortunately, .333... really does equal 1/3. If you think 1/3 equals some other decimal, you're going to have to tell me what it is. If you think that you can't express it with decimals, then remember that the very word 'decimal' itself comes from our base 10 number system, and that's a biological coincidence due to our 10 fingers. In a different base, 1/3 might be no problem, but 1/2 might be. Any problem results from notation, not from the concept of 1/3. Remember, notation is not math, notation just communicates math.
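The base-dependence is easy to see by running the digit expansion yourself. Here's a sketch of a helper (the function name is my own, nothing standard): in base 3, 1/3 terminates as .1, while 1/2 becomes the repeating .111...:

```python
def digits(num, den, base, places):
    """First `places` digits of num/den (with 0 < num < den) in the given base."""
    out = []
    rem = num
    for _ in range(places):
        rem *= base
        out.append(rem // den)  # next digit of the expansion
        rem %= den              # carry the remainder forward
    return out

print(digits(1, 3, 10, 6))  # [3, 3, 3, 3, 3, 3] -- 1/3 repeats in base 10
print(digits(1, 3, 3, 6))   # [1, 0, 0, 0, 0, 0] -- 1/3 terminates in base 3
print(digits(1, 2, 3, 6))   # [1, 1, 1, 1, 1, 1] -- but 1/2 repeats in base 3
```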

Okay, I really have to go now...I'M ON VACATION, PEOPLE!!!

"Showing that different bases can represent these problem fractions without repeating decimals would only help my case, not hinder it."

Can anyone explain this to me? Adam?

Posted by: Steve Albright | June 22, 2006 at 04:32 PM

My turn....

In my understanding, the limit to 1 is not exactly the same as 1. In a way, 1/3 does not equal 0.333... if you are not able to understand what is the meaning of that "...".

0.9999... does not equal 1, but is an infinitely precise approximation of the value. Here, "infinitely" is as precise a value as you want. In fact, 1 is the absol...

..

.

damn, I realise that it doesn't matter, English is not my native language and I will not be able to share my view on the subject.

In a way yes, 0.9... vaguely equals 1. But you have to understand again that a limit to 1 can be a more or less exact 1. This is the goal of the limit, and of all the infinitesimal calculus after all.

Posted by: edomaur | June 22, 2006 at 05:57 PM

I think that those of you trying to explain .999...=1 are taking the wrong tack. The problem is a lack of understanding of notation, or of the difference between number and numeral. The numeral in question is ".999...". It includes the "..." which has a specific DEFINED meaning. That's what all those objectors keep missing - the meaning of it - which is that the 9's go on FOREVER, not just for 1000, or a million, or something. So if you multiply that number by 10 (which has the effect of moving each digit one place to the left) you get 9.999... And there are still as many 9's to the right of the decimal point as there were before. Now do the arithmetic: x = .999..., so 10x = 9.999..., so 9x = 9 and voila.

Posted by: Karl | June 22, 2006 at 08:10 PM

To add to what Karl mentioned, for those who doubt 10 x 0.33... = 3.33..., due to the reasoning that "it's missing a decimal place!", try this little thought:

10 x 1/3 = 10/3

"10/3 = 3 + 1/3" when converting the fraction into a mixed fraction.

We all must have to agree that 1/3 = 0.33... due to simple long division (to say otherwise is just too screwed up for words). So what do we get after a simple substitution?

10 x 0.33... = 3 + 0.33... = 3.33...

Posted by: Monimonika | June 22, 2006 at 10:43 PM
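[Monimonika's substitution above checks out in exact rational arithmetic; a quick sketch:]

```python
from fractions import Fraction

third = Fraction(1, 3)  # the exact value behind 0.33...

# 10 * (1/3) really is 3 + 1/3, i.e. 3.33...:
print(10 * third == 3 + third)  # True
print(10 * third)               # 10/3
```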

Adam's busy right now. He's over at the Ministry of Love trying to convince Winston 2+2=5.

That's right Adam. I belittled you. Now tell me what 1/3 equals in decimal notation.

Posted by: JD | June 22, 2006 at 11:47 PM

JD, that was hilarious! Like, incredibly. Hey, and Steve asked you a question, Adam:

"Showing that different bases can represent these problem fractions without repeating decimals would only help my case, not hinder it."

What does this even MEAN? Did you even think about it before you wrote it???

They say people think twice as fast as they speak, so that they can think TWICE.

Posted by: Timothy | June 23, 2006 at 02:42 AM

I really can't see why 1/3 is .333333... The back of my head tells me that it is infinitesimally larger than 0.3, and that 2/3 is the same w.r.t. 0.66666666..., so their sum should be infinitesimally larger than 0.99999999..., which is equal to 1.

Posted by: Abhinav | June 23, 2006 at 05:37 AM

Abhinav, can you do long division? You know, actually divide 1 by 3 by paper and pencil (without an electronic calculator)? After getting as far as 0.3 for the answer you go into an infinite loop of having a remainder of 1. Thus the repeating 3s at the end there.

No one should really be able to dispute that, unless you go into using different bases other than the standard base-10.

Posted by: Monimonika | June 23, 2006 at 06:14 AM
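[Monimonika's long-division argument can be made literal in code: track the remainder at each step, and as soon as a remainder repeats, the digits must repeat forever. A sketch:]

```python
def long_division(num, den, steps):
    """(digit, remainder) pairs from dividing num by den in base 10."""
    pairs = []
    rem = num
    for _ in range(steps):
        rem *= 10
        pairs.append((rem // den, rem % den))
        rem %= den
    return pairs

# Dividing 1 by 3: every step yields digit 3 and remainder 1 again,
# so the 3s never stop.
print(long_division(1, 3, 4))  # [(3, 1), (3, 1), (3, 1), (3, 1)]
```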

Re: 1/3 in a different base.

Writing things in different bases doesn't change the number at all. They are just ways of representing the number. Rather than each digit being a power of 10, it's a power of some other number. Other than that, base-3, base-16, or base-60000 are all the exact same. There is no number that is represented by one that can't be represented in another.

So, for example, in base-3 1/3 is simply .1. In base-10 1/3 is written as .333... In base-16 it is .555...

These are all the exact same number. The difference in representation is simply because of the way long division works. Any unit fraction (1/X) will have an infinite decimal if the X at the bottom doesn't evenly divide into the base that you're writing it in.

This isn't a failure of math, or an indication that the number isn't real. Again, it's simply a consequence of long division.

Posted by: Xanthir | June 23, 2006 at 11:21 AM

Very funny and interesting comments (from an "ex-mathematician" point of view).

Posted by: chat-loupe | June 23, 2006 at 04:19 PM

I think the issue here is whether the limit is reached at infinity or not. It's obvious that .999... approaches 1 as n approaches infinity, but when n = infinity*, does .999... = 1? I haven't yet been able to find any math resources that adequately deal with this subject.

*- if you doubt this, ask yourself how many repeating 9s there are

Posted by: mark | June 23, 2006 at 05:58 PM

mark,

What happens is that n doesn't approach or reach infinity. It increases without limit--or increases infinitely, if you want to use the word in there somewhere--and never does reach anything. It just keeps on going, through all the integers.

In the same way, it's not that the limit is reached, but that there's no lower number than the limit which isn't exceeded by the continuing decimal. It's the least upper bound of the continuing decimal--it's the smallest of all the numbers which are greater than any term in the decimal.

"To infinity, and beyond!" is a cool slogan from a really great movie, but it's not someplace you can go with numbers.

Posted by: adamsj | June 23, 2006 at 09:37 PM

First, I want to say that I know (like all mathematically educated folks I know) that .999... does, in fact, equal the number 1.

However, Mr. Polymath, I see that a lot of the people who fail with this concept are getting hung up on notation and decimals VS integers.

So, I have an alternative question that I hope might eliminate these hang ups, but I'm not sure if it is real...

It seems to me that the conjecture can be restated as follows:

The number NINE repeating is EQUAL to the number ONE with ZERO repeating.

In other words:

999... = 1000...

is an equivalent statement to:

.999... = 1

Is this true?

Posted by: manyoso | June 23, 2006 at 10:49 PM

The problem with that is that the "numbers" 999... and 1000... do not exist. By saying 999... = 1000... then you are essentially equating infinity as a number (by multiplying infinity to both sides of the equation). This is simply invalid.

Posted by: Chip | June 23, 2006 at 11:38 PM

adamsj,

Thank you for your reply. I can begin to see why there is debate over this issue now. Perhaps the hinging point in the debate is "how do you define a number?" From what I understand (and I'm not an expert at math), if you have two numbers that are not equal, you should be able to draw a line between them and plot points on this line. There are no points between .999... and 1, so does that make them the same number?

Posted by: mark | June 23, 2006 at 11:39 PM

manyoso:

999... and 1000... are not defined, so a statement like

999... = 1000...

is neither true nor false, it just makes no sense.

If you think they have a meaning (and let it be just your own one), give me some examples how they relate to "normal" numbers.

Like, what is 999... + 23?

If you can't answer that, it shows that your concept of 999... doesn't fit into the world of integers/rationals/irrationals (as "+" is defined for all of them).

Of course you can make up a new calculus of such numbers (maybe there even exists one), but that is another "language" then, and transferring statements from classical mathematical language over isn't possible.

Posted by: schnubbi | June 24, 2006 at 04:00 AM

mark,

That's a good way to think about it. You're getting at the heart of it now.

Posted by: adamsj | June 24, 2006 at 06:52 AM

Polymath, you wrote:

"'.999... = 1 if you allow limits, but not if you're just talking about numbers. The limit of the series isn't the same as adding up the numbers.'

"The evolution blog linked at the beginning of this post has an excellent discussion about this. The upshot is that once you admit that there's an infinite geometric series here (which you have admitted as soon as you merely write .999...), there is no difference between the limit and what the thing actually equals. They have to be the same by any defintion that is internally consistent."

Well, I'm glad that my question was only slightly more ridiculous than the claim that 1/3 != 0.333... Personally I find that latter concept a good deal more ridiculous than the concept of the limit of the sum of a geometric series. But tastes differ.

I have to say in my own defense that what I actually did was present a question: If the limit of the converging sum is 1 as n approaches infinity, then does n ever reach infinity?

I now believe I have an answer to that question, based on comments here: that is, the question is badly formulated by the use of a poorly defined concept: infinity.

As I understand it, the infinity symbol is used simply to represent an unbounded series. That is, if we define some value X, and X is allowed to increase without limit, we say that "X approaches [infinity symbol]."

Clearly, then, the concept of "infinitely small" has no meaning in mathematics. "Small" describes the size of some measurable quantity, and as such it can't be represented by a negative number. We can say that "-1 is less than 0" and make mathematical sense, but in actual measurement the smallest possible number is 0.

What I'm trying to do here is illustrate the difference between the words we use to think and the concepts used in mathematics. I believe that disconnect is what underlies all of the intuitive discomfort with the claim "0.999... = 1." Am I making sense so far, teacher? :)

OK then, when we say "0.999... = 1" we are making the claim that there "is no difference" between those two numbers. The mathematical expression of that idea would be: 1 - 0.999... = 0

I think that writing it like that may help us to bridge the gap between philosophy and mathematics. Because if we actually try to perform that operation, we run up against the fact that 0.999... is an infinite expansion. That is, the number of 9s to the right of the decimal point is unbounded. IT DOES NOT END. "Infinity" is not a number of any sort: it's simply a way to express that concept of an unbounded series.

Therefore, if you actually try to perform the operation "1 - 0.999..." then clearly the result is 0.000..., or an unbounded series of zeros. And I don't think anyone could challenge the claim that

0.000... = 0

Thus we see:

1 - 0.999... = 0.000...

or

1 - 0.999... = 0

In plain English, that equation means "There is no difference between these two numbers. They are literally the same."

Do you think explaining it that way might be helpful?

Posted by: Paul S | June 24, 2006 at 04:52 PM

I had some more thoughts as I wrestled with this idea in my head today:

1) If .999... < 1, it is not possible to be plotted on a number line. The number would not exist. It could not be one by our definition. It could also not be less than one, because any value less than one will be less than .999...

2) If .999... < 1, .999... would not be a real number, and if it was a real number (hypothetically), .000...1 would also have to be a real number.

Posted by: mark | June 24, 2006 at 08:44 PM

I'm a mathematician, and I love your original post. Very thoroughly explained (though I'm not surprised that you still have a lot of people who don't believe it--but they just don't really understand the decimal number system)...

Posted by: Lsquared | June 24, 2006 at 09:30 PM

Haven't seen anyone attack it from this angle, so I'll offer my two cents. If we switch our counting base to one that plays easily with one third we change the discussion entirely:

Base 15 (digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e)

10 = e + 1

1/3 = .5

2/3 = .a

3/3 = 1.0

These add up pretty and intuitively now.

However:

1/7 = .222222|2

6/7 = .cccccc|c

7/7 = .eeeeee|e = 1.0

This demonstrates that the symbology lacks the extensibility to illustrate the math, but the math is always correct. We can change the symbology to demonstrate one, but we lose the ability to represent the other.

Posted by: Jason | June 25, 2006 at 11:29 AM

ok, I just noticed that in the end of this post PolyMath did touch on the base10 issue, but I still like the way I demonstrated it :-)

Posted by: Jason | June 25, 2006 at 11:42 AM

I found a good article at abstractmath.org:

http://abstractmath.org/MM/MMRealNumbers.htm#_Density_of_the

Posted by: mark | June 25, 2006 at 08:42 PM

I don't know what happened to my earlier reply except maybe because I am still new to Typepad.

Anyway, .999999|9 doesn't equal 1; if it did, then subtracting .999999|9 from 1 would be 0.

1.0 - .999999|9 = 10^-(infinity)

Although I like your proofs, you fail to realize the infinitesimally small rounding errors you introduce. Geometric series may produce limits approaching 1, but again, this is only approaching 1. Your example of three decimal 1/3s adding up to .999999|9 ignores that .333333|3 and .666666|6 are only the closest approximations of a perfect fraction (which is why fractions are preferred over decimals).

In practical mathematics .999999|9 is 1 but not in pure math.

Posted by: Lzygenius | June 26, 2006 at 01:01 AM

Lzygenius,

I can see where you're coming from. So the implication is that .999... is not a real number, correct? Because an infinitesimal is not a real number either.

-----------------------------

I have another question on this topic:

Does .000...1 behave any differently than 0, in any circumstance? In the same vein, does .999... behave any differently than 1, in any circumstance?

For example, what's .000...1 * 5? or .000...1 + .000...1?

Posted by: mark@ridlen.net | June 26, 2006 at 11:47 AM