*(note: this post has been closed to comments; comments about it on other pages will be deleted!)*

Okay, this is going to be my last post on the topic of how .999...=1, started in this post, and continued here. I will engage in more refuting, but I have begun to see how useless the refutations are because many of the "non-believers" (and I use that word jokingly because, as I wrote here, I don't really consider it a matter of belief) don't bother to visit the refutations page or don't read it if they do. This blog has gotten over 70,000 hits since the original post was on the front page of digg. The discussion there and at numerous other small forums makes it clear that the refutations aren't being read. There have even been meta-discussions on how this fact can get a warning from digg about containing inaccurate information, even though every knowledgeable source of information agrees with me. The only reasonable criticism I found on the digg site is that this doesn't belong in the news because the proof has been around for so long that it ought not count as news. Unfortunately, enough people seem to disbelieve it that even old news needs to be explained *just one more time* in the hopes that a few people will come to understand the math better.

But on to the refutations. This time, I will do them in decreasing order of sophistication of the argument. This puts the most reasonable first. (And again, I give full credit to some of the comments for their help in refuting.)

Variations on: 1 - .999... = 1/infinity, which is greater than zero.

I already discussed this a little bit, but there's more to be said. While there are some mathematicians who have formalized the notion of infinitesimals, those systems require a very careful formal extension of the real numbers to what they call the hyperreal numbers. I admit that I am not versed in this stuff because it is way beyond my understanding. However, I understand enough to know that they are not meant to *replace* the real numbers, but to *extend* them. The inventors of hyperreal numbers and the algebra of infinitesimals assuredly still agree that within the real numbers (a caveat that I have been very careful to include ever since this objection arose) .999...=1.

Within the real numbers, 1/infinity (which I will represent in this essay with the variable E, for epsilon, a standard name for very small (but not infinitely small!) numbers) behaves just like the number zero. As an example of this, I will refute:

Variations on: .333... doesn't really equal 1/3. It is just slightly less than 1/3 in the same way that .999... is just slightly less than 1.

(Now remember, I have to put some words into objectors' mouths, since they do not present their objections precisely, which is the whole problem. I apologize if I mischaracterize an objection somehow.) The only way to make this consistent, I think, is to claim that:

1/3 = .333... + E (remember that E is 1/infinity)

This would mean that:

1 = 3/3 = 3(.333... + E) = .999... + 3E

So if you object like this, tell me about this 3E number. Is it greater than E? If it is, then .999... isn't really *as close as you can get* to 1, is it? "Aha, Mr. Polymath! You yourself said that you can't use normal algebra on infinity, so it's still possible that the 3E number (the residue from tripling 1/3) might be the smallest possible positive number!" Yes, indeed, I said that. But if you're going to say that E and 3E are really the same thing, then you are saying exactly what I'm saying. If E and 3E are the same, then E is behaving *exactly like the number zero*. Or, if you don't like using algebra on this, we can just consider it like this: if .333... lacks something when it tries to be 1/3, and if .999... lacks the same thing when it tries to be 1, then either that lacking quantity is behaving just like the number 0, or .999... is actually *not* as close to 1 as you can get.
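If it helps to see the "lacking quantity" concretely, here is a small Python sketch (my own illustration, using exact fractions; the names `E` and `truncated_third` are just labels) showing that if you cut .333... off after n digits, the leftover E and its triple 3E both shrink toward zero together:

```python
from fractions import Fraction

# E_n is what 0.333...3 (n threes) lacks when it tries to be 1/3;
# 3 * E_n is then what 0.999...9 (n nines) lacks when it tries to be 1.
for n in (1, 5, 10, 20):
    truncated_third = Fraction(10**n // 3, 10**n)  # 0.333...3 with n digits
    E = Fraction(1, 3) - truncated_third           # the "lacking quantity"
    print(n, E, 3 * E)                             # both gaps shrink together

# For every n, 3 * E_n is exactly 1/10**n: one unit in the last decimal
# place. No fixed positive number survives as the digits go on forever.
```

Any candidate for a "smallest positive leftover" is eventually undercut by some 1/10**n, which is exactly why, within the real numbers, E behaves like zero.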

Variations on: Why do you insist that there be a number between .999... and 1? Can't .999... simply be the next number down from 1?

The real numbers have been **defined** in such a way as to guarantee that they are "dense". That means that between any two real numbers, you can always find another real number (infinitely many, actually, but certainly at least one). The easiest one to find is typically the average (x+y)/2. But whether you specifically refer to the density property or not, I would still challenge these objectors to tell me what happens if you add .999... and 1, and then divide by 2. I assume they would have to answer that 1+.999... = 1.999..., and then 1.999.../2 is what? .999...5? I covered in the last refutations essay how .999...5 is an abuse of notation and doesn't have any meaning because you can't tell me the denominator of the place value represented by the 5. Sooooo.....what is 1.999.../2? If it's something larger than .999..., then you've contradicted yourself since now .999... is not the "next number down" from 1. If you say it's less than .999..., then you've just completely ruined the idea of taking an average and ending up between the numbers you started with. And if you say it's equal to .999... then (using N for .999...) you're saying that:

(1 + N)/2 = N, which means (if you multiply both sides by 2):

1 + N = 2N, which means (if you subtract N from both sides):

1 = N, which is what I've been saying all along.

The only thing left for you to say is that the average of 1 and the "next number down" from 1 doesn't exist, since that average can't be less than, greater than or equal to .999.... And if the average doesn't exist, you'll have to deny the basic fact (called **closure**) that if you add two real numbers or divide a real number by 2, you'll end up with another real number. I suspect you're not really trying to deny the closure of the reals.
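For the skeptics who prefer to see it computed, here is a sketch in Python (my own, with exact fractions) showing that if .999... stopped after any finite number of nines, the average with 1 would land strictly between them, so no terminating string of nines can be the "next number down" from 1:

```python
from fractions import Fraction

# If .999...9 stopped after n nines, its average with 1 would squeeze
# strictly between the two: density (and closure) in action.
for n in (1, 3, 6):
    x = 1 - Fraction(1, 10**n)   # 0.9, 0.999, 0.999999
    avg = (x + 1) / 2
    assert x < avg < 1           # always a real number strictly between
    print(n, x, avg)
```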

Variations on: .333... is an approximation of 1/3, but it's not actually equal to 1/3.

These objectors seem to think of the number .333... as having some independent existence from the number 1/3, and thus having the option of being the same as or different from it. Actually, .333... is merely a notational description of what happens when you divide 3 into 1, which is the meaning of 1/3 (see below). If you do long division to determine the decimal equivalent of 1/8 (and I'm not going to demonstrate that here), you'll find that the tenths place contains a 1, the hundredths place contains a 2, the thousandths place contains a 5, and the ten-thousandths place (and every place after that) contains a 0. We write that as 1/8=.125 with no problem (although it clearly could also be written as .125000...). When you try the same thing with the long division for 1/3, you'll find that the tenths place contains a 3, the hundredths place contains a 3, the thousandths place contains a 3, the ten-thousandths place contains a 3, and eventually you'll notice the pattern. *Every* place you might ask about contains a 3. How are we going to write that? We have decided to write it as .333.... That's all. It's just a notation. .333... *has to be* the same as 1/3 because it's merely a notation for the result of dividing 1 by 3.
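The long division itself is easy to mechanize; here is a short Python sketch (the function is my own illustration) that extracts decimal places one at a time, showing 1/8 settling into zeros and 1/3 producing a 3 in every place:

```python
# One step of long division per decimal place: multiply the remainder
# by 10, take the quotient digit, keep the new remainder.
def decimal_digits(numerator, denominator, places):
    digits = []
    remainder = numerator % denominator
    for _ in range(places):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(decimal_digits(1, 8, 6))    # [1, 2, 5, 0, 0, 0]
print(decimal_digits(1, 3, 10))   # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```

The remainder for 1/3 is 1 at every step, so the digit is 3 at every step; that unending pattern is all the notation .333... records.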

Variations on: 1/3 isn't really a number because you can't truly divide anything equally into 3 parts without a residue left over.

(Biiiiig breath here...) Wow. 1/3 isn't a number. This represents a pretty basic problem with the understanding of how numbers work, and explaining something this basic to someone who doesn't understand is probably futile, but here goes.

None of these objectors was really claiming that 1/2 isn't a number. They see that dividing 1 by 2 gives .5, but they claim that 3 doesn't really "fit" into 1 (they might be interested in my theory of remainders). While I can (sort of) see how you might give a special privilege to dividing something into halves that you don't want to give to dividing into thirds, do you really also want to give that special privilege to 1/5 (which also has a nice, easy decimal representation)? Or even 1/125? (Of course the reason some numbers make nice, easy decimals is the accident of our base 10 counting system. You can't easily take a third of a dollar, but you can take a third of 90 cents.)
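Incidentally, that "accident of our base 10 counting system" can be made precise; here is a sketch (the function name is my own) checking which fractions 1/n get a terminating decimal, namely exactly those n whose only prime factors are 2 and 5, the factors of 10:

```python
from fractions import Fraction

def terminates_in_base_10(f):
    # A fraction has a finite decimal exactly when its reduced
    # denominator has no prime factors other than 2 and 5.
    d = Fraction(f).denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print([n for n in range(2, 17) if terminates_in_base_10(Fraction(1, n))])
# -> [2, 4, 5, 8, 10, 16]: halves, quarters, fifths, eighths, tenths,
#    and sixteenths terminate; thirds, sixths, sevenths repeat forever.
```

So the "special privilege" of 1/2 and 1/5 is entirely about our choice of base, not about the numbers themselves.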

The fact is, it's *so* possible to divide 1 by 3 that the very *meaning* of the symbol 1/3 is "the number you get when you divide 1 by 3". We *created* 1/3 to actually be the result of dividing 1 by 3, and we made up a symbol for it (namely, 1/3) that is supposed to remind you of it. If you don't think 1/3 is really a number, you'd better stop all those 2nd-grade kids who are learning about it.

(Whew, that's the best I can do at that one.)

(Actual quote from comments) "Is is NOT and WILL not EVER be exactly 1, because, by DEFINITION, it is LESS than 1. If you fail to understand this, then you simply fail to understand the definition of .9 repeating. I was sick in school, and never went on to uni and higher math, but even I know you're waay wrong." (emphasis mine)

Of course, by now you'll know how I'm going to answer this person mathematically. The definition of .999... actually says that it is equal to 1. But that's not my point in bringing up this objection, which basically is just disagreement by intuition.

This (and other comments of which this is the most blatant) seem to imply the following:

"You mathematicians are just sooooo pointy-headed and obstinate that you have your little blinders on and you can't think outside the box. Out here in the real world, we know what's what, and we don't have to study your stupid Cauchy sequences or Dedekind cuts to know stuff about math. In fact, *not* having studied formal mathematics actually gives me *more* credibility, so you should believe what I say."

Along with this seems to come the notion that I am irresponsible for teaching this, and that people like me are what's wrong with education today: namely, a bunch of geeks who don't teach anything about the real world.

No, no, no, no, no. The thing that's wrong with education today is that so many people (not just kids) just want the shortcutting, life-applicable, "give me the bullet" answer without taking the time to *really think* about anything. Do you truly believe that in the thousands of years of brilliant, creative thinking about math, no one noticed that .999... *seems at first* to be less than 1? The sun *seems* to revolve around the earth, but we don't question the astronomers. Matter *seems* to be infinitely divisible, but we don't question atomic physicists. Consuming sugar *seems* to be the right thing to do when I feel that my blood-sugar level is low, but I didn't question my doctor when he (correctly) explained that this would (counterintuitively) just perpetuate the low-blood-sugar cycle. We trust professional chefs to make our restaurant food taste good, airline mechanics to fix our planes, architects to design our skyscrapers. All of these people have spent years and years studying their fields and they hold lifetimes of research and intuition in their heads. But mathematicians are just too dorky and clueless to understand a basic (although slightly counterintuitive) fact about representing numbers in our base 10 notation system? Give me a break.

I'm not claiming to be right on everything, just this fact about math. And I'm not claiming to be the greatest teacher in the world. But I'm not what's wrong with our education system. Your non-appreciation for rigorous math, and the centuries of thought that went into it, is what's wrong with our education system. Sticking your fingers in your ears and going La-La-La-La-I-Won't-Listen-To-Math-Geeks-La-La is what's wrong with our education system. Show some humility.

We write that as 1/8=.125 with no problem (although it clearly could also be written as .125000...).

Or .124999...

:)

Posted by: Dave S. | June 29, 2006 at 02:08 PM

Dave S.: And in fact, I've definitely seen my fair share of proofs involving decimal expansions that start with "assume that the decimal expansion of x doesn't end with infinitely many zeros"!

Posted by: glasser | June 29, 2006 at 02:46 PM

Hear, hear, polymath. Now move on to some other interesting math, please.

Posted by: franky | June 29, 2006 at 03:07 PM

Well the thing I cannot understand is how someone can admit that they don't have an extensive education in mathematics and are very much a math novice but assert the fact that they know what they are talking about and that all well-educated mathematicians are wrong. I guess it goes along with the same ignorance that people use towards doctors when they believe that the doctor doesn't know what he/she is talking about.

Sigh...oh well

Posted by: Chip | June 29, 2006 at 03:09 PM

Finding a number between 0.99|9 and 1.00|0 really is the simplest way to argue the case.

I'll wait around to see if there are still objectors...chuckle.

Posted by: Ranbir | June 29, 2006 at 05:16 PM

We should clearly teach the controversy in our schools here. Who needs experts when we have democracy?

Posted by: Drealoth | June 29, 2006 at 07:59 PM

"but even I know you're waay wrong"

That's hilarious! Good job btw, polymath. You tell 'em.

Posted by: Le Bohemian | June 29, 2006 at 08:06 PM

I entered comments way back. Just for the record - for you as a math teacher, from me as a former math teacher: I still think that the problem is caused because they never learned the difference between number and numeral. Making that distinction explicit makes a lot of arithmetic and algebra much easier.

Posted by: karl | June 30, 2006 at 12:28 AM

And, by the way, since the standard way to change a fraction into a decimal is with the standard long division algorithm, I point out the following: it is just an algorithm, it can be modified. Consider calculating the decimal for 9/9. As you did above: 9 goes into 9 one time - put a 1 above the line, then 9 goes into zero zero times, put a zero after the decimal point - that's standard. But consider this: suppose you don't notice that 9 goes into 9 one time, so you look at how many times 9 goes into 90. Since you can only use a single digit, it goes 9 times. Put a 9 in the first place to the right of the decimal point. 9 times 9 is 81. 90 - 81 = 9. Bring down the next zero. 9 goes into 90 9 times again, and again and again. What do you get as the quotient? .999...

Cute?

Posted by: Karl | June 30, 2006 at 12:38 AM
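For anyone who wants to try Karl's variant without the pencil work, here is a minimal Python sketch of it (my own illustration; the function name is invented). The one twist is deliberately capping each quotient digit at 9, which is Karl's "you can only use a single digit" rule:

```python
# Karl's modified long division: "fail to notice" that the divisor goes
# in once, cap every quotient digit at 9, and 9/9 comes out as 0.999...
def understated_division(n, d, places):
    digits = []
    remainder = n                       # skip the leading quotient digit 1
    for _ in range(places):
        remainder *= 10
        digit = min(9, remainder // d)  # single-digit rule: 90/9 -> 9, not 10
        digits.append(digit)
        remainder -= digit * d
    return "0." + "".join(map(str, digits))

print(understated_division(9, 9, 8))   # 0.99999999
```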

Good one Karl, I had thought about that too. In fact, that would work with any long division problem that comes out to a whole number. You could always end up with one number down, followed by .999...

Posted by: Le Bohemian | June 30, 2006 at 06:25 AM

Karl,

I had noticed that too. The reason I didn't post it is that it's not the "standard" version of long division, so I suspect it would draw criticism from people who don't see that it's correct. But it is an excellent point.

Posted by: Polymath | June 30, 2006 at 08:41 AM

I came over from Mathforge to see the brouhaha. It reminds me of a humbling experience I had a while ago when I said to one of my engineering friends that mathematics was largely free of pseudoscience and crackpots. Unfortunately I had spoken much too soon and was immediately led on a round-trip tour of the kookiest mathematics sites on the internet. The most amusing was definitely the site where the claim was made that "mathematicians are biased to the positive" and therefore -1 * -1 should be equal to -1. After rolling around laughing for several minutes I was forced to write a (quick) axiomatic development of the real number system.

The strange thing that I keep encountering is that while to us it seems completely obvious that -1 * -1 = 1 (it is really forced on us by the axioms chosen for the real number system - and besides, could you imagine what it would do to analysis if it were suddenly changed ;-)) it really isn't very obvious to the lay person. In explaining some of what I do to other people I frequently encounter people who are quite literally still stuck in Zeno's paradoxes. The limit process is an utterly incomprehensible concept to many people. The old example of walking across a room half the remaining distance at a time still stumps people. I've found that people simply cannot connect what they believe to the rigour of mathematics. To them the mathematical concept of infinity seems wrong and obscure.

I really applaud your blog for taking this often controversial topic on and bringing some clarity to people. For the insult-slinging people out there who believe this person to be lying about the topic, let me make a recommendation: pick up "A Course in Pure Mathematics" by Hardy and "Foundations of Analysis" by Landau, and when you've read both in detail, come back in a few years and try to argue your point again. Until then, trust the mathematicians.

Posted by: Darren | July 01, 2006 at 01:32 AM

Hey, that's a good refutation. I would really like to know more about the hyperreal numbers, but I'm currently just researching infinite set theory... so I suppose I'll get to that in due time. This whole .999... = 1 "debate" has really gotten me interested in math again. Thanks polymath!

Posted by: mark | July 02, 2006 at 01:00 AM

Just a quick note about hyperreals -- they are a field extension of the real numbers, which in particular means that if .999...=1 is true in the reals, it's also true in the hyperreals. In particular, the algebraic proof (and I think the infinite series proof) holds independent of what field extension of the reals you consider.

Of course, I'm not well-versed in hyperreals, as they're far outside my research area; I just know a fair bit about field extensions.

Posted by: Davis | July 03, 2006 at 03:09 PM

Egad. Wow. Some people's ability to convince themselves of things beyond all evidence to the contrary is truly stunning. It's hard for me to believe that that many people posted to argue against the fact that 0.999.... equals 1--I guess there are things about infinity that are just too counterintuitive for some people to be willing to accept them. (And, of course, the irony is that some of those commenters accused you of not understanding infinity...wow.)

Anyway, it's my first time visiting this blog--came here from your recent reply to a post in Good Math, Bad Math--but I'll probably be by again. And you have my sympathies regarding your having to deal with so many nutcases making bad arguments against basic mathematics...

Posted by: Alun Clewe | July 03, 2006 at 07:04 PM

Looks like I've kicked the debate off again on the Yahoo Answers website. Link:

https://uk.answers.yahoo.com/question/index;_ylt=AhYEX19vsKLDljTED2owZX0gBgx.?qid=20060706075109AAuYuXC

Posted by: hotterthanthesun | July 06, 2006 at 10:17 AM

Also, the best answer so far (from the YAHOO site):

Question:

"Does 0.99999... recurring = 1????"

Answer:

Yes.

Try finding a number between the two!

Posted by: hotterthanthesun | July 06, 2006 at 10:27 AM

This is an interesting discussion, but is, at its heart, irrelevant. Let me opine on the reason why.

The basis for argument that .999...=1 is sound, if you ignore the basic flaw in any argument that tries to express fractions in decimal equivalents. A fraction is essentially an external reference into some form of unit. Such as 2/3 of a block of cheese. The size of the resulting fraction is only rational when viewed in relation to the assigned unit.

Yet still, we persist in trying to assign decimal meaning to a fractional representation. This is erroneous. Yet, for as long as we do so, we must then accept that .999...=1 for all of the well reasoned approaches outlined by Mr. Polymath.

But it still "sits wrongly" with those who understand that there is an underlying flaw ... but just can't put their finger on it. They understand that .333... does not equal 1/3, nor do any of the other fractions equate to their most common decimal expressions.

This problem rears its ugly head in other ways. When I was teaching network engineering to young EE majors, this problem frequently appeared in computations designed to originate in one type of unit and conclude in another. The students frequently forgot to manipulate the units as an entity in their own right. Which underscores my previous point on fractions being an external reference, not something as absolute as a real number on a number line. (Note that this becomes even more complex when working with electrical power logarithms, deciBels, and half power points.)

A final example would be the common representation of Internet Protocol addresses. A simple 32 bit binary number used in a logical AND function with a 32 bit binary mask to form the truth table resultant of the ANDed function, which is then applied to the primary address for the purpose of "local reading" of the delivery address. While logically this is a simple solution, and equates very well to an analogy of the U.S. Postal Mail system ... we just couldn't leave it alone! We had to first break these 32 bits up into four sections of 8 bits each (since we already had the "Byte" concept going ... but that could be another post) and called them Octets (even though they were not Octal). We then further compounded this confusion by representing each of these octets as a decimal expression! Now, try to teach this "simplification" to a room full of supposedly intelligent college seniors and post-grads. If we teach our mathematicians, scientists, and engineers to think in a logical and empirical manner ... why do we continue to try to make math and science "neater" by tying up perceived "loose ends"? There are no loose ends ... merely theorems without (as yet) sufficient proofs.

As the saying goes, "God doesn't throw dice with the Universe." But I think this is because the Universe would clean God out of folding money!

Until we get the neat-niks out of math and science, we will continually be faced with issues such as these.

Irrelevant as they may be.

Posted by: Jim Byrom | July 06, 2006 at 10:01 PM

"They understand that .333... does not equal 1/3, nor do any of the other fractions equate to their most common decimal expressions."

Guh? In what sense do you mean that?

A fraction is a rational number. The rational numbers are a (dense) subset of the real numbers. All real numbers have (not necessarily finite) decimal representations. And representations in any other base, for that matter. Where's the "flaw"?

Posted by: Davis | July 09, 2006 at 07:42 PM

Here's my opinion on this.

0,999.... is not equal to 1. Why? Because it defies common sense. No matter how many nines you put after that zero, it's not gonna equal 1.

However, the difference between 0,999... and 1 is infinitely small, which is mathematically equal with 0.

Therefore, 0,999... = 1

Posted by: angry_russian | July 10, 2006 at 03:31 AM

"Yet still, we persist in trying to assign decimal meaning to a fractional representation. This is erroneous."

A decimal IS a fraction! That's what it is. 0.5 is 5/10. 0.125 is 125/1000. They are one and the same.

By the way, in other number systems you have the same kind of phenomenon. In octal, .777... = 1. In hexadecimal, .FFF... = 1. In binary, .111... = 1.

Posted by: Shawn | August 01, 2006 at 07:45 AM
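Shawn's point generalizes cleanly; here is a sketch with exact fractions (my own illustration) showing that in any base b, the partial sums of the repeating digit b-1 differ from 1 by exactly 1/b**n, a gap that vanishes as the digits continue:

```python
from fractions import Fraction

# In base b, the repeating digit (b-1) plays the role of 9:
# (b-1)/b + (b-1)/b**2 + ... + (b-1)/b**n equals 1 - 1/b**n.
def partial_sum(base, terms):
    return sum(Fraction(base - 1, base**k) for k in range(1, terms + 1))

for base in (2, 8, 10, 16):
    s = partial_sum(base, 30)
    assert 1 - s == Fraction(1, base**30)  # the gap is exactly 1/base**30
    print(base, float(s))
```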

Any argument that opens with the position from "common sense" is not by any stretch of the imagination a proof. A proof of something one way or the other will trump any common sense argument 100% of the time.

Posted by: Mark | August 02, 2006 at 02:04 PM

I. Will. Never. Ever. Again. Make the first comment on one of your math posts.

Ever.

Posted by: caitmcq | August 06, 2006 at 11:23 AM

I can't comment on Robinson's non-standard analysis as I have never studied it, but at least in the non-standard analysis of Detlef Laugwitz 0.99999... does not equal 1! There is a very readable explanation as to why not in an appendix to the doctoral thesis of Detlef Spalt, "Vom Mythos der Mathematischen Vernunft". Unfortunately it is only available in German. Laugwitz defines infinitesimals simply by introducing the arithmetic operations for use with them, in analogy to the introduction of 'i' and the complex numbers, thereby creating a perfectly valid calculus of infinitesimal numbers in which 0.999... does not equal 1.

Posted by: Thony C. | September 11, 2006 at 12:13 PM

i think he is right. the evidence given that .999repeating = 1 is much more convincing than the arguments that its not.

Posted by: Luke | November 17, 2006 at 08:44 PM