

Pablo Gomez

I agree that "Proffessor David S. Schaul" seems to be trying too hard to look like he knows more than the rest of us mere mortals, but he's somewhat right.

The purpose of the decimal representation of numbers is to be able to write any real number in the form (+/-)n.a_0a_1a_2a_3..., where n belongs to the set of natural numbers and a_i belongs to the set {0,1,2,3,4,5,6,7,8,9} for every i that belongs to the set of natural numbers. But how do we know that every real number has a decimal representation and that there is a real number for every decimal representation? We need to establish a bijection between the set of real numbers and the set of decimal representations. However, to be able to do that, we need to exclude all decimal representations where infinitely many consecutive digits 9 occur. Otherwise, as you yourself proved, we would have at least one number which can be represented by two different decimal representations (since .999... and 1 are both representations of the same real number), and therefore we cannot establish a bijection between the real numbers and the decimal representations. If you exclude all decimal representations where infinitely many consecutive digits 9 occur, then a bijection can be established between both sets. This bijection, and the proof that it is indeed a bijection, is fairly common and can be found in many Analysis or Calculus books; in particular, I suggest Hans Sagan's Advanced Calculus book.

In formal Mathematics saying that .999... = 1 doesn't even make sense. But showing that .999... = 1 to a classroom is always fun to do, and unless you're teaching it to a group of Mathematicians and they believe you, it doesn't really matter.


now, see...here's someone who disagrees and can form a coherent argument that we can talk about.

i agree every real number has a decimal representation (which may or may not be finite), and i agree that every decimal must represent a real number, but i disagree that it must be a bijection (one-to-one by definition). if there are several representations of a real number in the decimal system, i don't see how that causes a problem.

we already allow for multiple representations (sort of) when we agree that .24 and .2400 represent the same number.

i suspect that we agree that .333...=1/3, and i suspect that we agree that .333... is a decimal-system representation of the infinite geometric series (3/10 + 3/100 + ...). well, i would suggest then, that multiplying that series by 3 (which of course distributes to each term) gives a series that's most clearly represented by .999....

the notation is merely being used to represent that notion. it is interesting precisely because the number that is represented by .999... (and thus by an infinite geometric series), according to the formal definition of such an infinite sum, does actually equal 1. that's how it does make sense in formal mathematics.
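that formal definition — the value of an infinite series is the limit of its partial sums — is easy to check with exact rational arithmetic. a minimal sketch (the function name is mine):

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms of 9/10 + 9/100 + ...,
    i.e. the number written 0.99...9 with n nines."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    # the gap between the partial sum and 1 is exactly 1/10**n,
    # which can be made smaller than any positive number
    print(n, 1 - partial_sum(n))
```

since the gap 1/10**n shrinks below any positive bound, the limit — and hence the value of .999... — is exactly 1.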

Ilya Birman

Thanks, that’s funny indeed :-)


Please don't censor your mockery.

I began reading your blog because the particular entry in question was emailed to me by a friend, and I continue reading out of admiration for how you have so accurately and eloquently replied to all arguments and replies.

As a math teacher (not nearly of your caliber; I teach algebra 1 to high school students) I feel that sometimes mockery is an appropriate response.

The Science Pundit

I agree with TL. Sometimes the frustration reaches such levels that mockery is the best option. A couple of weeks ago, I wrote a post (http://thesciencepundit.blogspot.com/2006/11/0000-0.html) where I compiled comments from a few sites (including your own), and cosmetically modified them to argue that "0.000... ≠ 0". I felt it was necessary.

I also agree with you that Pablo is a rare dissenter (in that he's coherent). The key (or flaw) with his argument lies in the bijection between the two sets he defined. But proving bijections (or isomorphisms) is a discipline that most mathematicians are trained in. I think disproving his conjecture would actually make quite an interesting post (to a "mathophile"). Maybe I'll give it a go. ;-)

Pablo Gomez

The problem with allowing several representations for the same number is that it makes everything a lot harder: having a decimal representation is no longer enough to identify a number; you also have to see which number it represents. Then every time you have an equality you can't verify it just by comparing the decimal representations of both sides; you need to check whether the numbers they represent are the same. The prime example of this is obviously "0.999... = 1". This becomes particularly important in proofs by contradiction, where you often arrive at contradictions such as 1 = 2 and usually just go ahead and say that it's obviously a contradiction. However, if we don't care whether there is a bijection between the real numbers and the decimal representations, then every time we arrive at something like 1 = 2 we would need to show that 1 and 2 represent different real numbers. If there is a bijection, then 1 = 2 is obviously a contradiction, because their decimal representations are different and the bijection says that there is only one decimal representation for each real number.

As for the rest of your comments: a decimal representation is always infinite; it's just that many of them have infinitely many consecutive 0 digits (in other words, they end with an infinite tail of 0s). With this in mind it is clear that .24 and .2400 are indeed the same decimal representation. It's just a matter of notation: when a number ends with an infinite tail of repeating digits, we usually write the repeating digits only once and draw a small line above them, but since infinite tails of 0s don't matter for any arithmetic purpose, we just don't write them at all.

As for the infinite series argument: in a formal context you'd have to prove that the series you suggest is indeed represented by 0.999... . Intuitively it might seem obvious, but addition is defined as an operator between two numbers, and you can't just add up an infinite amount of numbers by adding them two by two, because you'll never stop adding. The only way to add an infinite amount of numbers is by using a limit, and as you yourself have stated, the limit of that particular series is 1. Then its decimal representation is 1.

Again, I would like to say that my arguments use VERY formal Mathematics. In real life, this amount of formality is probably unnecessary and saying that .999... = 1 is good enough.


"VERY formal" math, huh? LOL! Quit pretending.

I love the random bijection rule. Thanks for a laugh. Now, go back to playing with your calculator and let mathematicians take care of math.


As a teenager, I wasn't convinced by any of the standard arguments (they do convince me now; I was pretty cocky), so I came up with one of my own.
The problem with those arguments is that they all talk about multiple representations for the same number. An even more serious problem is that if 0.99[etc] didn't equal 1, then you would have different numbers with the same representation.

It's not formal, but the argument goes like this:

Consider the decimal numbers 0.3 and 0.299999[etc]. Express them both in binary. Try to distinguish between them. The result follows.
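One way to carry out that exercise mechanically: 0.3 and 0.2999[etc] both name the same rational, 3/10, so any digit-extraction procedure handed that value can only produce one binary expansion. A sketch (the function name is mine):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary digits after the point of a rational x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 2
        if x >= 1:
            digits.append(1)
            x -= 1
        else:
            digits.append(0)
    return digits

# 0.3 and 0.2999... both denote 3/10, so there is only one value
# to feed the procedure -- and only one expansion to get back.
print(binary_digits(Fraction(3, 10), 12))
# [0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
```

There is no second input to try, which is the point: the two decimal strings are not two numbers.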


Pablo Gomez is not arguing that .999... = 1 is false. He is arguing that .999... should not be a valid representation, so that certain proofs have fewer steps. TheSciencePundit, don't try to "disprove" his bijection. All he's saying is that every nonnegative real number has a unique representation as m+sum(a_k*10^(-k),k=1..) for m an integer and a_k in {0,1,2,3,4,5,6,7,8,9} such that there is no N where a_n=9 for all n>N. You'll find this is true.

It's interesting on the face of it, but I think it falls apart very quickly. Simply note the following lemma:

-In the canonical decimal representation of a number (this is the one everyone else uses, not the one without .999... in it (don't include this part in the lemma)), if there is a smallest N such that a_n=9 for all n>N, then that number is also represented by m_1+sum(b_k*10^(-k),k=1..) where m_1=m if N>0 and m_1=m+1 if N=0, b_k=a_k for k<N, b_N=a_N+1 if N>0, and b_k=0 for k>N. [insert bijection proof here]
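The lemma's digit-bumping is mechanical enough to sketch in a few lines (the helper below is hypothetical; `prefix` stands for the finite block of digits before the infinite tail of 9s):

```python
def normalize(m, prefix):
    """Rewrite m.d1...dj999... (digits `prefix`, then an infinite tail
    of 9s) in the tail-of-0s form from the lemma: discard any 9s at the
    end of the prefix, bump the last surviving digit, or bump m itself
    when every fractional digit is a 9 (the N = 0 case)."""
    digits = list(prefix)
    while digits and digits[-1] == 9:
        digits.pop()
    if digits:
        digits[-1] += 1
        return m, digits        # the infinite tail of 0s is implicit
    return m + 1, []            # the 0.999... = 1 case

print(normalize(0, [9]))        # 0.999...  -> (1, [])   i.e. 1
print(normalize(0, [2, 9]))     # 0.2999... -> (0, [3])  i.e. 0.3
```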

Redefine = for the canonical decimal representations of numbers x and y to mean "represent x and y in bijection-friendly-world; AND digit-wise comparison". Huzzah!

Which is more elegant? That implicit lemma or leaving out "obviously" valid sequences from your set of representations? That's the question you're really arguing, and it's a completely different question than "in the canonical representation, is .999...=1?" Hopefully you aren't confusing anyone who doesn't know any better.

Pablo Gomez

It's nice to see someone who understands my position. Most people don't even consider the possibility of defining the decimal representations to exclude .999... so that the bijection is possible. Personally, I like using the bijection simply because it seems more elegant to me, but that's completely subjective.
GreedyAlgorithm's comment made me realize that maybe I wasn't being clear enough. I'm not trying to disprove that .999... = 1, I'm simply trying to show that there is another way to define decimal representations where saying that .999... = 1 does not even make sense because .999... is not a valid decimal representation. Also, you should probably keep in mind that all this is only to say that "Proffessor David S. Schaul" was not wrong, he just didn't say that he was working with this definition of decimal representations. My guess is that "Proffessor David S. Schaul" is not a native English speaker (which explains his weird English) and that in his country most mathematicians prefer this definition for decimal representations and he just assumed that Mathematicians elsewhere did the same.


I too have enjoyed your defense of the frustratingly un-intuitive claim that .999... = 1. I'd like to recommend the book "Mathematical Cranks" by Underwood Dudley, a humorous treatise on people who stubbornly believe mathematically false things about mathematics. It sounds like it would go well with your current mood.



upon further reflection, i see the point that you and greedy were making. it's very rare that someone comments like you did about .999...=1 who is NOT trying to say that .999... is less than 1. while i don't share your opinion that a decimal representation ought to be unique, you're clearly arguing that any calculation that yields .999... is also a calculation that yields 1. so i don't think we seriously disagree about that.

i do think, though, that you have attributed too much subtlety to the comment that inspired this post. googling the name David Schaul does not give any hits for any professor of any kind, and his claim sounds too much like the (literally) hundreds of other claims that .999... can't equal 1. i think he's just a hack trying to pass himself off as a math professor.


thanks for the recommendation; i very well may look that up!


Thinking about this problem briefly, I came across an argument that seems to cast doubt on 1=0.99…. Using the geometric expansion given in your first post:

0.99…=9*sum(1/(10^(n+1))) as n goes from 1 to infinity.

As you said, this isn’t really up for debate. Now, for 1 to equal 0.99… you should be able to perform any number of legal operations on both sides of the equation 1=0.99…; the one that I am interested in is raising both sides to a power phi.

1^phi as phi approaches infinity is 1. If we raise the polynomial expansion to a power phi the largest term will be (9/10)^phi. Now as phi approaches infinity this term approaches zero, and this is going to be the largest term of the series. Since the largest term of the series approaches zero, all other terms in the series must approach zero. Since all terms approach zero the series itself must approach zero.

So 1^phi does not equal 0.99…^phi. If this argument is flawed in some fashion I would be excited to hear about it.


Noticed a slight error in defining the series: n should go from 0 to infinity. I was debating how I wanted to define it and changed my mind midstream.


EPVH: There are several problems with your argument, the biggest of which is your claim that the sum of an infinite number of things which approach zero must itself approach zero. Consider, for example, 1/N+1/N+1/N+...(N times). As N grows arbitrarily larger, each term approaches zero, but the sum stays 1.
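That counterexample is easy to verify with exact arithmetic; a small sketch:

```python
from fractions import Fraction

# N copies of 1/N: each individual term shrinks toward zero
# as N grows, but the total is always exactly 1.
for N in (10, 1000, 10**6):
    term = Fraction(1, N)
    total = term * N        # the same as adding the term N times
    print(N, float(term), total)  # term -> 0.0, total stays 1
```

So "every term approaches zero" tells you nothing about the sum when the number of terms grows at the same time.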


Worked the math and you are correct. It took only three lines to prove to myself that my argument didn't work.

Sorry about wasting your time.


Polymath, keep up the mockery. Statements like "your skills in mathmatic [sic] theory lack common sense" clearly demonstrate that the treatment is well-earned. :)

Niranjan Srinivas

I do not blame you for your reaction...every human being has finite patience :-D
