 Brent,

Since my last reply to you is quite long, I understand that you need time to mull it over.

However, I would really like it if you would work with the real number set (as defined) rather than attaching your own made-up (and contradictory) definitions to it. I have tried telling you this before by using an example (which you found insulting, and whose very, very simple point you then completely ignored) and by giving you a link to the actual definition of the real number set.

Brent wrote:

[But I should say that I am finding myself to be in the constructivism philosophy of math category.]

I take this to mean that you would rather construct your own number system than work with an established one. If this is your intention, then it is a waste of time to argue with you, since you would simply ignore anything I or anyone else says and just make up your own number world for the express purpose of saying that 0.999... != 1. Fine, go ahead. But please do not do it here, since it would be completely off-topic (you defining your own number system has nothing to do with the established real number set and its conclusions).

It's one thing to have misconceptions. It's another thing entirely when you admit that you are deliberately going to ignore established definitions and instead will make up your own for each of the math terms discussed.

Brent wrote:

[After reading the last few posts, I am going to have to agree that 0.999.. is not a number because it is a process.]

..."agree"? Which posts are you "agreeing" with? Do you have a new definition for the verb "to agree" to go along with your redefinition of mathematical terms?

Wait, I forgot, you're going by your very own personal definition of "number", in which fractions like 1/3, 2/7, 4/13, 5/14, 8/9, 4/34, etc. are all "non-numbers" when in decimal form. According to you, the ONLY things that can be "numbers" in decimal form are fractions whose denominator reduces to 1, a power of 2, a power of 5, or a product of powers of 2 and 5.

Go on, try to do some calculations on your calculator and see what you get for various fractions. Or, better yet, get a paper and pencil and do the long division until you either reach a termination point, or you realize that there's a pattern repeating.
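The terminating-versus-repeating behavior described above is easy to check mechanically: do the long division and watch the remainders. A remainder of 0 means the expansion terminates; a remainder you have seen before means the digits cycle from that point on. A minimal Python sketch (the function name and digit cutoff are my own choices):

```python
# Long division of p/q in base 10, tracking remainders.
# A remainder of 0 means the expansion terminates; a repeated
# remainder means the digits cycle from that point onward.
def decimal_expansion(p, q, max_digits=60):
    digits = []
    seen = {}                  # remainder -> position of first appearance
    r = p % q
    while r != 0 and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // q)
        r %= q
    whole = str(p // q)
    tail = "".join(map(str, digits))
    if r == 0:
        return whole + "." + tail          # terminating: q reduces to 2s and 5s
    if r in seen:
        i = seen[r]                        # repeating block starts here
        return whole + "." + tail[:i] + "(" + tail[i:] + ")"
    return whole + "." + tail + "..."      # gave up after max_digits

print(decimal_expansion(1, 8))   # 0.125      (denominator is 2^3)
print(decimal_expansion(1, 3))   # 0.(3)      (3 is neither a 2 nor a 5)
print(decimal_expansion(2, 7))   # 0.(285714)
```

Since there are only q possible remainders, the loop must hit 0 or repeat within q steps, which is exactly why every fraction is either terminating or eventually repeating.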

I can go on to try to explain further, but that would involve using established definitions of the decimal system and number bases, which you would completely ignore and replace with your own versions, so I will stop here for now.

As for correcting your answers to questions 1 and 2, would you also take the time to explain how multiplication works for you? Is "2.4 x 3" the same as doing "2.4 + 2.4 + 2.4"? Is multiplying a number by 10 the same thing as adding up ten of that number?

If not, please state how your version of multiplication works when multiplying with non-10 numbers (since, according to you, multiplying by 10 shifts the decimal point and sticks on a zero at the right-hand "end" of a number).

If so, does it strictly follow that the decimal point is moved at some point during the addition?

I'm guessing none of those who disagree that 0.999... (repeating) is equal to 1 have ever taken discrete math in college. Or if they have, they failed because they didn't pay attention. What arguing do you need? There's a formal proof in the initial post of this blog. Either find an error in one of the steps of the proof and show that it is indeed an error, or shut up.

0.333... is always a bit less than 1/3, so you cannot prove that 1/3 + 2/3 = 3/3 is the same as 0.333... + 0.666... = 0.999.... Hilarious!!!
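For reference, the usual formal argument is the geometric series (a sketch; I'm assuming the proof in the original post is essentially this one):

```latex
0.999\ldots
\;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}}
\;=\; \frac{9}{10}\cdot\frac{1}{1-\frac{1}{10}}
\;=\; \frac{9}{10}\cdot\frac{10}{9}
\;=\; 1 .
```

The second step is the standard closed form for a geometric series with ratio 1/10, which converges because the ratio is less than 1.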

I strongly advise you not to state 1 = .999... at a party; you'll only get burnt by the mob.

Just wondering, is .9 repeating a real number? I thought all non-ending, non-repeating decimals were real, but that if they repeated they weren't real. But I haven't had math in years, so I'm probably wrong. If you could email me the response that would be great; otherwise I may not get it.

No, they're both real. Rational numbers have repeating non-terminating decimal expansions, or terminating ones. The more precise definition of a rational number, however, is that it's a fraction p/q where p and q are integers. And irrational numbers are the opposite: they're non-terminating, non-repeating decimals, and you can't write them as a fraction the way you can a rational number. So 2^(1/2), pi, e, etc. are good examples. Either way, irrational and rational numbers are both real. Complex (or imaginary) numbers are not real.

And yumwind, 0.333... IS 1/3. If 0.333... eventually terminated, then it wouldn't be quite 1/3. The problem with the way you see it is that you can't write out all the 3's after the decimal point, because there are an infinite number of them.
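The p/q characterization above can be checked mechanically: a purely repeating block of k digits equals those digits divided by 10^k - 1 (subtract x from 10^k * x and the repeating tails cancel). A small Python sketch using exact rational arithmetic (the helper name is my own):

```python
from fractions import Fraction

# If x = 0.(d1 d2 ... dk) repeats with period k, then
# 10^k * x - x = (the k repeating digits read as an integer),
# so x = digits / (10^k - 1).
def repeating_to_fraction(repeating_digits):
    k = len(repeating_digits)
    return Fraction(int(repeating_digits), 10**k - 1)

print(repeating_to_fraction("3"))        # 1/3  (i.e. 0.333...)
print(repeating_to_fraction("142857"))   # 1/7
print(repeating_to_fraction("9"))        # 1    (i.e. 0.999...)
```

The last line is the point of the whole thread: the repeating block "9" gives 9/9, which is exactly 1.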

Look over some of the proofs again. They explain quite clearly how these decimals are exactly the values you say they aren't. This is true.

When I was 12 years old, a maths teacher taught it by writing 0.999... on the board and asking us to express it in a different way.

The way I chose was "1 minus the reciprocal of a very large number" (i.e. 1 - 1/infinity). If there were a way to express infinity, 1/infinity would be as near as possible to 0; and if it weren't, you could add one to infinity and get something nearer to 0.

You don't have to put a definite value on something to know its effects. 0.9 recurring = 1.
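The "1 minus a reciprocal of a very large number" intuition above becomes rigorous once an infinite decimal is defined, as is standard for the reals, as the limit of its truncations:

```latex
1 - \underbrace{0.99\ldots9}_{n\ \text{nines}} = 10^{-n},
\qquad\text{so}\qquad
0.999\ldots = \lim_{n\to\infty}\bigl(1 - 10^{-n}\bigr) = 1 - 0 = 1 .
```

No appeal to "1/infinity" is needed: the gap below 1 is exactly 10^(-n) after n nines, and that gap tends to 0.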

There's a difference between us engineers and you scientists: we get the girl!!!!

There's a difference between engineers and mathematicians, in that engineers would be content with a mere example, but mathematicians would not, and would only be content once they had a rigorous proof.

An example does not prove your point, because it is still possible that a counterexample exists.

So the implication here is that, if you have a number infinitesimally close to one, adding an infinitesimally small number to it won't create a sum equal to one?

Andrew,

First off, understand that the whole 0.999... = 1 thing is based on the accepted definitions of decimal notation and the set of Real Numbers.

From what you wrote, it is obvious that you are already convinced that 1 > 0.999..., and that you think you can count real numbers in sequence just like you can with integers.

Integers:
... -2,-1,0,1,2,3, ...

Reals:
... 0, ?
(Wait, what comes after, or even before, 0?)
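The parenthetical question has a precise answer: there is no "next" real, because between any two distinct numbers the midpoint lies strictly between them, and iterating produces as many in-between numbers as you like. A small Python sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Between any two distinct rationals there is always another:
# the midpoint. Iterating shows there are infinitely many,
# so no real number has an immediate successor or predecessor.
a, b = Fraction(9, 10), Fraction(1)   # 0.9 and 1 really are distinct
for _ in range(3):
    mid = (a + b) / 2
    assert a < mid < b
    print(mid)                         # 19/20, then 39/40, then 79/80
    a = mid
```

This is exactly why "the number just below 1" does not exist in the reals; 0.999... cannot play that role.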

Changing the definition of what the decimal notation means (as in, switching out "0.999..." and replacing it with "0.999...9" or "0.999...0") invalidates your (and other deniers') entire argument.

It is understandable to have prior misconceptions. But it's considered dishonest if, after looking up the definition of what the reals are, you still insist on sticking to your misconceptions.

There is no such thing as an infinitesimally small number in the Reals, nor can it be represented with proper decimal notation. Look up what decimal notation is before you try answering back with a made-up number like "0.00...1".

Between any two real numbers that are not equal, there are infinitely many other real numbers. This is why there is no such thing as "a number just below/right next to 1" in the reals. The fact that you cannot find even a single real number between 0.999... and 1 means that they are the same number.

Though the thread is very old, I cannot resist commenting on it. It seems to me that, as some have said, the reluctance of people to accept the proposition .9-repeating = 1 is a matter of symbolic confusion.

However, I don't think the confusion is in the representation of the numbers involved. My candidate for the problem is "=".

If "=" is taken as "identical to", then when the rigorous meaning of that in math is conflated with the everyday meaning of "identical" confusion ensues. Worse, people take the non-rigorous definition and insist of special rigor because "numbers are precise".

If it is taken as "evaluates to", then it can be understood as a mathematical operation, not a factual comment that the two things are identical in every particular. They are identical in mathematical terms because they evaluate to something that is interchangeable for every mathematical purpose.

On the other hand, if we take the everyday concept of "approaching indistinguishable" and add the arbitrary rigor people want to place on numerical representations (that is, that they must not differ at all, and in this case the number of digits in the representations clearly makes them "different"!), we end up with a "self-evident" proof against equality.

This is a philosophical problem at many levels. Wittgenstein warned against assuming that, just because a word can be *properly* applied to different things, those things must share a fundamental essence. We humans tend to do this with abandon and confuse ourselves mightily.

I hope that was cogent enough to be worth reading. Thanks for a nice post.

Yaakov,

I disagree that the main problem is what nay-sayers think "=" means. Yes, there are definitely those who really do seem confused on the meaning of "=". However, from what I see, the majority of the naysayers insist either that there's a non-zero difference between 0.999... and 1 (namely, 0.000...1), or that 0.999... is not a real number that exists.

In neither instance is the meaning of "=" directly disputed (the latter comes close, but is more about notation than anything).

The naysayers who argue that 1 is an integer and 0.999... isn't, and therefore they can't be equal, may be what you mean by those who do not understand what "=" means. If there are others, I am interested in having them pointed out.

Even though infinite .99999999 will never = 1, when, in reality, will that last infinite .1 of unknown measure, unit, etc. ever even matter? Besides, when will something ever be created that has the exact measure, unit, etc. of 1.000000 out to the billionth or whatever decimal place? So really, in the real world in which all these numbers are put to work, the chance that you will ever have something be exactly the number you want, once you account for those infinite decimal places, is zero. So really, 1:1 and .999999:.999999 will only ever exist in your head, so stop wasting your life.

Here's another argument that certainly differs from all the others here. (And by the way, please stop asking me to name a number between 0.99~ and 1; it gets annoying.)

Okay, let's turn this whole thing upside down for a minute. Math is a cool tool, and it certainly helps us understand the world around us, but so far, the physical world would seem eternally divisible. If the physical world could somehow be divided into multi-dimensional pixels, those would obviously be further divisible. Keeping this in mind, it really makes no sense to speak of integers at all.

Integers are convenient for solving equations that tell us important things about this universe, but it is the integers that are disconnected from our world, where there is no such thing as absolute precision. Absolute precision is the convenient shortcut created by man to solve equations, NOT infinity, which is abundantly present in the real world.

People are saying that infinity is a strange concept we can never understand but just have to accept. If we can't describe the real world with logic, maybe logic itself is the true approximation?

Thus:

0.5 is really 0.5000...
1 - 0.5 is then 0.4999...

0.4999... = 0.5000..., but 1/2 is a flawed, if convenient, illusion.

Most people here don't seem to realize that .1~ does not equal 1/9. Nor does .3~ equal 1/3. It's the most common way of representing it, but it is not exactly equal.
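The 0.4999... = 0.5000... equality asserted above (and the .3~ = 1/3 equality denied just now) both follow from the standard shift-and-subtract argument; a sketch for the first:

```latex
\begin{aligned}
x        &= 0.4999\ldots \\
10x      &= 4.999\ldots \\
10x - x  &= 4.999\ldots - 0.4999\ldots = 4.5 \\
9x       &= 4.5 \\
x        &= 0.5 .
\end{aligned}
```

The infinite tails of nines cancel exactly in the subtraction, which is what makes the two notations name one number rather than two.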

There may be no number between .9~ and 1, but that doesn't mean that they're the same, because .9~ isn't a number. It's a concept. Infinity isn't a number. It's a concept.

MOOT, that is exactly my point, except the other way around. There is no such thing as absolute precision in the physical world. Sure, you can have 3 oranges, but there are no fundamental physical equations that use integers for parameters; indeed, quantum mechanics and superposition are an extreme example of this.

Infinity and eternal divisibility are very real in space and time; it is the integers that are only a human concept, where quantifying reasonably similar collections of particles and their infinitely complex correlations is a useful generalization.

This question is not on the same topic, but it is in the same category, I believe (where our normal logic doesn't always hold true). Isn't there a way, using normal fraction rules, to show that 2 minus 2 does not equal 0, or something along those lines?

