
Comments

Steve Albright

"Showing that different bases can represent these problem fractions without repeating decimals would only help my case, not hinder it."

Can anyone explain this to me? Adam?

edomaur

My turn....

In my understanding, the limit at 1 is not exactly the same thing as 1. In a way, 1/3 does not equal 0.333... if you don't understand what that "..." means.

0.9999... does not equal 1, but is an infinitely precise approximation of the value. Here, "infinitely" means as precise a value as you want. In fact, 1 is the absol...
..
.
Damn, I realize that it doesn't matter; English is not my native language and I will not be able to share my view on the subject.

In a way, yes, 0.9... vaguely equals 1. But you have to understand again that a limit at 1 can be a more or less exact 1. That is the whole point of the limit, and of all of infinitesimal calculus, after all.

Karl

I think that those of you trying to explain .999... = 1 are taking the wrong tack. The problem is a lack of understanding of notation, or of the difference between a number and a numeral. The numeral in question is ".999...". It includes the "...", which has a specific DEFINED meaning. That's what all those objectors keep missing - the meaning of it - which is that the 9's go on FOREVER, not just for 1000, or a million, or something. So if you multiply that number by 10 (which has the effect of moving each digit one place to the left) you get 9.999... And there are still as many 9's to the right of the decimal point as there were before. Now do the arithmetic: x = .999..., so 10x = 9.999..., so 9x = 10x - x = 9, and voila: x = 1.
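
If it helps to see what "forever" buys you, here's a quick sketch in Python (my own illustration, using the standard fractions module for exact arithmetic). Stopping after n nines always leaves a gap of exactly 1/10^n, and no positive gap survives every n:

    from fractions import Fraction

    # 0.9, 0.99, 0.999, ... computed exactly: with n nines the value
    # is (10**n - 1)/10**n, so the gap to 1 is exactly 1/10**n.
    for n in range(1, 8):
        nines = Fraction(10**n - 1, 10**n)
        print(n, nines, 1 - nines)

Only by letting the 9's run forever does the gap vanish entirely, which is what the algebra above captures in one step.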

Monimonika

To add to what Karl mentioned, for those who doubt 10 x 0.33... = 3.33..., due to the reasoning that "it's missing a decimal place!", try this little thought:

10 x 1/3 = 10/3
"10/3 = 3 + 1/3" when converting the fraction into a mixed fraction.

We all have to agree that 1/3 = 0.33... due to simple long division (to say otherwise is just too screwed up for words). So what do we get after a simple substitution?

10 x 0.33... = 3 + 0.33... = 3.33...
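
If anyone wants to check that substitution without decimals at all, here's a small sketch in Python (mine, using the standard fractions module, so nothing is ever truncated):

    from fractions import Fraction

    third = Fraction(1, 3)           # exact 1/3, no decimal truncation
    print(10 * third)                # 10/3
    print(10 * third == 3 + third)   # True: 10/3 is exactly 3 + 1/3

No decimal place goes "missing", because no decimal place exists to lose.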

JD

Adam's busy right now. He's over at the Ministry of Love trying to convince Winston that 2+2=5.

That's right, Adam. I belittled you. Now tell me what 1/3 equals in decimal notation.

Timothy

JD, that was hilarious! Like, incredibly. Hey, and Steve asked you a question, Adam:

"Showing that different bases can represent these problem fractions without repeating decimals would only help my case, not hinder it."

What does this even MEAN? Did you even think about it before you wrote it???

They say people think twice as fast as they speak, so that they can think TWICE.

Abhinav

I really can't see why 1/3 is .333333... The back of my head tells me that it is infinitesimally larger than 0.3, and that 2/3 is the same w.r.t. 0.66666666..., so their sum should be infinitesimally larger than 0.99999999..., which is equal to 1.

Monimonika

Abhinav, can you do long division? You know, actually divide 1 by 3 with paper and pencil (without an electronic calculator)? After getting as far as 0.3 for the answer, you go into an infinite loop of always having a remainder of 1. Thus the repeating 3s at the end there.

No one should really be able to dispute that, unless you start using bases other than the standard base 10.
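
That infinite loop is easy to watch. Here's a little Python sketch (my own illustration) of the long-division steps:

    # Long division of 1 by 3, one decimal digit per step.
    remainder = 1
    digits = []
    for _ in range(10):
        remainder *= 10
        digits.append(remainder // 3)   # next quotient digit: always 3
        remainder %= 3                  # the remainder is 1 again, every time
    print("0." + "".join(str(d) for d in digits))   # 0.3333333333

The remainder never hits 0, so the 3s never stop.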

Xanthir

Re: 1/3 in a different base.

Writing things in different bases doesn't change the number at all. Bases are just different ways of representing the same number: rather than each digit being a power of 10, it's a power of some other number. Other than that, base 3, base 16, and base 60000 all work exactly the same way. There is no number representable in one that can't be represented in another.

So, for example, in base-3 1/3 is simply .1. In base-10 1/3 is written as .333... In base-16 it is .555...

These are all the exact same number. The difference in representation is simply a consequence of how long division works. A unit fraction 1/X has an infinite repeating expansion exactly when X has a prime factor that doesn't divide the base you're writing it in: 1/4 terminates in base 10 (0.25) because 2 divides 10, while 1/3 repeats because 3 doesn't.

This isn't a failure of math, or an indication that the number isn't real. Again, it's simply a consequence of long division.
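
You can watch the representation change while the number stays put. A short Python sketch (my own, using exact fractions):

    from fractions import Fraction

    def digits(frac, base, n=8):
        """First n fractional digits of frac in the given base."""
        out = []
        for _ in range(n):
            frac *= base
            d = int(frac)   # next digit
            out.append(d)
            frac -= d
        return out

    third = Fraction(1, 3)
    print(digits(third, 3))    # [1, 0, 0, ...]  ->  0.1 in base 3
    print(digits(third, 10))   # [3, 3, 3, ...]  ->  0.333...
    print(digits(third, 16))   # [5, 5, 5, ...]  ->  0.555...

Same Fraction(1, 3) every time; only the digit strings differ.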

chat-loupe

Very funny and interesting comments (from an "ex-mathematician" point of view).

mark

I think the issue here is whether the limit is reached at infinity or not. It's obvious that .999...9 (with n nines) approaches 1 as n approaches infinity, but when n = infinity*, does .999... = 1? I haven't yet been able to find any math resources that adequately deal with this subject.

*- if you doubt this, ask yourself how many repeating 9s there are

adamsj

mark,

What happens is that n doesn't approach or reach infinity. It increases without limit--or increases infinitely, if you want to use the word in there somewhere--and never does reach anything. It just keeps on going, through all the integers.

In the same way, it's not that the limit is reached, but that there's no number below the limit which isn't eventually exceeded by the truncations of the continuing decimal. The limit is the least upper bound of the continuing decimal - the smallest of all the numbers which are greater than or equal to every term in the decimal.

"To infinity, and beyond!" is a cool slogan from a really great movie, but it's not someplace you can go with numbers.

manyoso

First, I want to say that I know (like all mathematically educated folks I know) that .999... does, in fact, equal the number 1.

However, Mr. Polymath, I see that a lot of the people who struggle with this concept are getting hung up on notation and on decimals vs. integers.

So, I have an alternative question that I hope might eliminate these hang-ups, but I'm not sure if it is valid...

It seems to me that the conjecture can be restated as follows:

The number NINE repeating is EQUAL to the number ONE with ZERO repeating.

In other words:

999... = 1000...

is an equivalent statement to:

.999... = 1

Is this true?

Chip

The problem with that is that the "numbers" 999... and 1000... do not exist. Saying 999... = 1000... essentially treats infinity as a number (it amounts to multiplying both sides of the equation by infinity). This is simply invalid.

mark

adamsj,

Thank you for your reply. I can begin to see why there is debate over this issue now. Perhaps the hinge of the debate is "how do you define a number?" From what I understand (and I'm not an expert at math), if you have two numbers that are not equal, there should be points on the number line strictly between them. There are no points between .999... and 1, so does that make them the same number?
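
That property is called density: between any two distinct real numbers sits their average, strictly between them. A quick Python sketch of my own, with exact fractions:

    from fractions import Fraction

    def midpoint(a, b):
        """A witness strictly between any two distinct numbers."""
        return (a + b) / 2

    truncation = Fraction(999999, 1000000)   # 0.999999 -- nines that stop
    one = Fraction(1)
    m = midpoint(truncation, one)
    print(truncation < m < one)   # True: a finite truncation leaves room

Every finite truncation of .999... leaves room below 1, but .999... itself (nines that never stop) leaves none: no midpoint can be named, so they are the same number.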

schnubbi

manyoso:

999... and 1000... are not defined, so a statement like

999... = 1000...

is neither true nor false, it just makes no sense.
If you think they have a meaning (even if it's just your own), give me some examples of how they relate to "normal" numbers.
Like, what is 999... + 23?
If you can't answer that, it shows that your concept of 999... doesn't fit into the world of integers/rationals/irrationals (since "+" is defined for all of them).

Of course you can make up a new calculus of such numbers (maybe one even exists), but that is then another "language", and transferring statements over from classical mathematical language isn't possible.

adamsj

mark,

That's a good way to think about it. You're getting at the heart of it now.

Paul S

Polymath, you wrote:

"'.999... = 1 if you allow limits, but not if you're just talking about numbers. The limit of the series isn't the same as adding up the numbers.'

"The evolution blog linked at the beginning of this post has an excellent discussion about this. The upshot is that once you admit that there's an infinite geometric series here (which you have admitted as soon as you merely write .999...), there is no difference between the limit and what the thing actually equals. They have to be the same by any defintion that is internally consistent."

Well, I'm glad that my question was only slightly more ridiculous than the claim that 1/3 != 0.333... Personally, I find that latter concept a good deal more ridiculous than the concept of the limit of the sum of a geometric series. But tastes differ.

I have to say in my own defense that what I actually did was present a question: If the limit of the converging sum is 1 as n approaches infinity, then does n ever reach infinity?

I now believe I have an answer to that question, based on comments here: that is, the question is badly formulated by the use of a poorly defined concept: infinity.

As I understand it, the infinity symbol is used simply to represent an unbounded series. That is, if we define some value X, and X is allowed to increase without limit, we say that "X approaches [infinity symbol]."

Clearly, then, the concept of "infinitely small" has no meaning in mathematics. "Small" describes the size of some measurable quantity, and as such it can't be represented by a negative number. We can say that "-1 is less than 0" and make mathematical sense, but in actual measurement the smallest possible number is 0.

What I'm trying to do here is illustrate the difference between the words we use to think and the concepts used in mathematics. I believe that disconnect is what underlies all of the intuitive discomfort with the claim "0.999... = 1." Am I making sense so far, teacher? :)

OK then, when we say "0.999... = 1" we are making the claim that there "is no difference" between those two numbers. The mathematical expression of that idea would be: 1 - 0.999... = 0

I think that writing it like that may help us to bridge the gap between philosophy and mathematics. Because if we actually try to perform that operation, we run up against the fact that 0.999... is an infinite expansion. That is, the number of 9s to the right of the decimal point is unbounded. IT DOES NOT END. "Infinity" is not a number of any sort: it's simply a way to express that concept of an unbounded series.

Therefore, if you actually try to perform the operation "1 - 0.999..." then clearly the result is 0.000..., or an unbounded series of zeros. And I don't think anyone could challenge the claim that

0.000... = 0

Thus we see:

1 - 0.999... = 0.000...

or

1 - 0.999... = 0

In plain English, that equation means "There is no difference between these two numbers. They are literally the same."

Do you think explaining it that way might be helpful?
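
And if a demonstration helps, here's a quick Python sketch of mine using the standard decimal module. Watch the lone 1 get pushed further out as the nines pile up:

    from decimal import Decimal

    # 1 minus a block of n nines leaves a lone 1, n places out.
    # More nines push it further out; with no end to the nines,
    # the 1 never appears at all: 0.000... = 0.
    for n in (5, 10, 20):
        gap = Decimal(1) - Decimal("0." + "9" * n)
        print(format(gap, "f"))   # 0.00001, 0.0000000001, ...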

mark

I had some more thoughts as I wrestled with this idea in my head today:

1) If .999... < 1, it could not be plotted on the number line. The number would not exist. It could not be one, by our definition. It could also not be less than one, because any value less than one will be less than .999...

2) If .999... < 1, .999... would not be a real number, and if it were a real number (hypothetically), .000...1 would also have to be a real number.

Lsquared

I'm a mathematician, and I love your original post. Very thoroughly explained (though I'm not surprised that you still have a lot of people who don't believe it--but they just don't really understand the decimal number system)...

Jason

Haven't seen anyone attack it from this angle, so I'll offer my two cents. If we switch our counting base to one that plays easily with one third, we change the discussion entirely:

Base 15 (digits 0,1,2,3,4,5,6,7,8,9,a,b,c,d,e)
10 = e+1 (that is, fifteen)
1/3 = .5
2/3 = .a
3/3 = 1.0
These now add up neatly and intuitively.

However:
1/7 = .222222|2
6/7 = .cccccc|c
7/7 = .eeeeee|e = 1.0

This demonstrates that the symbology lacks the extensibility to illustrate all of the math, even though the math is always correct. We can change the symbology to represent one set of fractions cleanly, but we lose the ability to represent another.
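
These repeating digits all check out against the usual geometric-series identity: a single digit d repeating forever in base b is worth d/(b-1). A small Python sketch (my own illustration, not rigor):

    from fractions import Fraction

    # 0.ddd... (digit d repeating) in base b equals d/(b-1),
    # because d*(1/b + 1/b**2 + ...) = d/(b-1).
    def repeating_digit_value(d, b):
        return Fraction(d, b - 1)

    print(repeating_digit_value(2, 15))    # 1/7:  .222... in base 15
    print(repeating_digit_value(12, 15))   # 6/7:  .ccc... in base 15
    print(repeating_digit_value(14, 15))   # 1:    .eee... in base 15 is 1.0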

Jason

OK, I just noticed that at the end of this post PolyMath did touch on the base-10 issue, but I still like the way I demonstrated it :-)

mark

I found a good article at abstractmath.org:

http://abstractmath.org/MM/MMRealNumbers.htm#_Density_of_the

Lzygenius

I don't know what happened to my earlier reply, except maybe it's because I am still new to Typepad.

Anyway, .999999|9 doesn't equal 1; if it did, then subtracting .999999|9 from 1 would be 0.

1.0 - .999999|9 = 10^-(infinity)

Although I like your proofs, you fail to account for the infinitesimally small rounding errors you introduce. Geometric series may produce limits approaching 1, but again, this is only approaching 1. Your example of three decimal 1/3s adding up to .999999|9 ignores that .333333|3 and .666666|6 are only the closest approximations of a perfect fraction (which is why fractions are preferred over decimals).

In practical mathematics .999999|9 is 1, but not in pure math.

mark@ridlen.net

Lzygenius,

I can see where you're coming from. So the implication is that .999... is not a real number, correct? Because an infinitesimal is not a real number either.

-----------------------------

I have another question on this topic:
Does .000...1 behave any differently than 0, in any circumstance? In the same vein, does .999... behave any differently than 1, in any circumstance?

For example, what's .000...1 * 5? or .000...1 + .000...1?
