
Thursday, 11 May 2017

0.999... it just keeps on going.

If you follow any kind of mathematical discussions on the internet, you'll probably have noticed that there's one particular topic that refuses to die: \[ 0.\dot{9} = 1 \] In the way of perpetual internet conversations, although the participants change, the actual positions are pretty constant.

One side of the conversation insists that \(0.\dot{9}\) is not equal to \(1\), but is a little (even infinitesimally) less than \(1\).

The other side correctly says (but somehow unpersuasively argues) that, on the contrary, \(0.\dot{9}\) is exactly equal to \(1\), and provides various reasons for that.

Here are what my extremely scientific survey suggests are two of the commonest arguments presented in support of that true statement.

The first goes something like:
We all know that \[ \frac{1}{3} = 0.\dot{3} \] so, multiplying both sides by \(3\) we obviously have \[ 1 = 3 \times \frac{1}{3} = 3 \times 0.\dot{3} = 0.\dot{9} \]
The second is some version of:
Let \[ 0.\dot{9} = S. \] We'll find an equation for \(S\) that tells us what it is.

Clearly, \[ 10S = 9.\dot{9} \] and so \[ 10S-S = 9.\dot{9} - 0.\dot{9} \] i.e. \[ 9S = 9 \] and therefore \(S=1\).
And yet somehow the intransigent "it's a bit less than \(1\)" brigade aren't quite convinced. But why not?

Let's take a closer look at those two arguments.

Both arguments rely on manipulating an infinite decimal expansion in a way that looks entirely plausible. It should, because in each case the manipulation is in fact correct. But in neither case is the truth actually obvious. Why should we believe that multiplying an infinite decimal expansion by \(3\) is the same as multiplying each term by \(3\), or that multiplying an infinite decimal expansion by \(10\) is the same as shunting the decimal point along one place to the right? The second argument also requires the subtraction of one infinite decimal expansion from another. The algorithms we have for doing such arithmetic start at the rightmost digit, and here there is no rightmost digit to start from.

In fact none of these calculations is trivial. The distributive law obviously holds when a finite sum is multiplied through by a constant, but a non-terminating decimal is not a finite sum. It might be tempting to say that the multiplication and subtraction work no matter how many terms of the expansion we use, so they also work when the number of terms is infinite. Unfortunately for that argument, no matter after how many digits we truncate \(0.\dot{9}\), the result is less than \(1\): so why isn't that also true when the number of digits is infinite?
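
Just to make the pull of that last intuition concrete, here is a small Python sketch using the standard fractions module (it proves nothing; it simply checks, in exact rational arithmetic, that every truncation of \(0.\dot{9}\) falls short of \(1\), and that the \(10S - S\) trick applied to a truncation leaves a remainder behind):

    # A quick numerical illustration (not a proof), using exact rational
    # arithmetic so that floating-point rounding can't be blamed for anything.
    from fractions import Fraction

    for n in (1, 5, 20):
        # s_n is 0.99...9 truncated after n nines: 9/10 + 9/100 + ... + 9/10^n.
        s_n = sum(Fraction(9, 10**k) for k in range(1, n + 1))

        # Every truncation falls short of 1, by exactly 1/10^n ...
        assert 1 - s_n == Fraction(1, 10**n)

        # ... and the "10S - S" trick applied to a truncation does not give
        # exactly 9: a remainder of 9/10^n is left over.
        assert 10 * s_n - s_n == 9 - Fraction(9, 10**n)

        print(n, 1 - s_n)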

We have, then, two serious issues.
  1. Most people don't actually have a clear understanding of infinite decimal expansions. They're introduced early enough in the school curriculum that everybody becomes familiar with them, but familiarity isn't the same thing as understanding. So arguments involving them don't carry the psychological weight that they might.
  2. The inductive argument (every finite truncation is less than \(1\), so surely the full expansion is too) makes it so 'obvious' that \(0.\dot{9} < 1\) that the other approaches, persuasive as they might seem, can't displace this conviction.
I suspect that some combination of these issues (though doubtless not clearly formulated) is what makes it so hard for people to see that \(0.\dot{9}\) is actually equal to \(1\). In the end, when faced with two incompatible statements, people will prefer the one that causes less mental discomfort.

Now, we have to take seriously the fact that it really does take a considerable amount of work to give a genuine proof that \(0.\dot{9} = 1\). You have to explain what is meant by a non-terminating decimal expansion, which means that you have to explain the meaning of the limit of a sequence, and in particular the meaning of an infinite series as the limit of its sequence of partial sums. This is a serious undertaking.
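
To give the flavour of where that undertaking ends up (a sketch only, with the limit laws taken on trust): once \(0.\dot{9}\) is defined as the limit of its partial sums, the calculation is \[ 0.\dot{9} = \lim_{n\to\infty} \sum_{k=1}^{n} \frac{9}{10^k} = \lim_{n\to\infty} \left( 1 - \frac{1}{10^n} \right) = 1, \] where the middle equality is just the finite geometric sum, and the final step uses the fact that \(10^{-n} \to 0\).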

Once you've done that, you have various strategies available. Amongst them is proving that the manipulations above are legitimate, so that, with this detail supplied, the arguments really do establish the result. And by the time you've done all the heavy digging about infinite series, and what a limit is, and what the real numbers are, the recipient of your wisdom will find it much harder to retain their conviction that the sum is somehow "infinitesimally" less than \(1\). (I hesitate to claim that they will find it impossible.)
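
For the record, the key facts being leaned on are the limit laws for sequences (stated here as a sketch, not proved): \[ S_n \to S \;\Rightarrow\; cS_n \to cS, \qquad S_n \to S \text{ and } T_n \to T \;\Rightarrow\; S_n - T_n \to S - T. \] Applied to the sequences of partial sums, these are exactly what licenses multiplying \(0.\dot{3}\) term by term by \(3\), shifting the decimal point when multiplying \(0.\dot{9}\) by \(10\), and subtracting \(0.\dot{9}\) from \(9.\dot{9}\).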

It's worth noting that some of those arguing that \(0.\dot{9} = 1\) may not have a much better justification for their position than those who argue that \(0.\dot{9} < 1\). You can believe the right thing for the wrong reason.

So don't be dejected if you can't persuade somebody whose mathematical preparation doesn't extend to an understanding of an infinite decimal expansion as the limit of an infinite sequence of partial sums. For such people, it is (almost) inconceivable that a sequence of approximations can all be less than \(1\) and the limit be exactly \(1\). In fact, if you have persuaded such a person by means of one of the arguments up above, you probably ought to feel a little shabby about it: although the conclusion is true, the argument as presented is no (or at least not much) more rigorous than the (mistaken) intuition that each finite expansion of \(9\)'s is less than \(1\), so the infinite expansion is still less than \(1\).

Alas, you probably ought to be dejected if one of the arguments presented above was what convinced you that \(0.\dot{9}=1\). I'm afraid you were tricked, and it really is a bit more complicated than that. On the bright side, you have a fascinating journey ahead of you if you decide to fill in the gaps.

12 comments:

  1. I appreciate that this makes the important point that the proof that 0.999... = 1 is actually non-trivial, and that the commonly given arguments substitute mathematical trickery for addressing what's actually hard about the proof.

    But I think one should go further. The primary issue isn't so much the complexity of the proof as the fact that it digs deeper into the formal definition of the real numbers than many people are used to. It's very clear that most of the people who struggle with this are actually feeling their way towards an alternative axiomatization of the real numbers. With the benefit of a few centuries of work, we know to be skeptical of the naive theory of infinitesimals that people seem to find intuitive: there's no way to capture exactly the structure people seem to find intuitive.

    The accepted formal definition of the reals is a formalization imposed by mathematicians on an informal notion, and we should stop assuming that when non-experts talk about the reals, they automatically mean the same thing mathematicians do. In fact, they usually mean the naive notion, ill-defined as it is. (And some people do genuinely seem to mean something closer to what we call the hyperreals than to what we call the reals.)

    ReplyDelete
  2. Yes. I agree with a lot of this, and the observation that it delves deeply into the definition of the reals is certainly part of my argument. The difficulty lies in the gap between the naive notion of the reals (which generally isn't precise enough to admit a precise argument) and the formal one of standard mathematics (which is precise enough, but takes a lot of effort to come to terms with).

    There are certainly contexts in which non-standard analysis seems closer to intuition than the standard kind, but the relationship between them (the transfer principle) is also extremely subtle.

    Thanks for your comments!

    ReplyDelete
  3. For skeptical students, I often ask: "If zero point nine repeating is less than one, what number is between them?"

    ReplyDelete
  4. Which should get into an interesting discussion about the structure of the reals...
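
    One way to make that precise (a sketch, leaning on the Archimedean property of the reals): if 0.999... were strictly less than 1 there would certainly be numbers between them, their average for a start. But then e = 1 - 0.999... would be a positive number no bigger than 1/10^n for every n (since 0.999... is at least as big as each truncation 0.99...9 = 1 - 1/10^n), and no positive real number can be that small. So there is no gap to put a number into, which is only possible if 0.999... and 1 are equal.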

    ReplyDelete
  5. I think I was persuaded by a variant of your second argument when I was in school, some 40 years ago. However, the argument was constructed using only rationals. First we learned that rationals give rise to recurring or terminating decimal expansions. Then we learned that the algorithm in the second argument would recover the original rational from any recurring or terminating decimal expansion (the 10 in the example should be 1 followed by one 0 for each digit in the recurring sequence). Finally we were introduced to reals by observing that there were digit sequences that neither recur nor terminate.

    ReplyDelete
  6. Has anyone tried this approach to convince skeptical students?
    1
    = 0.9+0.1
    = 0.9+0.09+0.01
    = 0.9+0.09+0.009+0.001
    etc.

    ReplyDelete
  7. I don't know: of course, there are lots of non-rigorous arguments, and different ones will convince different people. If you try it out, I'd certainly be interested in how successful you find it.

    ReplyDelete
  8. If 0.9999 etc. was a price and I paid with a pound, the till would calculate my change for ever (boring, and I'd have died of old age); skip to eternity and there's still no change though. If the price was a pound I'd be straight out of the shop. Can't be more different.

    ReplyDelete
  10. I can see that the explanation with 1/3 has a big assumption at the beginning... but I am very curious as to what is wrong with the 9S version.
    Aside from seeming sound, it is also very similar to the (now alleged!) proof I use for the sum of a geometric sequence...
    Is my (limited) knowledge all built on sand, and am I just regurgitating maths comforters to the masses?

    ReplyDelete
  11. It's mostly the (easily fixed) problem of whether multiplying an infinite sum by a constant gives the same result as multiplying each term by that constant. The distributive law only tells you that for finite sums, so a small proof about limits is needed. (And if you're very picky, there's also the subtraction of infinite decimal expansions: the algorithm never terminates, so again you have to assume something which is 'obvious' in this particular case, but a little technical.) Not built on sand, just lacking a little bit of detail.

    The usual gaps in the argument for the infinite geometric series are this 'infinite distributive law' and showing that a^n really does tend to 0 in the limit (when |a| < 1).

    So whatever you do, it all ultimately rests on some properties of limits and convergence. With the 0.999... example, the properties are so 'obvious' it's hard to remember that you are using them.
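
    For concreteness (again just a sketch, taking those facts as given): the school derivation of the geometric series writes S = b + ba + ba^2 + ..., multiplies through by a to get aS = ba + ba^2 + ... (that's the 'infinite distributive law' step), subtracts to get S - aS = b, and so S = b/(1 - a). The partial-sum route instead uses the finite identity b + ba + ... + ba^(n-1) = b(1 - a^n)/(1 - a) and then needs a^n -> 0 when |a| < 1. Either way, taking b = 9/10 and a = 1/10 gives S = (9/10)/(9/10) = 1, i.e. 0.999... = 1 yet again.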

    ReplyDelete
  12. Infinite decimals have a long and troubled history. 1/3 could be represented by a terminating expansion in bases that have 3 as a factor, like base 12, but not in base 10. Mathematicians wanted all quantities to have a base 10 representation, so in the 16th century Stevin created the basis for modern decimal notation, in which he allowed an actual infinity of digits.

    For over 200 years mathematicians were troubled by infinite decimals. To determine that 9/10 + 9/100 + ... all add up to 1 would require an infinite amount of work. Another problem is that if we add more terms, then we have still only added a finite number of terms. It seemed that the addition of infinitely many terms was impossible.

    They used the trick of saying 0.9 + 0.09 + ... cannot add up to anything else, so it must add up to 1.

    There were still more problems. If you take any of the infinitely many terms in the series 0.9 + 0.09 + 0.009 + ... then the sum to your chosen nth term is given by the expression 1 - (0.1)^n, which means that the sum is always a non-zero distance away from 1. This holds for ALL of the infinitely many terms, meaning that no term CAN POSSIBLY EXIST where 1 is reached. This variation on Zeno's dichotomy paradoxes appeared to be solid proof that 0.999... cannot equal 1.

    The argument that there is no number between them fails. We begin by assuming that the series with the nth sum of 1 - (0.1)^n is a different number to 1.

    Now if we say 1 is the series that has the nth sum of 1 - 0^n then we can easily find a series halfway between 0.999... and 1, which is the series with the nth sum 1 - (0.5)(0.1)^n and so it is easy to find as many 'numbers' as we like between 0.999... and 1. We cannot presume that when we convert these series into decimal form they will all become equal to 1, because that would mean that our starting position is that 0.999... already equals 1.

    In the early 19th century Bolzano and Cauchy introduced the apparatus of limits and convergence. Now you should no longer think of 0.999... as the endless sum 9/10 + 9/100 + 9/1000 + ...; instead, you should think of it as being the 'limit' that the increasing (partial) sums are approaching.

    The serious problems with this approach are still there, but are less obvious.

    With the limit approach, when you see the symbol 0.999... you should think of its value as being what is returned from the function: THE-LIMIT-OF[9/10 + 9/100 + 9/1000 + ...].

    This approach was generalised, so that all decimals were then said to contain endless digits. For example, 2.5 would now be 2.5000... (i.e. it contains 'infinitely many' trailing zeros).

    The first problem is that if this limit function returns a decimal value, then in order to assess the value of that decimal we again need to call the limit function. We end up in an endless loop of calling the limit function. To avoid this problem, we claim that when we call this limit function for 0.999... then it returns the rational 1.

    But the limit cannot always be described as a rational. For example, THE-LIMIT-OF[4/1 - 4/3 + 4/5 - 4/7 + ...] cannot return a rational and it cannot return a decimal; all it can return is the symbol pi. We have to imagine that this symbol can equal a constant value. Finitists do not accept this imagined existence.

    A second problem is that in order to convert 'infinitely many' terms into a constant like pi, the function would have to do an infinite amount of work.

    Thirdly, if it processes more and more terms it will still only have processed a finite number of terms. How can the function process the actual infinity of terms in pi to find its constant value?

    If we think of pi and the square root of 2 as functions that allow us to get as accurate a real-world measurement as we need, then we don’t have the infinity-related problems that we have if we try to think of them as constants. Sadly this approach does not fit with the mainstream Platonist position which is that perfect forms, like a perfect circle and a perfect diagonal of a unit square, MUST somehow exist.

    ReplyDelete