Thursday, 8 June 2017

0.999...: is nonstandard analysis the answer?

I previously blogged here about the perennial discussion on \(0.\dot{9}=1\), carefully avoiding nonstandard analysis. I also blogged here about nonstandard analysis, being equally careful not to mention \(0.\dot{9}\). Some consequent conversations have made it evident that a look at the interaction of these two might be of interest.

\(0.\dot{9}\) meets nonstandard analysis

It is tempting (so tempting) to think that although \(0.\dot{9}=1\) in normal analysis, things will be different in nonstandard analysis. After all, there are infinitesimals there, so presumably that's a place where it makes sense for \(0.\dot{9}\) to be infinitesimally less than \(1\). But of course, just as in the real analysis case, we have to decide what we mean by \(0.\dot{9}\).

Well, whatever it means in nonstandard analysis, it has to be a natural extension of what it means in standard analysis, where \[ 0.\dot{9}= \lim_{N\to \infty} \sum_{n=1}^N \frac{9}{10^n}. \]
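To spell out the standard calculation this rests on: the partial sums form a geometric series, so \[ \sum_{n=1}^N \frac{9}{10^n} = \frac{9}{10}\cdot\frac{1-10^{-N}}{1-10^{-1}} = 1-\frac{1}{10^N}, \] and since \(1/10^N \to 0\) as \(N\to\infty\), the limit is exactly \(1\). This identity will do a lot of work below.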

There are different ways of trying to transport this into nonstandard analysis, with different consequences. Here are the ones that come to my mind.

\(0.999\ldots\) is infinitesimally less than \(1\)

First, we might try to interpret that string of digits in a plausible way, forgetting about the 'infinite sum as a limit' semantics.

Then one might try to interpret \(0.\dot{9}\) via the definition of a nonstandard real as a sequence of real numbers, and a fairly natural choice of sequence is \(0.9,0.99,0.999,\ldots\), i.e. the sequence of partial sums for the limit. Since every term in the sequence is less than \(1\), the corresponding nonstandard real is less than \(1\); and since the terms are eventually bigger than any given real number less than \(1\), the shortfall must be infinitesimal: call it \(\epsilon\). So the sequence defines a nonstandard real, \(1-\epsilon\). In fact, this \(\epsilon\) is the infinitesimal defined by the sequence \(1/10,1/10^2,1/10^3,\ldots\).
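To make that concrete, in the usual sequence picture (where arithmetic and comparisons are carried out term by term, modulo a choice of ultrafilter), the difference is \[ (1,1,1,\ldots) - (0.9, 0.99, 0.999, \ldots) = \left(\tfrac{1}{10}, \tfrac{1}{10^2}, \tfrac{1}{10^3}, \ldots\right) = \epsilon, \] which is positive, because every term is, but smaller than any positive real \(r\), because \(1/10^i \lt r\) for all large enough \(i\).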

But there's a problem with this. Why choose this particular sequence? We could equally have used \(0.99,0.999,0.9999,\ldots\), which also defines a nonstandard real infinitesimally less than \(1\), but this time the result is \(1-\epsilon/10\); or \(0.99,0.9999,0.999999,\ldots\), which gives \(1-\epsilon^2\). There are a great many possible choices, all giving different nonstandard reals infinitesimally less than \(1\).
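Again working term by term, the first of these gives \[ (1,1,1,\ldots) - (0.99, 0.999, 0.9999, \ldots) = \left(\tfrac{1}{10^2}, \tfrac{1}{10^3}, \tfrac{1}{10^4}, \ldots\right) = \frac{\epsilon}{10}, \] while the second gives \(\left(1/10^2, 1/10^4, 1/10^6, \ldots\right)\), which is \(\epsilon^2\), since multiplication is also carried out term by term.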

This is not good: if \(0.\dot{9}\) is to be infinitesimally less than \(1\), it should be a well-defined nonstandard real number, and the infinitesimal should be unambiguously defined. This is a fail on both criteria. We need to try something else.

\(0.9\ldots9\) with an infinite number of \(9\)'s is infinitesimally less than \(1\)

Let \(N\) be a hyperfinite natural number (i.e. a natural number in the nonstandard reals that is greater than any \(n \in \mathbb{N}\)). Then the transfer principle tells us that just as for finite \(N\), \[ \sum_{n=1}^N \frac{9}{10^n} = 1- \frac{1}{10^N} \] and \(1/10^N\) is, of course, an infinitesimal.
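In slightly more detail (this is a sketch of the standard transfer argument, not anything special to this example): the identity just quoted can be written as a single first-order statement, \[ \forall N \in \mathbb{N}:\quad \sum_{n=1}^N \frac{9}{10^n} = 1 - \frac{1}{10^N}, \] and transfer says that the same statement holds with \(\mathbb{N}\) replaced by \({}^*\mathbb{N}\). And if \(N\) is hyperfinite then \(10^N \gt N \gt n\) for every finite \(n\), so \(1/10^N \lt 1/n\) for every finite \(n\): that is, \(1/10^N\) is infinitesimal.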

This looks a bit like the previous answer: the different choices of \(N\) correspond to different sequences of terms of the form \(0.9\ldots9\), and give different infinitesimals. But it does carry some more information. A bigger \(N\) corresponds to a sequence of natural numbers that grows faster, and so to a sequence of partial sums that converges to \(1\) faster: and this is what it means for the infinitesimal difference to be smaller. Nevertheless, we still don't obtain a well-defined infinitesimal that is the difference between \(0.\dot{9}\) and \(1\).

In fact, it's not just a bit like the previous answer: it is the previous answer, but expressed in a different language. Let's take a closer look.

If \(N\) is a hyperfinite natural number, then it corresponds to a divergent sequence \(n_1,n_2,n_3\ldots\) of natural numbers. This defines a new sequence \(p_1,p_2,p_3,\ldots\) where \(p_i\) is \(0.9\ldots 9\) with \(n_i\) nines. This sequence of real numbers, \(p_i\), determines the nonstandard real \(\sum_{n=1}^N 9/10^n\), and the sequence \(1/10^{n_i}\) determines the infinitesimal difference \(1/10^N\) between this number and \(1\).
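For instance, if \(N\) is the hyperfinite number determined by the sequence \(1,2,3,\ldots\), then \(p_i = 0.9, 0.99, 0.999,\ldots\) and \(1/10^N\) is the \(\epsilon\) from before, so we recover the first interpretation exactly; if \(N\) is determined by \(2,4,6,\ldots\) instead, we get \(1-\epsilon^2\).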

So each of these approaches gives us something infinitesimally less than \(1\), but neither is really satisfactory.

In fact, we get a more satisfactory situation by saying that

\(0.999\ldots = 1\) in nonstandard analysis

Perhaps the most mathematically natural interpretation of \(0.\dot{9}\) is simply to use the definition above: \[ 0.\dot{9}= \lim_{N\to \infty} \sum_{n=1}^N \frac{9}{10^n} \] but remembering that now \(N \in {}^*\mathbb{N}\), so that \(N\) can be larger than any finite integer.

We also have to remember what this means: given any \(\epsilon \gt 0\), there is some \(K\) with the property that \(\sum_{n=1}^N 9/10^n\) is within \(\epsilon\) of the limit whenever \(N \geq K\). If \(\epsilon\) is infinitesimal, then \(K\) is hyperfinite.
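To see why \(K\) has to be hyperfinite in that case: by the identity above, \(1 - \sum_{n=1}^N 9/10^n = 1/10^N\), so the condition becomes \[ \frac{1}{10^N} \leq \epsilon, \quad\text{i.e.}\quad N \geq \log_{10}(1/\epsilon), \] and if \(\epsilon\) is infinitesimal then \(1/\epsilon\), and hence \(\log_{10}(1/\epsilon)\), is bigger than every finite number, so any suitable \(K\) is hyperfinite.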

But then the transfer principle guarantees that the standard analysis argument still works, and the limit is \(1\), just as in standard analysis. This is clear and unambiguous, if a little disappointing.

There is another way of thinking about it that I'm going to consider, though, and it's the strangest one, and the hardest to understand (which is partly why I've kept it till last).

\(0.999\ldots\) isn't even a thing in nonstandard analysis

We might think of \(0.\dot{9}\) in a slightly different way: \[ 0.\dot{9} = \sum_{n \in \mathbb{N}} \frac{9}{10^n} \] forgetting (for the moment) that this is really just an evocative shorthand for \[ \lim_{N \to \infty} \sum_{n=1}^N\frac{9}{10^n} \] which is itself just a notation for 'the thing I can get arbitrarily close to by making \(N\) sufficiently large'.

Let's take the notation seriously, and think of it as representing the sum of a collection of numbers indexed by \(\mathbb{N}\).

There is something very strange here. It seems that summing over a finite number of terms gives an answer less than \(1\), summing over all natural numbers gives exactly \(1\), but then summing a hyperfinite number of terms gives an answer that is less than \(1\). How can adding more terms make the answer smaller again, even by an infinitesimal amount?

This is where the waters become relatively murky.

When we talk about a property of (a set of) real numbers in nonstandard analysis, what we're really doing is talking about a property of the representatives of (the set of) these real numbers in the nonstandard reals. Now, a single real number does have a natural nonstandard real that it corresponds to, and it's the obvious one. This works for any finite set of real numbers. But the representative of an infinite set of real numbers always acquires new elements. So the natural representative of the set of all natural numbers, \(\mathbb{N}\), is \({}^*\mathbb{N}\), which includes all the hyperfinite natural numbers too.

In fact there is no way to specify the set of finite natural numbers in \({}^*\mathbb{N}\) in the language of nonstandard analysis. It simply isn't a subset of \({}^*\mathbb{N}\), just as there is no set of all infinitesimals, or of all hyperfinite natural numbers.

Actually, 'simply' is something of a lie here: Sylvia Wenmackers (@SylviaFysica) called me on this, so here's my attempt to explain what I mean by it.

Again, we have a rather subtle idea to struggle with. There are collections which we can describe in the language of nonstandard analysis, which we call internal sets. These are the meaningful sets, since they are the only ones we can talk about when we are working within nonstandard analysis. The other sets, the ones which require the use of descriptions such as infinite or infinitesimal, are called external sets. We can only talk about them when we talk about nonstandard analysis, looking at it from outside. They don't correspond to anything meaningful inside nonstandard analysis. Depending on the way that you like to do your nonstandard analysis, you can think of internal sets (part of the system) and external sets (not part of the system); or you can think of sets and descriptions which don't describe sets at all.

The situation is a little like that of trying to talk about the collection of all sets in set theory. If you try to make that collection a set, then you get into all kinds of problems, so you are forced into the position that some collections are sets, and have all the properties we require of sets, but others, which are called proper classes, are not, and do not.

This is hard to come to terms with, but we really don't have any option. It's the price we have to pay for having access to infinitesimal and infinite quantities in our number system, if we want to retain the properties that make things work well, such as every non-empty set of natural numbers having a smallest member (there is no smallest hyperfinite natural number), or every non-empty set with an upper bound having a least upper bound (there is no least upper bound for the infinitesimals).
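As a sketch of the sort of argument lurking in those parentheses: if the collection of hyperfinite natural numbers were a genuine set of natural numbers in this sense, it would be non-empty and so would have a smallest member \(K\). But \(K-1\) is still bigger than every finite \(n\) (since \(K \geq n+1\)), so \(K-1\) is also hyperfinite and smaller than \(K\), which is a contradiction. The same trick shows that \(\mathbb{N}\) itself can't be specified either, since its complement in \({}^*\mathbb{N}\) would be exactly the collection of hyperfinite numbers.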

To summarize: the only objects which exist meaningfully in nonstandard analysis are the ones which can be described using the language we already have for real analysis: and that doesn't allow us to talk about infinitesimal or infinite quantities. If we work purely inside the nonstandard reals, we can't tell that some are infinitesimal and some are infinite, because we can't pick out the standard reals to compare them with. It's only when we look at the system from outside that we can tell that some nonstandard reals are smaller than all positive reals, and some nonstandard natural numbers are larger than all standard natural numbers.

So the answer to the question up above is likely to feel unsatisfactory: it is that the question isn't really a question at all, because it refers to things that don't make any sense in the context of nonstandard analysis.

So where does that leave us?

It leaves us with the conclusion that the situation for \(0.999\ldots\) is exactly the same in nonstandard analysis as it is in standard analysis. Either the string of \(9\)'s terminates, in which case the number is less than \(1\); or it does not terminate, in which case it represents a limit, and the corresponding number is exactly \(1\).

What we gain in the nonstandard interpretation is that the string might terminate after a hyperfinite number of terms, in which case the result is less than \(1\) by an infinitesimal. In this case we can think of the hyperfinite number of terms as specifying a particular rate at which the number of \(9\)'s increases, and the infinitesimal as telling us at what rate the sequence of numbers of the form \(0.9\ldots9\) approaches \(1\): the hyperfinite and infinitesimal quantities have an interpretation in terms of the standard reals in which they tell us about rates of convergence.
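For example (in the sequence picture again), if \(N\) is determined by \(1,2,3,\ldots\) then the infinitesimal \(1/10^N\) is the \(\epsilon\) from before, while the faster-growing choice \(1,4,9,16,\ldots\) gives the infinitesimal determined by \(1/10, 1/10^4, 1/10^9, \ldots\), which is smaller than \(\epsilon^k\) for every finite \(k\): the faster the number of \(9\)'s grows, the smaller the infinitesimal shortfall.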

It has been argued that one reason for the widespread conviction that \(0.\dot{9} \lt 1\) is that people are somehow (implicitly, inchoately, even unwittingly) thinking of \(0.\dot{9}\) as a terminating infinite, i.e. hyperfinite, sum, requiring the addition of an infinitesimal \(0.0\ldots01\) term (with a terminating infinite, i.e. hyperfinite, number of zeros) to give \(1\). I'm not entirely convinced by this: it seems more to me that they're taking the 'obvious' inductive limit of the behaviour of a finite terminating decimal expansion, and that this interpretation is more eisegesis than exegesis of the phenomenon. But I have encountered people trying to argue that the difference is a decimal expansion with infinitely many \(0\)'s followed by a \(1\), so the idea can't be entirely discounted.

After all that, is nonstandard analysis the answer?

No, nonstandard analysis isn't the answer. But it does give us a framework for thinking about the question in a fruitful way, and for asking (and answering) some interesting questions. Maybe that's even better.
