I've been marking a lot of (first year undergraduate) coursework lately, and I've been seeing a fairly significant amount of work exhibiting the same problem. It's definitely a minority of scripts, and I'm not sure whether the problem is getting more common or I'm just getting more sensitive to it, but I find it worrying.

I'd very much like to hear from people teaching at A-level or undergraduate level whether they are finding similar issues, and if so, what remedies they find effective.

Here's an example of the type of working I mean for solving an equation:
\begin{gather}
x = \frac{2}{3-x} \nonumber \\
(3-x)x = 2 \nonumber\\
3x-x^2 = 2 \label{nolog} \\
x^2-3x+2 =0 \nonumber \\
(x-1)(x-2)=0 \nonumber
\end{gather}
therefore \(x=1\) or \(x=2\).

Consequently, I've been spending a lot of time giving feedback pointing out that a collection of equations isn't an argument, it's just a collection of equations. To make an argument (or solve a problem) you have to explain what the assertions have to do with each other. Sometimes one line is equivalent to the next, sometimes one line implies the next, and sometimes a line is implied by the next. Which of these happens really matters, because if you want to prove the last equation you need the implications to proceed down the page, but if you want to find the solution to an equation you need the implications to proceed upward.
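For instance, the working in the first example becomes an argument once the connections are made explicit. Since every step there is reversible, equivalence signs do the job (this is just a sketch of the sort of annotation I have in mind):

```latex
\begin{align*}
x = \frac{2}{3-x}
&\iff (3-x)x = 2 && \text{(multiplying by $3-x$; note $x=3$ satisfies neither equation)}\\
&\iff 3x - x^2 = 2 \\
&\iff x^2 - 3x + 2 = 0 \\
&\iff (x-1)(x-2) = 0 \\
&\iff x = 1 \text{ or } x = 2.
\end{align*}
```

With the equivalences in place, the final line really does tell us the complete solution set of the original equation.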

So, in addition to pointing out the requirement to explain the logical connections on each script that required it, I also pointed out to the class that they would never see a mathematics book provide a sequence of equations without any indication of the logical connections: if they didn't believe me, they should just take out any of their current textbooks, or an A-level text from their previous year and look. There would always be an implication sign, or an equivalence, or a word like "hence" or "therefore" or possibly (depending on the nature of the sequence) "because".

Then I made an unfortunate discovery, by the simple expedient of looking in an A-level textbook. It didn't take me long to find sample calculations of type \eqref{nolog}. In the textbook, these sequences were always of equations which were each in fact equivalent to the next, but this wasn't said explicitly. Such examples must contribute to (a minority of) students not realizing it is important to explain the logical connections.

Of course, things are easily fixable in the case of \eqref{nolog}, since all the equations are in fact equivalent. But the consequence is that we then get things like:
\begin{equation*}
\begin{split}
|x+1| &= 2x \\
(x+1)^2 &= 4x^2 \\
x^2+2x+1 &= 4x^2 \\
3x^2-2x-1 &= 0\\
(3x+1)(x-1)&=0
\end{split}
\end{equation*}
therefore \(x=-1/3\) or \(x=1\). The problem is the squaring step, where the first equation implies the second but not vice versa: a spurious solution is introduced. Alas, it gets worse.
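Since squaring gives only an implication, the candidates have to be checked against the original equation, and only one survives:

```latex
\begin{align*}
x = -\tfrac{1}{3}: &\quad |x+1| = \tfrac{2}{3} \neq -\tfrac{2}{3} = 2x && \text{(spurious)}\\
x = 1: &\quad |x+1| = 2 = 2x && \text{(genuine)}
\end{align*}
```

The downward implications guarantee that any solution appears in the final list; they do not guarantee that everything in the final list is a solution.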

The real problem of not being clear about the implications shows up not so much when an equation is being solved as when a result is being proved. For example, on asking for a proof that \(\sec^2(x)-1= \tan^2(x)\) one is likely to see something like:
\begin{equation}
\begin{split}
& \sec^2(x)-1 = \tan^2(x) \\
\Rightarrow \qquad & 1-\cos^2(x) = \sin^2(x) \\
\Rightarrow \qquad & 1 =\cos^2(x)+\sin^2(x)\\
\Rightarrow \qquad & 1=1 \qquad \checkmark
\label{duff}
\end{split}
\end{equation}
Each implication is correct, but the argument presented is that IF the required result is true, THEN \(1=1\). Unfortunately, this does nothing to prove that the required result is true.
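One fix is to run the argument in the safe direction, starting from the Pythagorean identity (alternatively, one can observe that every step in the chain above is reversible and write equivalences throughout). A sketch:

```latex
\begin{align*}
\cos^2(x) + \sin^2(x) &= 1 \\
\Rightarrow \qquad 1 + \frac{\sin^2(x)}{\cos^2(x)} &= \frac{1}{\cos^2(x)} && \text{(dividing by } \cos^2(x) \neq 0\text{)}\\
\Rightarrow \qquad 1 + \tan^2(x) &= \sec^2(x) \\
\Rightarrow \qquad \sec^2(x) - 1 &= \tan^2(x).
\end{align*}
```

Now the implications flow from what is known to what is required, and the last line is actually proved.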

This time out it showed up most obviously in a proof by induction; to prove that \(P(n)\) held for all integers \(n \geq 0\), just about everybody started off by proving \(P(0)\). But then a bunch of them continued by assuming \(P(k)\) was true for some arbitrary \(k \geq 0\), writing down \(P(k+1)\), and deducing \(P(k)\) from it: precisely the wrong way round, since it is \(P(k+1)\) that needs to be established.
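For the record, the shape the inductive step should take (with \(P(n)\) a placeholder for whatever statement is being proved) is:

```latex
\begin{align*}
&\text{Base case:} && P(0) \text{ holds.}\\
&\text{Inductive step:} && \text{assume } P(k) \text{ for some } k \geq 0 \text{, and show } P(k) \Rightarrow P(k+1).\\
&\text{Conclusion:} && P(n) \text{ holds for all integers } n \geq 0.
\end{align*}
```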

Consequently, I've been trying to stress the importance of the logical flow from what is known to what is required, rather than vice versa.

I've also taken to showing the following "proof" as (I fondly hope) an example of why it is a mistake to present "proofs" like \eqref{duff}:
\begin{equation}
\begin{split}
& 1 = 2 \\
\Rightarrow \qquad & 0 \times 1 = 0 \times 2 \\
\Rightarrow \qquad & 0=0 \qquad \checkmark
\label{duffer}
\end{split}
\end{equation}
I'm assured by everybody I show it to that \eqref{duffer} is obviously wrong (which is something of a relief). I guess I'll find out how successful I've been when the exam scripts come in...
