Subtraction: What is “the” Standard Algorithm?
One common complaint amongst anti-reform pundits is that progressive reform math advocates and the programs they create and/or teach from “hate” standard arithmetic algorithms and fail to teach them. While I have not found this to be the case in actual classrooms with real teachers where series such as EVERYDAY MATHEMATICS, INVESTIGATIONS IN NUMBER, DATA & SPACE, or MATH TRAILBLAZERS were being used (in fact, the so-called “standard” algorithms are ALWAYS taught and frequently given pride of place by teachers regardless of the program employed), the claim begs the question of how and why a given algorithm became “standard,” as well as how being “standard” automatically means “superior” or “the only one students should have the opportunity to learn or use.” It strikes me that such people are stuck in some pre-technological age in which we trained low-level white-collar office workers to be scribes, number-crunchers who summed and re-summed large columns of figures by hand. The absurdity of seeing kids today as needing to prepare for THAT sort of world is evident to anyone who spends any time in a modern office, including that of a small business. Desktop and handheld calculators are commonplace. So are desktop and laptop computers, not to mention tablets or smartphones in shirt pockets running Desmos, Wolfram|Alpha, etc. There is a need for people to understand basic mathematics, but not to be fast and expert number-crunchers in that 19th-century sense.
Thus, it seems reasonable to ask what should be an obvious question. The goal is to know what numbers to crunch and how to crunch them (what operations need be used), and, most importantly, to correctly interpret and make decisions based upon the results of the right calculations. If it is glaringly obvious that the actual number-crunching itself is done faster and more accurately by machines than the vast, vast majority of humans can reasonably expect to do it, why would any intelligent person be obsessing in 2017 over the SPEED of an algorithm for paper-and-pencil arithmetic? For the big argument raised for always (and exclusively) teaching one standard algorithm for each arithmetic operation seems to be speed and efficiency.
I have argued repeatedly that the efficiency issue is only reasonable if one fairly assesses it. And to do that is to grant that a student who misunderstands and botches ANY algorithm is unlikely to be performing “efficiently” with it. Compared with a student who accurately uses even a ludicrously slow algorithm (e.g., repeated addition in place of any other approach to multiplication), the student who can’t accurately make use of the fastest possible algorithm is going to take a long time to arrive at the right answer, which will be reached, if at all, only after many missteps and revisits to the same problem. For that student, at least, the “algorithm of choice” is not efficient at all. So finding one that the student understands and can use properly would by necessity be preferable. But not, apparently, in the minds of ideologues. For them, there’s one true way to do each sort of calculation, and they are its prophets.
Of course, I’m not favoring teaching alternate algorithms because I dislike any particular standard one or feel the need to “prove” that, say, lattice multiplication is “better” than the currently favored algorithm. On the contrary, I’m all for teaching the standard algorithm. But not alone and not mechanically, and not at the expense of student understanding. Indeed, from my perspective, it’s difficult to understand why it is necessary to mount a defense for alternative algorithms in general, though any particular one may be of questionable value and might need some justifying or explaining. If anything, it is those who hold that there is a single best algorithm that is the only one that deserves to be taught who need to make the case for such a narrow position. In my reading, I’ve yet to encounter a convincing argument, and indeed most people who hold that viewpoint seem to think it’s glaringly obvious that their anointed algorithms are both necessary and sufficient.
What compounds my outrage at the narrower viewpoint is the fact that it is based for the most part on woeful historical ignorance. Previously, I’ve addressed the question of the lattice multiplication method, which has come under attack from various anti-reform groups and individuals almost certainly because it has been re-introduced in some progressive elementary programs such as Everyday Math and Investigations in Number, Data, and Space. The arguments raised against it are very much in keeping with the above-mentioned concerns with speed and efficiency. Ostensibly, the algorithm is unwieldy for larger, multi-digit calculations. The fact is that it is just as easy to use (easier for those who prefer it and get it), and while it’s possible to use a vast amount of space to write out a problem, it’s not required that one do so, and the amount of paper used is a social, not a pedagogical issue. But please note that I said RE-introduced, and that was not a slip. The fact is that this algorithm was widely used for hundreds of years with no ill effects. Issues that strictly had to do with the ease of printing it in books with relatively primitive technology, and problems of readability when the printing quality was poor, NOT concerns with the actual carrying out of the algorithm, caused it to fall into disuse. Not a pedagogical issue at all, and with modern printing methods, completely irrelevant from any perspective. Yet the anti-reformers howl bloody murder when they see this method being taught. The only believable explanation for their outrage is politics. They simply find it politically unacceptable to teach ANY alternatives to their approved “standard” methods. And their ignorance of the historical basis for lattice multiplication, as well as their refusal to acknowledge that it is thoroughly and logically grounded in exactly the same processes that inform the current standard approach, suggests that bias and politics, not logic, are their motivation.
I raise all these questions because I had my attention drawn to a “non-standard” algorithm (actually two such algorithms and some related variations) for subtraction. Tad Watanabe, a professor of mathematics education whom I’ve known since the early 1990s, posted the following on a mathematics education discussion list:
Someone told me (while back) that the subtraction algorithm sometimes called “equal addition algorithm” was the commonly used algorithm in the US until about 50 years ago. Does anyone know if that is indeed the case, and if so, about when we shifted to the current conventional algorithm?
I couldn’t recall having heard of this method, and so I was eager to find out what he was talking about. Searching the web, I discovered an article that repaired my ignorance on the algorithm: “Subtraction in the United States: An Historical Perspective,” by Susan Ross and Mary Pratt-Cotter. This 2000 appearance in THE MATHEMATICS EDUCATOR was a reprint of the article that had originally appeared several years previously in the same journal. It draws upon a host of historical sources, the earliest of which is from 1819. And there are other articles available online, including Marilyn N. Suydam’s “Recent Research on Mathematics Instruction” in ERIC/SMEAC Mathematics Education Digest No. 2; and Peter McCarthy’s “Investigating Teaching and Learning of Subtractions That Involves Renaming Using Base Complement Additions.”
The Ross article makes clear that as far back as 1819, American textbooks taught the equal additions algorithm. To wit,
1. Place the less number under the greater, with units under units, tens under tens, etc.
2. Begin at the right hand and take the lower figure from the one above it and set down the difference.
3. If the figure in the lower line be greater than the one above it, take the lower figure from 10 and add the difference to the upper figure, which sum set down.
4. When the lower figure is taken from 10, there must be one added to the next lower figure.
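The four steps above translate almost directly into code. Here is a minimal sketch (function and variable names are mine, not the article’s), assuming a nonnegative difference:

```python
def equal_additions_subtract(minuend, subtrahend):
    """Subtract via the 1819 'equal additions' steps, right to left.

    Assumes 0 <= subtrahend <= minuend.
    """
    assert 0 <= subtrahend <= minuend
    digits = []
    add_to_lower = 0                      # step 4: the one added to the next lower figure
    while minuend > 0 or subtrahend > 0:
        upper = minuend % 10
        lower = subtrahend % 10 + add_to_lower
        if lower > upper:
            # step 3: take the lower figure from 10, add the difference to the upper
            digits.append((10 - lower) + upper)
            add_to_lower = 1              # step 4
        else:
            digits.append(upper - lower)  # step 2: set down the difference
            add_to_lower = 0
        minuend //= 10
        subtrahend //= 10
    return int("".join(map(str, reversed(digits))) or "0")
```

Note that nothing is ever “borrowed” from the minuend: instead, ten is added to the upper number in one column and one is added to the lower number in the next, which leaves the difference unchanged — hence the name “equal additions.”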
In fact, according to a 1938 article by J. T. Johnson, “The relative merits of three methods of subtraction: An experimental comparison of the decomposition method of subtraction with the equal additions method and the Austrian method,” equal additions as a way to do subtraction goes back at least to the 15th and 16th centuries. And while this approach, which was taught on a wide-scale basis in the United States prior to the late 1930s, works from right to left, as do all the standard arithmetic algorithms currently in use EXCEPT notably for long division (which may in part help account for student difficulties with that operation far more serious and frequent than are those associated with the other three basic operations), it can be done just as handily from left to right.
Consider the example of finding the difference between 6354 and 2978. Using the standard approach, we write the numbers in columns, units under units, and subtract from right to left, borrowing (decomposing a ten) from the next place whenever the lower digit exceeds the upper one; the result is 3376.
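For comparison with the equal additions method, here is a minimal sketch of that standard decomposition (“borrowing”) algorithm in code; the names are mine, and it again assumes a nonnegative difference:

```python
def decomposition_subtract(minuend, subtrahend):
    """The currently standard U.S. 'borrowing' algorithm, right to left.

    Assumes 0 <= subtrahend <= minuend.
    """
    assert 0 <= subtrahend <= minuend
    digits = []
    borrow = 0
    while minuend > 0 or subtrahend > 0:
        upper = minuend % 10 - borrow     # pay back any borrow from the previous column
        lower = subtrahend % 10
        if lower > upper:
            upper += 10                   # decompose a ten from the next place
            borrow = 1
        else:
            borrow = 0
        digits.append(upper - lower)
        minuend //= 10
        subtrahend //= 10
    return int("".join(map(str, reversed(digits))) or "0")
```

The two methods differ only in where the compensating one goes: decomposition takes it from the upper number’s next place, equal additions adds it to the lower number’s next place.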
Left-to-right subtraction?
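The articles note that subtraction can also be carried out left to right. One way to sketch this (my own rendering, not taken from the sources): write a tentative digit for each column, and whenever a later column needs a “borrow,” patch the digit already written down by one. Assuming a nonnegative difference:

```python
def subtract_left_to_right(minuend, subtrahend):
    """Left-to-right subtraction: write tentative digits, then patch
    the most recent digit(s) whenever a column requires a borrow.

    Assumes 0 <= subtrahend <= minuend.
    """
    assert 0 <= subtrahend <= minuend
    a = str(minuend)
    b = str(subtrahend).rjust(len(a), "0")   # pad so the columns line up
    digits = []
    for upper, lower in zip(map(int, a), map(int, b)):
        if lower > upper:
            # correct the digits already written: ripple back through
            # any zeros (they become 9s), then decrement the next digit
            i = len(digits) - 1
            while digits[i] == 0:
                digits[i] = 9
                i -= 1
            digits[i] -= 1
            digits.append(upper + 10 - lower)
        else:
            digits.append(upper - lower)
    return int("".join(map(str, digits)))
```

The trade-off is visible in the code: working left to right lets you produce the most significant digits first (useful for estimation), at the cost of occasionally going back to amend digits you have already written.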
I will not discuss or describe in detail the Austrian algorithm other than to say that it doesn’t feel “right” to me. That’s not saying it’s “wrong,” but rather that I can’t see it as one I would use. And here is one major difference between me and the reform-haters: that doesn’t mean I wouldn’t revisit it or wouldn’t show it to teachers, and perhaps if I saw a particular student or class for whom it might prove helpful, I’d teach it. My “taste” isn’t the issue, but rather keeping a large number of options available for my practice and for my students. I suppose that’s just not very “efficient” of me.
Finally, it bears noting that there are references in the above-mentioned articles to research on the use of these algorithms, and at least some reason to think that equal additions should be looked at again very seriously by mathematics teacher educators and K-5 teachers. If you read the historical treatment of subtraction algorithms in the US, you’ll likely note how much chance and arbitrariness there can be in how one particular algorithm comes into fashion while others fall into disuse. I see no firm evidence for the “superiority” of the current most commonly-taught algorithm, and there is clearly a history of its causing difficulties for particular students. Would the universe collapse if we were to teach both? Even more, would it collapse if we didn’t rush to teach it right away, but rather, as has been proposed by more than a few researchers and theorists on early mathematics education, let students play and invent their own algorithms first, before trying to steer them toward one or another of our own? Sadly, the anti-reformers amongst us, the activist educational conservatives who are constantly trying to narrow rather than open up K-12 education, believe that there’s always one best way to do everything. And not coincidentally, that way always turns out to be the one they learned as a child. That, more than anything, is why I think it reasonable to call the not-so-traditional math that they push on everyone “nostalgia math.” It’s not that what they learned is better. It’s just what they learned back in simpler times when life was easy and there were no Math Wars and no one like me to suggest that their emperor is stark naked.