I Don’t Get It vs. I Don’t Buy It

I was having a conversation a few weeks ago with a computer programmer and math enthusiast whom I’ll call Dorian. He was arguing very passionately that talking about a square root of -1 was the wrong way to introduce complex numbers. He recounted this moment in his own schooling: 16-year-old Dorian, told by his teacher “we introduce a new number i whose square is -1…,” asking, “but I can prove that the square of any number is positive, what about that?!” His teacher wasn’t able to satisfy his objection and made him feel that it wasn’t valid. He left the experience angry and frustrated, feeling that his question had been treated as a failure to understand.

Dorian later learned that complex numbers can be visualized as a plane containing the real line; that addition of points in this plane is just vector addition; and that multiplication is done by multiplying the distances from the origin and adding the angles from the positive real axis (see here for a brief explanation if desired). Here was a concrete model for the complex numbers, with concrete geometrical interpretations of the operations + and \times. And it was clear to him that in this model, there is a point, in fact two points, whose squares correspond to the point -1 on the real axis. But philosophically, this fact is a consequence of the concrete geometrical description of the operations in the plane, rather than an ontologically dubious starting point for the whole project.
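
To make the geometric rule concrete, here is a minimal sketch in Python (my own illustration, not Dorian’s notation or anyone’s official presentation) of “multiply the distances, add the angles,” checked against ordinary complex arithmetic:

import math
import cmath

# Represent a point by (distance from the origin, angle from the positive real axis).
def multiply(p, q):
    return (p[0] * q[0], p[1] + q[1])

# The point one unit from the origin, a quarter turn counterclockwise from the positive real axis:
i_point = (1.0, math.pi / 2)

square = multiply(i_point, i_point)
print(square)               # (1.0, pi): one unit out, pointing along the negative real axis
print(cmath.rect(*square))  # approximately (-1+0j), i.e. the point -1 on the real line

The other point whose square lands on -1 is the one a quarter turn clockwise, (1, -\pi/2).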

Dorian concluded that actually this model, via the geometry of addition and multiplication in the complex plane, is a pedagogically superior introduction to the complex numbers. His argument is that it presents no ontological quandary. Nobody will object to a plane. Nobody will object, at least on philosophical grounds, to these new definitions of + and \times, as long as you can prove they have nice properties and coincide with the old definitions on the real line. You’re not saying anything so wildly speculative as “postulate a square root of -1…”

I am not writing this post to get into the question of whether Dorian is right about this. I see lots to say on both sides. What I am writing this to say is that there is a lesson in Dorian’s story much deeper than the question of how to introduce the complex numbers. That is not the real question here as far as I am concerned.

The real question is this: when you’ve picked your approach and gone with it, how will you deal with the students it doesn’t work for?

Now you can always obsess about how to introduce a topic, and I believe there is basically always value in thinking and talking about the pedagogical consequences of different ways of looking at things. And I think some models for ideas are legitimately better than others. But no model will speak to every student. This point is so important, and was so lost on me as a young teacher, and is lost on so many (especially young) teachers I have spoken with, excited as they are about the way they have thought of to present negative numbers or whatever, as though miraculously everyone in the room will get it this time, that I need to repeat it:

There is no model that is the right model for each and every student, each and every time.

No matter how awesome your idea for how to think about XYZ concept is, there will be somebody in your class who will have no idea what you are talking about. To me, the big question here is, what are you going to do about it?

More specifically, how are you going to treat their thinking?

Now, I like to think that nobody reading this blog would be so callous as to intentionally make a student feel stupid for asking an honest question. But there are far subtler ways to do it. The one I most want to warn you against is the sin I know I’m guilty of: being so wrapped up in the awesomeness of your presentation that the kid who doesn’t get it does not compute to you. You say whatever you say out loud but in your mind you’re like, “wait – you don’t understand? Huh?” Or, you’re like, “oh my goodness can’t you just see it as I do?”

Regardless of what you say out loud, having such a response in the back of your mind invalidates whatever obstacle the student is facing. I want to suggest an alternative:

Take the case that any earnest failure of a student to see your point of view is actually coming from a legitimate mathematical objection.

This is how you treat dissatisfaction with honor.

I don’t care what the kid’s IEP says. Mathematical convention does not require us to check somebody’s Wechsler results before they are allowed to raise an objection. If they don’t buy it, they don’t buy it. Now it’s your turn to understand their objection and answer it.

“I don’t get it.” “I don’t buy it.”

A student I’ll call Manny, whom I had in my 2003-4 AP Calculus class, came to me around March and said something like, “this entire class is based on a paradox.” He objected to my (retrospectively totally hand-wavy) discussion of limits. It never gets there, so how can you talk about what happens if it were to get there?

I tried to answer Manny’s objections; I spent some time with him on it; but he left the conversation unsatisfied. Retrospectively it is clear to me that this is because (a) I didn’t get what the problem was, and (b) to my shame I didn’t consider the possibility that there was really much to it. Then, less than a year later, I read The Calculus Gallery, whereupon I learned that actually Manny’s objection was more or less exactly Bishop Berkeley’s famous objection that in due time forced mathematicians to invent real analysis. For a sense of the importance of this development, let me mention that I have read, though I don’t recall where right now, that the development of real analysis was really the event that led to the birth of modern mathematical rigor.

So, yes, I am on record as having treated as essentially invalid an objection that actually led to the creation of modern rigor. Don’t let that be you.

If they don’t get it, take the case that there’s a legitimate mathematical objection behind that. Treat their “I don’t get it” as “I don’t buy it.” Now getting them to buy it is your job.

Good Brawls and Honoring Kids’ Dissatisfaction

I was just reading some old correspondence with a friend J who periodically writes me regarding a math question he and his son are pondering together. The exchange was pretty juicy, about how many ways an even number can be decomposed as a sum of primes. But actually, the juiciest thing we got into was this:

Is 1 a prime number?

It was kind of a fight! Since I and Wikipedia agreed on this point (it’s not prime), J acknowledged we must know something he didn’t. But regardless, he kind of wasn’t having it.

Point 1: This is awesome.

Nothing could be better mathematician training than a fight about math. Proofs are called “arguments” for a reason.

When I went to Bob and Ellen Kaplan’s math circle training in 2009, I was heading to do a practice math circle with some high schoolers and Bob asked me, “what question are you opening with?” I said, “does .9999…=1?” He smiled with knowing anticipation and said, “oooh, that one always starts a brawl.”

Well, it wasn’t quite the bloodbath Bob led me to expect, but the kids were totally divided. One kid knew the “proof” where you go

0.999...=x

Multiplying by 10,

9.999...=10x

Subtracting,

9 = 9x

so x=1

and the other kids had that same sort of feeling like, “he knows something we don’t know,” but they weren’t convinced, and with only a minimal amount of coaxing, they weren’t shy about it. The resulting conversation was the stuff of real growth: everybody in the room was contending with, and thereby pushing, the limits of their understanding. Even the boy who “knew the right answer” began to realize he didn’t have the whole story, as he found himself struggling to be articulate in the face of his classmates’ doubt.

Now this could have gone a completely different way. It’s common for “0.999… = 1” to be treated as a fact and the above as a proof. Similarly, since the Wikipedia entry on prime numbers says, “… a natural number greater than 1 that has no positive divisors…,” we could just leave it at that.

But in both situations, this would be to dishonor everyone’s dissatisfaction. It is so vital that we honor it. Everybody, school-aged through grown-up, is constantly walking away from math thinking “I don’t get it.” This is a useless perspective. Never let them say they don’t get it. What they should be thinking is that they don’t buy it.

And they shouldn’t! If it wasn’t already clear that I think the above “proof” that 0.999…=1 is bullsh*t, let me make it clear. I think that argument, presented as proof, is dishonest.

I mean, if you understand real analysis, I have no beef with it. But at the level where this conversation is usually happening, this is not a proof, are you kidding me?? THE LEFT SIDE IS AN INFINITE SERIES. That means to make this argument sound, you have to deal with everything that is involved with understanding infinite series! But you just kinda slipped that in the back door, and nobody said anything because they are not used to honoring their dissatisfaction. As I have pointed out in the past, if you ignore all the series convergence issues, the exact same argument proves that …999.0=-1:

...999.0=x

Dividing by 10,

...999.9=0.1x

Subtracting,

-.9 = .9x

so x=-1

If you smell a rat, good! My point is that that same rat is smelling up the other proof too. We need to have some respect for kids’ minds when they look at you funny when you tell them 0.999…=1. They should be looking at you funny!
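
Here is a small numerical illustration of the difference the convergence issue makes (my own toy computation, not part of either argument): the partial sums of 0.9 + 0.09 + 0.009 + … settle down toward 1, while the partial sums of 9 + 90 + 900 + … blow up, so the “multiply and subtract” move is only being applied to an actual number in the first case.

from fractions import Fraction

right_sum = Fraction(0)  # partial sums of 0.999...
left_sum = 0             # partial sums of ...999.0

for k in range(1, 8):
    right_sum += Fraction(9, 10**k)  # 0.9, 0.99, 0.999, ...
    left_sum += 9 * 10**(k - 1)      # 9, 99, 999, ...
    print(k, float(right_sum), left_sum)

# right_sum creeps up toward 1; left_sum grows without bound.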

Same thing with why 1 is not a prime. If a student feels like 1 should be prime, that deserves some frickin respect! Because they are behaving like a mathematician! Definitions don’t get dropped down from the sky; they take their form by mathematicians arguing about them. And they get tweaked as our understanding evolves. People were still arguing about whether 1 was prime as late as the 19th century. Today, no number theorist thinks 1 is prime; however, in the 20th century we discovered a connection between primes and valuations, which has led to the idea in algebraic number theory that in addition to the ordinary primes there is an “infinite” prime, corresponding to the ordinary absolute value just as each ordinary prime corresponds to a p-adic absolute value. Now for goodness sakes, I hope you don’t buy this! With study, I have gained some sense of the utility of the idea, but I’m not entirely sold myself.

To summarize, point 2: Change “I don’t get it” to “I don’t buy it”.

Now I think this change is a good idea for everyone learning mathematics, at any level but especially in school, and I think we should teach kids to change their thinking in this way regardless of what they’re working on. But there is something special to me about these two questions (is 0.999…=1? Is 1 prime?) that bring this idea to the foreground. They’re like custom-made to start a fight. If you raise these questions with students and you are intellectually honest with them and encourage them to be honest with you, you are guaranteed to find that many of them will not buy the “right answers.” What is special about these questions?

I think it’s that the “right answers” are determined by considerations that are coming from parts of math way beyond the level where the conversation is happening. As noted above, the “full story” on 0.999…=1, in fact, the full story on the left side even having meaning, involves real analysis. We tend to slip infinite decimals sideways into the grade-school/middle-school curriculum without comment, kind of like, “oh, you know, kids, 0.3333…. is just like 0.3 or 0.33 but with more 3’s!” Students are uncomfortable with this, but we just squoosh their discomfort by ignoring it and acting perfectly comfortable ourselves, and eventually they get used to the idea and forget that they were ever uncomfortable.

Meanwhile, the full story on whether 1 is prime involves the full story on what a prime is. As above, that’s a story that even at the level of PhD study I don’t feel I fully have yet. The more I learn the more convinced I am that it would be wrong to say 1 is prime; but the learning is the point. If you tell them “a prime is a number whose only divisors are 1 and itself,” well, then, 1 is prime! Changing the definition to “exactly 2 factors” can feel like a contrivance to kick out 1 unfairly. It’s not until you get into heavier stuff (e.g. if 1 is prime, then prime factorizations aren’t unique: 6 is 2 \times 3, but also 1 \times 2 \times 3, and 1 \times 1 \times 2 \times 3, and so on) that it begins to feel wrong to lump 1 in with the others.

I highlight this because it means that trying to wrap up these questions with pat answers, like the phony proof above that 0.999…=1, is dishonest. Serious questions are being swept under the rug. The flip side is that really honoring students’ dissatisfaction is a way into this heavier stuff! It’s a win-win. I would love to have a big catalogue of questions like these: 3- to 6-word questions you could pose at the K-8 level but you still feel like you’re learning something about in grad school. Got any more for me?

All this puts me in mind of a beautiful 15-minute digression I witnessed about 2 years ago in the middle of Jesse Johnson’s class regarding the question is zero even or odd? It wasn’t on the lesson plan, but when it came up, Jesse gave it the full floor, and let me tell you it was gorgeous. A lot of kids wanted the answer to be that 0 is neither even nor odd; but a handful of kids, led by a particularly intrepid, diminutive boy, grew convinced that it is even. Watching him struggle to form his thoughts into an articulate point for others, and watching them contend with those thoughts, was like watching brains grow bigger visibly in real time.

Honor your dissatisfaction. Honor their dissatisfaction. Math was made for an honest fight.

p.s. Obliquely relevant: Teach the Controversy (Dan Meyer)

Over the Course of an Instant…

As you may recall, I’m teaching analysis to this class of teachers, developing the \epsilon\delta limit. Two weeks ago I bewildered everybody. Last week and this week, I set out to bewilder everyone even further.

Let me say what I’m going for here. The \epsilon\delta limit is a notoriously difficult definition.[1] How to scaffold my class to handle this difficulty? I am banking on the following strategy: make them need the definition. Make them unsatisfied with anything less. Continue poking holes in their current understanding, continue showing them inconsistencies between what they believe and the language they have to describe it, till they have no choice but to try to build something new. Then, let them try to build it. If they build the very thing I’m going for, rejoice. If they build something equally precise and powerful, rejoice. If they cannot build either (the most likely outcome, since the “right answer” took the world mathematical community 150 years to come up with), then it will still make powerful sense to them because it satisfactorily answers a question they were already engaged in trying to answer. That’s the plan anyway.

I will leave you with the two problem sets from the last class, and the readings and presentation from this one. I am very proud of the presentation. After that, I’ll write down one new thought for where to take this.

We engaged people’s attempts to define infinite decimals from the previous class, then abruptly shifted topics:

I let them work long enough so everyone got to do the first section of problems. My goals were:

1) Make participants recognize that they believe the speed of a moving object is something that exists in a particular moment of time.
2) Make them recognize that their naive definition of speed (distance / time) doesn’t actually handle this case.
3) Make them realize that we thus have a definitional problem similar to the one we had with repeating decimals.

We got this far. Then, with just 7 or so minutes left, I gave them another problem set[2]:

This problem set was designed to get somebody who has never studied calculus basically to take a simple derivative, to bring them into the conversation, and to refresh everyone else’s memory about the basic idea of derivatives. The last problem was on there just so that the calculus folks had a challenge available if they wanted it. Anyway, I had people finish the “Algebra Calisthenics” and “Speed” sections for homework.

This class, we began by engaging this homework, getting a feel for the standard calculus computation in which you identify the speed of an object in a moment as the value toward which average speeds seem to be headed as you look at smaller and smaller intervals. Then we began to press on what this really means.
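
The problem sets themselves aren’t reproduced in this post, so here is a stand-in in Python (my own example, not the actual worksheet): a dropped object whose position after t seconds is s(t) = 16t^2 feet, and the question of its speed at the moment t = 2.

def s(t):
    return 16 * t ** 2  # position in feet after t seconds (stand-in example)

def average_speed(t0, t1):
    return (s(t1) - s(t0)) / (t1 - t0)

for h in [1, 0.1, 0.01, 0.001]:
    print(h, average_speed(2, 2 + h))

# Roughly 80, 65.6, 64.16, 64.016, ... -- the averages seem headed toward 64 ft/s,
# but no interval, however small, *is* the moment t = 2.

That gap between “seems headed toward 64” and “is 64” is exactly what we began pressing on.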

I handed out a xerox of the scholium from the end of the first section of Book 1 of Newton’s Principia. (The last page of this pdf.) This is where Newton tries to explain what the hell he’s even talking about. I directed their attention to this telling sentence:

And in like manner, by the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities, not before they vanish, nor afterwards, but with which they vanish.

Then, I showed them the following presentation. Wanting to share this with you is the real reason for this blog post. I had a lot of fun making it.

What’sCalculusReallyDoing (as pdf)

What’sCalculusReallyDoing (as powerpoint)

Then I passed out a choice excerpt from the awesome criticism of early calculus by Bishop George Berkeley. (Specifically, section XIV.)

I asked for the connection between the definitional problem we have here and the definitional problem we had 2 classes ago regarding infinite decimals. (“They both involve getting closer and closer to something but never getting there.”) Then I asked them to try to come up with definitions to address these problems.

This is such a non-sequitur but here’s my one additional thought. I’ve been thinking about how to push participants to recognize a definition as unsatisfying. Tonight, reading Judith Grabiner’s 1983 essay in the AMM about Cauchy and the origins of the \epsilon\delta limit (here it is as a pdf), I had an idea that is totally new to me. Retrospectively I think it’s sort of obvious, but I totally never thought of it before:

To get people to recognize that a definition is mathematically inadequate, have them try to use the definition, for example to prove something! In my case, all of them think that 1/3 = 0.333… Great. So, if we have a candidate definition of the meaning of limits or convergence, can we use it to prove 1/3 = 0.333…? If not, maybe we need a better definition.

(I had this idea when I read Grabiner’s statement that though Cauchy gave the definition of the limit purely verbally and a bit vaguely, he translated it into the more rigorous language of inequalities when he actually started using it to prove theorems.)
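
To make the 1/3 = 0.333… target concrete, here is roughly the computation I would hope a workable definition could support (my sketch, reading the right-hand side as the limit of its partial sums): write s_n = 0.33\dots3 with n threes. Then

\frac{1}{3} - s_n = \frac{1}{3\cdot 10^n}

which can be made smaller than any given \epsilon > 0 by taking n large enough, so the partial sums converge to 1/3. A candidate definition that can’t express some version of this argument probably isn’t finished yet.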

[1] This is for at least 2 distinct (though related) reasons: first of all, it’s got three nested quantifiers. “For all \epsilon>0, there exists a \delta>0, such that for all x satisfying …” That just makes it inherently confusing. Secondly, it does not in any way psychologically resemble the intuitive image it is intended to capture. This is the definition of the limit. When I think of limits I have these beautiful visual images of little points getting closer to something. When I try to identify a limit, I just imagine the thing that they’re getting closer to. That’s the whole story. When I try to get rigorous, I replace this beautiful and simple image with three nested quantifiers. Yuck.
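
(For reference, the standard form I have in mind: \lim_{x\to a} f(x) = L means that for every \epsilon>0 there exists a \delta>0 such that for all x satisfying 0<|x-a|<\delta, we have |f(x)-L|<\epsilon.)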

[2] You will notice some interconnections in the sequence of problems. After a few good experiences with this last year and then hearing how much fun everyone had at PCMI, I am beginning to feel like these sequences of densely but subtly interconnected problems are really, really awesome. Constructing them is a deep art and I am a tiny apprentice. But you can get started humbly and still see payoff: it was certainly a cool moment today in class when we went over these problems and a number of folks who had worked out Speed problems #1-3 “the long way” realized that they could have applied their answer to Algebra Calisthenics #2 to do these three problems in moments in their heads.

The History of Algebra, part II: Unsophisticated vs. Sophisticated Tools

Math ed bloggers love Star Wars. This post is extremely long, and involves a fair amount of math, so in the hopes of keeping you reading, I promise a Star Wars reference toward the end. Also, you can still get the point if you skip the math, though that would be sad.

The historical research project I gave myself this spring in order to prep my group theory class (which is over now – why am I still at it?) has had me working slowly through two more watershed documents in the history of math:

Disquisitiones Arithmeticae
by Carl Friedrich Gauss
(in particular, “Section VII: Equations Defining Sections of a Circle”)

and

Mémoire sur les conditions de résolubilité des équations par radicaux
by Evariste Galois

I’m not done with either, but already I’ve been struck with something I wanted to share. Mainly it’s just some cool math, but there’s a pedagogically relevant idea in here too –

Take-home lesson: The first time a problem is solved, the solution uses only simple, pre-existing ideas. The arguments and solution methods are ugly and specific. Only later do new, more difficult ideas get applied, which allow the arguments and solution methods to become elegant and general.

The ugliness and specificity of the arguments and solution methods, and the desire to clean them up and generalize them, are thus a natural motivation for the new ideas.

This is just one historical object lesson in why “build the machinery, then apply it” is a pedagogically unnatural order. Professors delight in using the heavy artillery of modern math to give three-sentence proofs of theorems once considered difficult. (I’ve recently taken courses in algebra, topology, and complex analysis, with three different professors, and deep into each course, the professor gleefully showcased the power of the tools we’d developed by tossing off a quick proof of the fundamental theorem of algebra.) Now, this is a very fun thing to do. But if the goal is to make math accessible, then this is not the natural order.

The natural order is to try to answer a question first. Maybe we answer it, maybe we don’t. But the desire for and the development of the new machinery come most naturally from direct, hands-on experience with the limitations of the old machinery. And that means using it to try to answer questions.

I’m not saying anything new here. But I just want to show you a really striking example from Gauss. (Didn’t you always want to see some original Gauss? No? Okay, well…)

* * * * *

I am reading a 1966 translation of the Disquisitiones by Arthur A. Clarke which I have from the library. An original Latin copy is online here. I don’t read Latin but maybe you do.

I’m focusing on the last section in the book, but at one point Gauss makes use of a result he proved much earlier:

Article 42. If the coefficients A, B, C, \dots, N; a, b, c, \dots, n of two functions of the form

P=x^m+Ax^{m-1}+Bx^{m-2}+Cx^{m-3}+\dots+N

Q=x^{\mu}+ax^{\mu-1}+bx^{\mu-2}+cx^{\mu-3}+\dots+n

are all rational and not all integers, and if the product of P and Q is

x^{m+\mu}+\mathfrak{A}x^{m+\mu-1}+\mathfrak{B}x^{m+\mu-2}+etc.+\mathfrak{Z}

then not all the coefficients \mathfrak{A}, \mathfrak{B}, \dots, \mathfrak{Z} can be integers.

Note that even the statement of Gauss’ proposition here would be cleaned up by modern language. Gauss doesn’t even have the word “polynomial.” The word “monic” (i.e., leading coefficient 1) would also have been handy. In modern language he could have said, “The product of two rational monic polynomials is not an integer polynomial if any of their coefficients are not integers.”
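
A tiny instance of the statement, with my own numbers: take P = x + 1/2 and Q = x + 1/2, which are monic with rational, not-all-integer coefficients. Their product is

PQ = x^2 + x + \frac{1}{4}

which indeed fails to have all integer coefficients, just as Article 42 promises.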

But this is not the most dramatic difference between Gauss’ statement (and proof – just give me a sec) and the “modern version.” On page 400 of Michael Artin’s Algebra textbook (which I can’t stop talking about only because it is where I learned like everything I know), we find:

(3.3) Theorem. Gauss’s Lemma: A product of primitive polynomials in \mathbb{Z}[x] is primitive.

The sense in which this lemma is Gauss’s is precisely the sense in which it is really talking about the contents of Article 42 from Disquisitiones which I quoted above.

Huh?

First of all, what’s \mathbb{Z}[x]? Secondly, what’s a primitive polynomial? Third and most important, what does this have to do with the above? Clearly they both have something to do with multiplying polynomials, but…

Okay. \mathbb{Z}[x] is just the name for the set of polynomials with integer coefficients. (Apologies to those of you who know this already.) So a polynomial in \mathbb{Z}[x] is really just a polynomial with integer coefficients. This notation was developed long after Gauss.

More substantively, a “primitive polynomial” is an integer polynomial whose coefficients have gcd equal to 1. I.e. a polynomial from which you can’t factor out a nontrivial integer factor. E.g. 4x^2+4x+1 is primitive, but 4x^2+4x+2 is not because you can take out a 2. This idea is from after Gauss as well.

So, “Gauss’s Lemma” is saying that if you multiply two polynomials, the coefficients of each of which have no common factor, you will not get a common factor among all the coefficients of the product.

What does this have to do with the result Gauss actually stated?

That’s an exercise for you, if you feel like it. (Me too actually. I feel confident that the result Artin states has Gauss’s actual result as a consequence; less sure of the converse. What do you think?) (Hint, if you want: take Gauss’s monic, rational polynomials and clear fractions by multiplying each by the lcm of the denominators of its coefficients. In this way replace his original polynomials with integer polynomials. Will they be primitive?)

Meanwhile, what I really wanted to show you are the two proofs. Original proof: ugly, long, specific, but containing only elementary ideas. Modern proof: cute, elegant, general, but involving more advanced ideas.

Here is a very close paraphrase of Gauss’ original proof of his original claim. Remember, P and Q are monic polynomials with rational coefficients, not all of which are integers, and the goal is to prove that PQ‘s coefficients are not all integers.

Demonstration. Put all the coefficients of P and Q in lowest terms. At least one coefficient is a noninteger; say without loss of generality that it is in P. (If not, just switch the roles of P and Q.) This coefficient is a fraction with a denominator divisible by some prime, say p. Among all the terms in P, find the one whose coefficient’s denominator is divisible by the highest power of p. If there is more than one such term, pick the one with the highest degree. Call it Gx^g, and let the highest power of p that divides the denominator of G be p^t. (t \geq 1 since p was chosen to divide the denominator of some coefficient in P at least once.) The key fact about the choice of Gx^g is, in Gauss’s words, that its “denominator involves higher powers of p than the denominators of all fractional coefficients that precede it, and no lower powers than the denominators of all succeeding fractional coefficients.”

Gauss now divides Q by p to guarantee that at least one term in it (at the very least, the leading term) has a fractional coefficient with a denominator divisible by p, so that he can play the same game and choose the term \Gamma x^{\gamma} of Q/p with \Gamma having a denominator divisible by p more times than any preceding fractional coefficient and at least as many times as each subsequent coefficient. Let the highest power of p dividing the denominator of \Gamma be p^{\tau}. (Having divided the whole of Q by p guarantees that \tau \geq 1, just like t.)

I’ll quote Gauss word-for-word for the next step:

“Let those terms in P which precede Gx^g be 'Gx^{g+1}, ''Gx^{g+2}, etc. and those which follow be G'x^{g-1}, G''x^{g-2}, etc.; in like manner the terms which precede \Gamma x^{\gamma} will be '\Gamma x^{\gamma+1}, ''\Gamma x^{\gamma+2}, etc. and the terms which follow will be \Gamma'x^{\gamma-1}, \Gamma''x^{\gamma-2}, etc. It is clear that in the product of P and Q/p the coefficient of the term x^{g+\gamma} will

= G\Gamma + 'G\Gamma' + ''G\Gamma'' + etc.

+ '\Gamma G' + ''\Gamma G'' + etc.

“The term G\Gamma will be a fraction, and if it is expressed in lowest terms, it will involve t+\tau powers of p in the denominator. If any of the other terms is a fraction, lower powers of p will appear in the denominators because each of them will be the product of two factors, one of them involving no more than t powers of p, the other involving fewer than \tau such powers; or one of them involving no more than \tau powers of p, the other involving fewer than t such powers. Thus G\Gamma will be of the form e/(fp^{t+\tau}), the others of the form e'/(f'p^{t+\tau-\delta}) where \delta is positive and e, f, f' are free of the factor p, and the sum will

=\frac{ef'+e'fp^{\delta}}{ff'p^{t+\tau}}

The numerator is not divisible by p and so there is no reduction that can produce powers of p lower than t+\tau.”

(This is on pp. 25-6 of the Clarke translation.)

This argument guarantees that the coefficient of x^{g+\gamma} in PQ/p, expressed in lowest terms, has a denominator divisible by p^{t+\tau}. Thus the coefficient of the same term in PQ has a denominator divisible by p^{t+\tau-1}. Since t and \tau are each at least 1, this means the denominator of this term is divisible by p at least once, and so the coefficient is a fraction. Q.E.D.

Like I said – nasty, right? But the concepts involved are just fractions and divisibility. Compare a modern proof of “Gauss’ Lemma” (the statement I quoted above from Artin – a product of primitive integer polynomials is primitive).

Proof. Let the polynomials be P and Q. Pick any prime number p, and reduce everything mod p. P and Q are primitive so they each have at least one coefficient not divisible by p. Thus P \not\equiv 0 \mod{p} and Q \not\equiv 0 \mod{p}. By looking at the leading terms of P and Q mod p we see that the product PQ must be nonzero mod p as well. This implies that PQ contains at least one coefficient not divisible by p. Since this argument works for any prime p, it follows that there is no prime dividing every coefficient in PQ, which means that it is primitive. Q.E.D.[1]

Clean and quick. If you’re familiar with the concepts involved, it’s way easier to follow than Gauss’s original. But, you have to first digest a) the idea of reducing everything mod p; b) the fact that this operation is compatible with all the normal polynomial operations; and c) the crucial fact that because p is prime, the product of two coefficients that are not \equiv 0 \mod{p} will also be nonzero mod p.
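
Here is a minimal computational sketch of these ingredients in Python (my own toy code, not Artin’s presentation), with a polynomial stored as its list of integer coefficients, constant term first:

from math import gcd
from functools import reduce

def content(poly):
    # gcd of all the coefficients; the polynomial is primitive exactly when this is 1
    return reduce(gcd, (abs(c) for c in poly))

def poly_mult(p, q):
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return prod

P = [1, 4, 4]  # 4x^2 + 4x + 1, primitive
Q = [3, 2, 6]  # 6x^2 + 2x + 3, primitive
PQ = poly_mult(P, Q)

print(PQ, content(PQ))  # [3, 14, 26, 32, 24] 1 -- the product is primitive too

# The proof's move: pick any prime p and reduce everything mod p.
p = 2
print([c % p for c in P], [c % p for c in Q], [c % p for c in PQ])
# Neither P nor Q reduces to the zero polynomial mod 2, and since 2 is prime their
# product cannot reduce to zero either, so no prime divides every coefficient of PQ.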

Now Gauss actually had access to all of these ideas. In fact it was in the Disquisitiones Arithmeticae itself that the world was introduced to the notation “a \equiv b \mod{p}.” So in a way it’s even more striking that he didn’t think to use them here when they would have cleaned up so much.

What bugged me out and made me excited to share this with you was the realization that these two proofs are essentially the same proof.

What?

I’m not gonna spell it out, because what’s the fun in that? But here’s a hint: that term Gx^g that Gauss singled out in his polynomial P? Think about what would happen to that term (in comparison with all the terms before it) if you a) multiplied the whole polynomial by the lcm of the denominators to clear out all the fractions and yield a primitive integer polynomial, and then b) reduced everything mod p.

(If you are into this sort of thing, I found it to be an awesome exercise, that gave me a much deeper understanding of both proofs, to flesh out the equivalence, so I recommend that.)

* * * * *

What’s the pedagogical big picture here?

I see this as a case study in the value of approaching a problem with unsophisticated tools before learning sophisticated tools for it. To begin with, this historical anecdote seems to indicate that this is the natural flow. I mean, everybody always says Gauss was the greatest mathematician of all time, and even he didn’t think to use reduction mod p on this problem, even though he was developing this tool on the surrounding pages of the very same book.

In more detail, why is this more pedagogically natural than “build the (sophisticated) machine, then apply it”?

First of all, the machine is inscrutable before it is applied. Think about being handed all the tiny parts of a sophisticated robot, along with assembly instructions, but given no sense of how the whole thing is supposed to function once it’s put together. And then trying to follow the instructions. This is what it’s like to learn sophisticated math ideas machinery-first, application-later. I felt this way this spring in learning the idea of Brouwer degree in my topology class. Now that everything is put together, I have a strong desire to go back to the beginning and do the whole thing again knowing what the end goal is. The ideas felt so airy and insubstantial the first time through. I never felt grounded.

Secondly, the quick solution that is powered by the sophisticated tools loses something if it’s not coupled with some experience working on the same problem with less sophisticated tools. The aesthetic delight that professors take in the short and elegant solution of the erstwhile-difficult problem comes from an intimacy with this difficulty that the student skips if she just learns the power tools and then zaps it. Likewise, if the goal is to gain insight into the problem, the short, turbo-powered solution often feels very illuminating to someone (say, the professor) who knows the long, ugly solution, but like pure magic, and therefore not illuminating at all, to someone (say, a student) who doesn’t know any other way. There is something tenuous and spindly about knowing a high-powered solution only.

Here I can cite my own experience with Gauss’s Lemma, the subject of this post. I remember reading the proof in Artin a year ago and being satisfied at the time, but I also remember being unable to recall this proof (even though it’s so simple! maybe because it’s so simple!) several months later. You read it, it works, it’s like poof! done! It’s almost like a sharp thin needle that passes right through your brain without leaving any effect. (Eeew… sorry that was gross.) The process of working through Gauss’ original proof, and then working through how the proofs are so closely related, has made my understanding of Artin’s proof far deeper and my appreciation of its elegance far stronger. Before, all I saw was a cute short argument that made something true. I now see in it the mess that it is elegantly cleaning up.

I’ve had a different form of the same experience as I fight my way through Galois’ paper. (I am working through the translation found in Appendix I of Harold Edwards’ 1984 book Galois Theory. This is a great way to do it because if at any point you are totally lost about what Galois means, you can usually dig through the book and find out what Edwards thinks he means.) I previously learned a modern treatment of Galois theory (essentially the one found in Nathan Jacobson’s Basic Algebra I – what a ridiculous title from the point of view of a high school teacher!). When I learned it, I “followed” everything but I knew my understanding was not where I wanted it to be. Here the words “spindly” and “tenuous” come to mind again. The arguments were built one on top of another till I was looking at a tall machine with a lot of firepower at the very top but supported by a series of moving parts I didn’t have a lot of faith in.

An easy mark for Ewoks, and I knew it.

This version of Galois theory was all based on concepts like fields, automorphisms, vector spaces, separable and normal extensions, of which Galois himself had access to none. The process of fighting through Galois’ original development of his theory and trying to understand how it is related to what I learned before has been slowly filling out and reinforcing the lower regions of this structure for me. Coupling the sophisticated with the less sophisticated approach has given the entire edifice some solidity.

Thirdly, and this is what I feel like I hear folks (Shawn Cornally, Dan Meyer, Alison Blank, etc.) talk about a lot, but it bears repeating, is this:

If you attack a problem with the tools you have, and either you can’t solve it, or you can solve it but your solution is messy and ugly, like Gauss’s solution above (if I may), then you have a reason to want better tools. Furthermore, the way in which your tools are failing you, or in which they are being inefficient, may be a hint to you for how the better tools need to look.

Just as an example, think about how awesome reduction mod p is going to seem if you are already fighting (as Gauss did) with a whole bunch of adding stuff up some of which is divisible by p and some of which is not. What if you could treat everything divisible by p as zero and then summarily forget about it? How convenient would that be?

I want to bring this back to the K-12 level so let me give one other illustration. A major goal of 7th-9th grade math education in NY (and elsewhere) is getting kids to be able to solve all manner of single-variable linear equations. The basic tool here is “doing the same thing to both sides.” (As in, dividing both sides of the equation by 2, or subtracting 2x from both sides…) For the kids this is a powerful and sophisticated tool, one that takes a lot of work to fully understand, because it involves the extremely deep idea that you can change an equation without changing the information it is giving you.

There is no reason to bring out this tool in order to have the kiddies solve x+7=10. It’s even unnatural for solving 4x-21=55. Both of these problems are susceptible to much less abstract methods, such as “working backwards.” The “both sides” tool is not naturally motivated until the variable appears on both sides of the equation. I used to let students solve 4x-21=55 whatever way made sense to them, but then try to impose on them the understanding that what they had “really done” was to add 21 to both sides and then divide both sides by 4, so that later when I gave them equations with variables on both sides, they’d be ready. This was weak because I was working against the natural pedagogical flow. They didn’t need the new tool yet because I hadn’t given them problems that brought them to the limitations of the old tool. Instead, I just tried to force them to reimagine what they’d already been doing in a way that felt unnatural to them. Please, if a student answers your question and can give you any mathematically sound reason, no matter how elementary, accept it! If you would like them to do something fancier, try to give them a problem that forces them to.
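
Just to spell out the contrast with a toy example of my own: 4x-21=55 yields to working backwards, because the steps done to x can simply be undone in reverse order: put the 21 back to get 4x=76, then undo the multiplication to get x=19. But an equation like 3x+5=x+11 doesn’t unwind that way, because x lives on both sides; subtracting x from both sides to get 2x+5=11 is where the “both sides” tool first earns its keep.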

Basically this whole post adds up to an excuse to show you some cool historical math and a plea for due respect to be given to unsophisticated solutions. There is no rush to the big fancy general tools (except the rush imposed by our various dark overlords). They are learned better, and appreciated better, if students, teachers, mathematicians first get to try out the tools we already have on the problems the fancy tools will eventually help us answer. It worked for Gauss.

[1] This is the substance of the proof given in Artin but I actually edited it a bit to make it (hopefully) more accessible. Artin talks about the ring homomorphism \mathbb{Z}[x] \longrightarrow \mathbb{F}_p[x] and the images of P and Q (he calls them f and g) under this homomorphism.

ADDENDUM 8/10/11:

I recently bumped into a beautiful quote from Hermann Weyl that I had read before (in Bob and Ellen Kaplan’s Out of the Labyrinth, p. 157) and forgotten. It is entirely germane.

Beautiful general concepts do not drop out of the sky. To begin with, there are definite, concrete problems, with all their undivided complexity, and these must be conquered by individuals relying on brute force. Only then come the axiomatizers and systematizers and conclude that instead of straining to break in the door and bloodying one’s hands one should have first constructed a magic key of such and such a shape and then the door would have opened quietly, as if by itself. But they can construct the key only because the successful breakthrough enables them to study the lock front and back, from the outside and from the inside. Before we can generalize, formalize and axiomatize there must be mathematical substance.

The History of Algebra, part I: Negative Numbers

This is the post I promised over a month ago on two landmark books in the history of algebra:

Kitab al-Jabr wa-l-Muqabala, aka The Compendium on Calculating by Completion and Reduction
by Muhammad ibn Musa al-Khwarizmi

and

Ars Magna, aka The Great Art, or The Rules of Algebra
by Girolamo Cardano

A lot can be and has been said about these books. I’m going to zero in on one particular story they tell:

Take-home lesson #1: the mathematical world’s understanding of negative numbers came incredibly slowly, in very gradual stages. We tend to treat learning about negatives like there’s just one big idea to understand. Really, there are like twenty.

Reading these books has given me more respect than ever for the depth of the process we ask kids to go through between sixth and ninth grade as they get comfortable working with negatives.

Take-home lesson #2: In the process of understanding a new and difficult idea, the ability to understand and use the idea to answer a question comes way before the ability to pose a question in terms of the idea. So, it makes sense to get very comfortable with -2 as the answer to 5-7 before ever asking yourself to add -2 to something.

Take-home lesson #3: The development of algebra is an important motivator, historically anyway, for the development of negatives.

Pedagogical idea: How can we use this historical motivation to develop negatives with students?
a) Al-Khwarizmi’s book contains a very limited idea of negativeness: that which has been subtracted. But since he is thinking about how to multiply, for example, an unknown with 2 subtracted by the same unknown with 3 subtracted, he needs to see that, once everything has been distributed, the product of the subtracted 2 and the subtracted 3 contributes an added 6 to the total. It is not immediately obvious how this becomes a classroom activity, but I think it definitely can. 20*20 is 400; how does taking away 2 from one of the factors and 3 from the other affect the product? (A worked version of this multiplication appears just after this list.) If we get kids thinking hard about this, it would support the most contrivance-free explanation for why (neg)(neg)=(pos) that I have ever seen.
b) Allowing the coefficients of equations to be negative significantly cleaned up the theory of equations. Our students know more about negatives than the inventors of algebra did. It might be really exciting and powerful, increasing their appreciation for both negatives and quadratics, to show or let them develop the original (negative-impaired) theory of quadratics, and then have them use negatives to clean it up.
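
Here is the worked version of the multiplication in (a), spelled out with my own numbers:

(20-2)(20-3) = 400 - 60 - 40 + 6 = 306

Taking 2 away from one factor and 3 away from the other knocks 60 and then 40 off of the 400, but that overcounts the loss: the 2-by-3 corner has been removed twice, so it has to be put back. The subtracted 2 times the subtracted 3 shows up as an added 6.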

* * * * *

The first of these books was written in Arabic, and published around 820 in Baghdad. 820. Just to make sure you didn’t miss that. The translation I read is 180 years old. The full text of it is available online.

What I am even more anxious to make sure you didn’t miss are certain bits of the author’s name and the book title. The author is Muhammad Ibn Musa Al-Khwarizmi. (Muhammad, son of Musa, from Khwarizm.) He is often referred to just as Al-Khwarizmi. This is the origin of the English word “algorithm.”

And as if that weren’t awesome enough, the “completion” in the title is the Arabic word “al-jabr.” This is the origin of the English word “algebra.”

The second book was written in Latin and published in 1545 in Renaissance Italy. I read a 1968 translation by T. Richard Witmer. I can’t find it online, but in case you read Latin, here is a pdf of the original. Its distinction historically is that it was the first publication of a general method for solving what we would now call cubic and quartic equations. (Cardano attributes the solution for one class of cubics to Niccolo Tartaglia and Scipione del Ferro, the generalization of this solution to other classes of cubics to himself, and the solution of quartics to his student Lodovico Ferrari.)

Both these books have been widely written about. I was reading them in the hopes of learning how these mathematical breakthroughs were understood in their own day. My original intent was to use this information to help me design the group theory course I am teaching. We are getting into the Galois theory of equations. The modern treatment of this subject, which is what I learned, doesn’t feel to me like it could serve as the basis of a natural and meaningful development for people who don’t already know it. (The way I learned it, which is how almost everybody learns it in this day and age, was the opposite of “natural” or “meaningful.” Very cool, but only in an after-the-fact sort of a way. Like you came to the show at the very end and only saw the climactic scene, and everybody in the audience gasped and shrieked, except you because you didn’t care about any of the people in the show because you just walked in one second ago. And then after it was over your friend explained to you what had been going on and you understood why the other audience members cared, and were kind of mad you hadn’t gotten to watch the rest of the play first. If that analogy made any sense.) My idea was, let me study how the theory of polynomial equations developed over time; then I’ll be able to put the class in the place of the developers of the theory, and so the insights will come about naturally, and make sense, and be compelling, against the backdrop of the questions they were designed to answer. Learning the historical context would be pedagogically fertile.

As it happened, I overshot the historical mark a bit – for the purposes of the class, the mindframes of these two books are unnecessarily archaic. There is real pedagogical fertility here, but it’s around ideas that the participants in my class (who are teachers and mathematicians) already understand.

On the other hand, I’ve taught plenty of students who don’t understand them. In particular, I found myself surprised and intrigued by what each book did and did not say about negative numbers. I felt like I was watching this idea (the negative) coalesce and congeal, roughly and haltingly, over time. Like a churning mixture of hard crystal clarity and murky goo. If I may.

Though separated by 700 years, both books find it necessary to give three different quadratic formulas. Because, you see, you need a different method to solve

x^2 + 10x = 20

than to solve

x^2 = 10x + 20

or

x^2 + 20 = 10x.

(Actually, this notation is anachronistic. Neither author uses anything resembling modern notation. Muhammad Ibn Musa writes everything in prose. For the first of these equations, for instance, he would write, “A square and ten roots equal 20 dirhems.” Dirhems are an Arabic unit of currency.)

We think of there being only one quadratic formula because we are comfortable moving everything to the left; the only difference a modern reader can see between these equations is a difference in the signs of the coefficients:

x^2 + 10x - 20 = 0

x^2 - 10x + 20 = 0

etc. And all the equations can be solved exactly the same way. But for neither of these authors had the idea of negativeness grown adequately supple to make this possible.

Since Cardano presents the full algebraic solution to cubic equations, the situation is even more extreme in Ars Magna. Each of the following gets its own chapter:

“On the cube and first power equal to the number”
“On the cube equal to the first power and number”

“On the cube, first power, and number equal to the square”
“On the cube, square, and number equal to the first power”

These are the first two and last two in a sequence of 13 chapters. This is over 20% of the book. Not only does each equation type get its own method of solution, each method gets its own (geometric) proof.

Histories of mathematics often mention the situation I’ve described here. For example, Mactutor’s history of quadratic, cubic and quartic equations says something like “the different types arise because Al-Khwarizmi had no zero or negatives.” This is the story I’d gotten before I picked up the originals, and what I found out is that it’s not true.

Both books calculate comfortably with something translated as “negative numbers”. Ars Magna goes so far as to contain a calculation with imaginaries. But the scope of the idea of negativeness is limited, in a different way, in each book. And I think I learned something important about how people come to understand negative numbers by taking note of these limitations.

In Muhammad Ibn Musa’s work, a “negative” is a number that’s been subtracted from another number. That’s it; that’s all it is. But this is enough to justify all the rules of arithmetic with negatives that we teach middle schoolers, because Muhammad makes use of all of them:

If there are greater numbers combined with units to be added to or subtracted from them, then four multiplications are necessary; namely, the greater numbers by the greater numbers, the greater numbers by the units, the units by the greater numbers, and the units by the units.

He is talking about FOIL in case that wasn’t clear.

If the units, combined with the greater numbers, are positive, then the last multiplication is positive; if they are both negative, then the fourth multiplication is likewise positive. But if one of them is positive, and one negative, then the fourth multiplication is negative.

This is on pp. 21-22. Elsewhere, he fluently adds and subtracts these “negative” (i.e. subtracted) quantities. For example, on p. 27,

The root of two hundred, minus ten, subtracted from twenty minus the root of two hundred, is thirty minus twice the root of two hundred; twice the root of two hundred is the root of eight hundred.

In other words,

20 - \sqrt{200} - \left(\sqrt{200}-10\right) = 30 - 2\sqrt{200}

My point is that Ibn Musa’s use of the idea of negativeness is so limited in scope that the word “negative” might even be sort of a mistranslation to a modern reader; however, this limited-scope idea fully supports all the rules of arithmetic we teach.

Cardano’s understanding of negativeness is much broader. For example, in the first chapter of the book, he explicitly discusses the possibility that a negative number might satisfy an equation. But throughout, his dealings with negatives are marked by a kind of choppiness, an inconsistency. Firstly, he refers to negative solutions to equations as “false” or “fictitious” (as opposed to “true”). Then, once he gets into the nitty gritty of solving equations, he pretty much stops mentioning them entirely. For example in chapter 8 he says “it is evident that when the middle power is equal to the highest power and the constant, there are necessarily two solutions…” We would say there are three (1 negative), and Cardano would have acknowledged this third solution in chapter 1.

What Cardano virtually never does with negatives (the one exception is below) is treat them like they can be coefficients. Solutions, but not coefficients: i.e. negative numbers can be the answer to a question I asked, but they can’t be the language in which the question is posed. Most of the time, the idea of working with negative coefficients appears simply to not occur to him. On one occasion, the spectre is invoked only to be dismissed (for reasons that are opaque to me). Cardano is discussing positive and negative solutions to equations in which a power equals a certain number. (I.e. solvable by the simple extraction of one root.)

It is always presumed in this case, of course, that the number to which the power is equated is true and not fictitious. To doubt this would be as silly as to doubt the fundamental rule itself for, though opposite reasoning must be observed in opposite cases, the reasoning is still the same. p. 11

What??

The point that I am making is that if Cardano is any example, negatives are much easier to get your head around as an answer than as part of the question. Allowing coefficients to be negative would have caused a massive increase in the efficiency of the theory: as noted above, Cardano gave separate solutions for thirteen forms of cubic equations. With negative coefficients, these thirteen cases are reduced to 2: quadratic term is zero vs. nonzero. I don’t know when this cleaning-up of the theory actually historically took place. Avital Oliver, whom I mentioned in my last post, told me that noticing how much negative coefficients would simplify the theory of equations was a major reason, historically, that negative numbers gained acceptance as numbers. That makes sense to me.

The one moment in the book where the idea of a negative number is entertained as part of the statement of a problem is in the absolutely fascinating chapter 37, On the Rule for Postulating a Negative:

This rule is threefold, for one either assumes a negative, or seeks a negative square root, or seeks what is not. p. 217

Cardano is being highly speculative here. He seems to think maybe the entire chapter he’s writing is crazy talk. He begins by considering equations with negative solutions. Even though he already spent chapter 1 talking about negative solutions, he feels the need to justify them here. He notes that

x^2 = 4x + 32

and

x^2 = x + 20

don’t appear to have a common solution, since 8 solves the first while 5 solves the second. However, the “turned-around” equations

x^2 + 4x =32

and

x^2 + x = 20

do have a common solution, namely 4. In chapter 1, Cardano asserted that a quadratic and its “turnaround” have opposite solutions: a “true” (positive) solution for one is a “fictitious” (negative) solution, equal in magnitude, for the other. So here, the original pair of equations have a common solution after all: -4. Cardano seems to feel (and I kind of relate) that the presence of the common positive solution between the turned-around equations and the formal relationship between the turned-around pair and the original pair means there ought to be a common solution for the original pair; the fact that this common solution turns out to exist if you allow negative solutions is then a reason to believe in negative solutions.

Anyway, he follows with two problems about the property of a man named Francis. The problems are totally contrived but they lead to negative solutions for Francis’ property, which he interprets as meaning that Francis has debt. Tellingly, though, he sets up the equations letting -x be Francis’ property, so that the equations he actually solves have positive solutions.

Then, he poses a problem that has no positive or negative solution: divide 10 into two parts whose product is 40. He follows the procedure he uses on comparable problems with real solutions (e.g. divide 10 into two parts whose product is 21): “… it is clear that this case is impossible. Nevertheless, we will work thus:…” (p. 219). The procedure forces him to subtract 40 from 25 and then take the square root of this. He already seems dubious about the subtraction 25-40:

The square root of the remainder, then - if anything remains - added to or subtracted from [five] shows the parts. But since such a remainder is negative, you will have to imagine \sqrt{-15}. p. 219

Note the “if anything remains.” So this “square root of a negative” business is a bunch of new hooey built on something that might be hooey to begin with. In that context it almost feels like what we’d now call imaginaries (and what Cardano calls “the sophistic negative”) are only a comparatively small speculative step beyond the craziness of negative numbers in the first place. The whole chapter has this I-know-this-is-complete-madness-but-I’m-just-gonna-do-it tone. A famous passage:

... you will have that which you seek, namely 5 + \sqrt{25-40} and 5 - \sqrt{25-40}, or 5 + \sqrt{-15} and 5 - \sqrt{-15}. Putting aside the mental tortures involved, multiply 5 + \sqrt{-15} and 5 - \sqrt{-15}, making 25 - (-15) which is +15. Hence this product is 40... So progresses arithmetic subtlety the end of which, as is said, is as refined as it is useless. p. 219-220

(As above, the notation here is anachronistic; but the translation I read modernized all Cardano’s notation for ease of reading.)
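For the record, the procedure he is following amounts to this in modern notation (the algebra is mine): call the parts x and 10 - x, so that

x(10 - x) = 40

x^2 - 10x + 40 = 0

x = 5 \pm \sqrt{25 - 40} = 5 \pm \sqrt{-15}

Halve the 10, square the 5, subtract the desired product 40 from the 25, take the square root of whatever remains (“if anything remains”), and add it to or subtract it from the 5.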

It is in this wildly speculative chapter that Cardano – for the only time in the book – suggests a problem posed in terms of negatives:

... If it be said, Divide 6 into two parts the product of which is 40, the problem is one of the sophistic negative... But if it is said, Divide 6 into two parts the product of which is -40, or divide -6 into two parts producing -40, in either case the problem will be one of the pure negative... and the parts will be those that have been given [10 and -4, or -10 and 4]. If it be said, Divide -6 into two parts the product of which is +24, the problem will be one of the sophistic negative. pp. 220-221

What am I getting at with all this? Well, I can’t tell you what to think, but I am left with a completely new sense of the natural contours of learning about negatives.

I taught Algebra I for a long time. My students entered the class having trouble both conceptually and computationally with negative numbers. I did my duty and explained their meaning and operations, along with lots of practice for the kiddies, early in the year. Having always been concerned with understanding, I looked for models of negatives that would support all the operations I wanted kids doing. I wanted the model to instantiate as much of the mathematical structure as possible. The school I taught at had a woodshop program, and I got them to build me a board with a flat surface, holes cut into it, and wooden pucks to fill the holes, so that I could physically model 1 + (-1) = 0: people would see how a hole combined with a wooden puck to make a flat surface. Subtraction of negatives became removing holes, and this clearly required adding pucks to the surface; thus subtraction of negatives is addition of positives.

The model required another layer of contrivance to support multiplication: I had to ask students to imagine standing upside down, on the other side of the surface, so that the holes became pucks and the pucks could be imagined as holes. Then 3 \times (-4) could be 3 people with the normal point of view, each standing by 4 holes, while (-3) \times 4 was 3 upside-down people, each standing by what appeared to them as 4 pucks.

It didn’t work as the centerpiece of teaching about positives and negatives. The multiplication problems make the contrivance really obvious, but actually there’s a certain amount of contrivance even in how it models addition: if I combine some pucks and some holes, who says the pucks need to fall into the holes? I made kids draw tons of pictures of the whole thing, which completely wore them out, and I don’t know how much it added to their understanding. Meanwhile, the model, as all models do, made problems bigger and clunkier. Subtracting (-5) from (-7) was no thing: just fill 5 holes. But subtracting (-5) from 1 was a whole production. The kids either needed to create 5 holes by digging out pucks (and retaining the dug-out pucks on the surface – why would you do either of those things?) before adding 5 new pucks to fill the holes, or they needed to make the intensely abstract and not-adequately-justified leap that because subtracting -5 amounted to adding +5 when you were subtracting from a negative, the same thing should be true when subtracting from a positive. Retrospectively, the fact that I asked my kids to make this leap of faith and told myself that I was actually helping them understand how math makes sense is kind of embarrassing.
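If it helps to see the bookkeeping laid bare, here is a minimal sketch in Python of how the board works (my own toy, written just now; obviously nothing I ever gave students). It shows both the easy case and the “whole production” case of subtracting a negative.

```python
# A board state is a count of pucks (each worth +1) and holes (each worth -1).
class Board:
    def __init__(self, pucks=0, holes=0):
        self.pucks, self.holes = pucks, holes

    def value(self):
        # pucks and holes cancel in pairs: a puck sitting in a hole is flat surface
        return self.pucks - self.holes

    def add(self, pucks=0, holes=0):
        # addition is just dumping more pucks and/or holes onto the board
        self.pucks += pucks
        self.holes += holes

    def subtract_holes(self, n):
        # subtracting a negative means removing n holes from the board
        if self.holes < n:
            # the "whole production": dig extra holes first, keeping the
            # dug-out pucks on the surface, so there are enough holes to remove
            shortfall = n - self.holes
            self.pucks += shortfall
            self.holes += shortfall
        # now fill n holes with fresh pucks brought from off the board
        self.holes -= n

b = Board(pucks=3)
b.add(holes=5)            # 3 + (-5)
print(b.value())          # -2

b = Board(holes=7)        # -7
b.subtract_holes(5)       # -7 - (-5): no drama, just fill 5 holes
print(b.value())          # -2

b = Board(pucks=1)        # 1
b.subtract_holes(5)       # 1 - (-5): dig 5 holes, keep the pucks, then fill the holes
print(b.value())          # 6
```

Even laid out this way, the thing I actually wanted kids to own – that subtracting a negative adds – is buried inside bookkeeping about digging and filling.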

But the thing is, as models go, I’ll stand behind this one as one of the better ones. I’ve seen cuter models for multiplication, e.g. on the wall of the classroom of my first former student to become a math teacher (yes I am now old enough for that to happen):
Do you LOVE to LOVE? You’re a LOVER.
Do you LOVE to HATE? You’re a HATER.
Do you HATE to LOVE? You’re a HATER.
Do you HATE to HATE? You’re a LOVER.
But none of these cuter models supports addition or subtraction as well, and sometimes it’s hard to see that they are even related to multiplication. Meanwhile, the only model I’ve ever seen, besides mine, that supports all four operations is the IMP curriculum’s “hot and cold cubes.” And if you see the contrivance and unnaturalness in what I described above, “hot and cold cubes” is another level. Again, I think it’s kind of a brilliant model. But if you’ve ever tried to use it with low-skilled kids, you know how much production is involved in even getting them to imagine and buy into the scenario in the first place, let alone use all that machinery to solve problems.

It’s been a few years now that it’s seemed clear to me that the whole idea of teaching negatives through a particular model is not the way to go. People who use negatives effectively have gotten them down to a very slim abstract notion that supports all their operations and all their uses as representations of real things. (I would describe my own understanding with words like “opposite directionyness” – don’t laugh.) Teaching has to be aimed at this slim, efficient understanding as an end product. Forcing kids to engage with a whole clunky megilla of story and visual image every time they want to do a computation with negatives can’t possibly be the right path.

In more recent years I’ve found much more effective ways to teach negatives. I’ve been beginning by brainstorming with my students what negative numbers are actually, in the real world, used to represent. Not just debt, temperature and elevation. These aren’t enough. They capture the “below zeroiness” but not the “opposite directionyness,” since the positive direction is so fixed in each case. Also needed are examples of net change: gain or loss of money by a business; football yardage; etc. Furthermore, examples where negatives are used to specify direction in space or time: say uptown is positive; what would negative mean? What if east were positive? What if downtown were positive? If positive 3 means the space shuttle took off 3 seconds ago, what would -3 mean?

Using this conversation as groundwork has brought me much more success than the wooden board did, but there’s still something missing. For one thing, it’s hard to find convincing examples familiar to kids that support multiplication (the one exception being a private tutoring student whose father was a stockbroker: for her, short-selling a stock that then goes down in price made (neg)(neg) = pos feel real). But the problem is more fundamental than that. I’ve still been starting from the question “what is a negative?” when the student’s only legitimate reason to believe negatives even exist is that school says so, and her only legitimate reason to care is that she’ll be accountable for an answer.

This question puts the cart before the horse. A corollary of that amazing conversation with Avital Oliver I described last time is that when I teach a new idea I want to cause it to be needed, or at least cause its presence to be felt, cause students to become aware of it in the room with them, before it is ever named. So “what is a negative?” is not ultimately my desired opener for teaching about negatives.

What I’m left with after reading Cardano and Muhammad Ibn Musa is the beginning of an idea, modeled on the history of the concept itself, for what could take its place. So, here’s a curriculum brainstorm. It spans a lot of years and doesn’t fit in with anybody’s state frameworks, so I hope you’ll forgive the impracticality. I’m just fantasizing.

First, laying the groundwork (inspired by Ibn Musa): When you do arithmetic, how does subtracting something from the numbers affect the answer? How does 20 + 10 change if I subtract 3 from the 10? (To focus attention on the key point, what does the subtracted 3 do to the answer?) How does 20 - 10 change? How does 20 \times 10 change? How does 20 \times 10 change if I subtract 4 from the 20? How about if I subtract 4 from the 20 and 3 from the 10? What if I add 4 to the 20 and subtract 3 from the 10? The point is to engage the students in sorting out all these questions. (Why would they care about these questions? That’s a whole other thing but I don’t think a very hard one, and it will depend on the group of students – but I’m sure given any set of folks we can find a context to make these questions compelling.) Note that there is no “new kind of number” here. Some 3’s are subtracted, some added, that’s all. We very gently call their attention to the “subtracted 3” as an object worth talking about, but they already know what we mean; there’s nothing new to learn. I think this sorting-out is going to attune students’ antennae to the frequency in the universe on which negative numbers live.
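To make the arithmetic I have in mind concrete (my own working-out, not a classroom script):

20 + (10 - 3) = 30 - 3: the subtracted 3 takes 3 away from the answer.

20 - (10 - 3) = 10 + 3: now the subtracted 3 puts 3 back into the answer.

20 \times (10 - 3) = 200 - 60: the subtracted 3 takes away three 20’s.

(20 - 4) \times (10 - 3) = 200 - 40 - 60 + 12: the subtracted 4 and the subtracted 3, acting on each other, put 12 back in.

That last +12 is of course the fact that a negative times a negative is positive, showing up before anyone has said the word “negative.”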

Much later, once negatives come into play, stay respectful of the fact that they make sense as answers more easily than they make sense as questions. What number could you add to 7 and get 4? (No number! Even if you add nothing, it’s still 7.) If you could add something, what would that thing be like? In other words, bring forth the idea of negativeness as the answer to questions. (Perhaps your earlier “subtracted 3” will be what they come up with; perhaps not.) Do a lot and a lot and a lot of this, before ever asking anybody anything about negatives.
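Spelled out, the kind of move I’m hoping for here (a hope, not a prediction):

7 + \square = 4

has no answer among the numbers they know, but whatever goes in the box would have to act exactly like the “subtracted 3” from the groundwork stage, since 7 - 3 = 4.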

Later still, it will be time to develop equation solving in earnest. The way we do this in Algebra now, we build in from the start the necessity for the methods to generalize to negative coefficients. Instead, start earlier and use Muhammad Ibn Musa-type problems. Let students develop the techniques that feel most natural to them. (From lots of classroom experience, I can tell you that these will not be methods that generalize to negative coefficients.) Allow problems with negative solutions to creep in, but not negative coefficients. Negative numbers and their operations are becoming familiar, but still let the students do what’s comfortable in the realm of equation solving. Increase the sophistication of the equations; develop the solution of one of the three forms of the quadratic (what number can I multiply by itself, and then add 6 of itself, to get 40?). Pose problems in the other forms as well, though. Finally, as a last act, lead them to the fact that allowing coefficients to be negative unifies all three cases of the quadratic into one, so a single method handles every problem. How useful! Negatives are now official.
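By the three cases I mean, in modern notation (the letters are mine):

x^2 + bx = c

x^2 = bx + c

x^2 + c = bx

with b and c positive; the question in the parenthesis above is the first form, with b = 6 and c = 40. As long as coefficients must stay positive, each form needs its own procedure. The moment b and c may be negative, all three are the single form x^2 + bx + c = 0, and one method handles everything.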

I would really love to do this with an out-of-school math circle of youngish kids or mathphobic adults. I need to get on that.

* * * * *

Two tidbits from these books that didn’t fit in with the main lines of thought above. There’s lots more where these came from but as usual I’ve already OD-ed so I have to draw the line somewhere.

a) Muhammad Ibn Musa gives a beautiful, though not rigorous, justification for the circle area formula that I’ve never seen before. He expresses the circle’s area as half its circumference times half its diameter. He explains that this is true because any regular polygon has an area equal to half its circumference times half the diameter of the inscribed circle. (Draw lines from the center to every vertex, and think about the areas of the triangles you get, to see that this is true.)
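In modern terms (my gloss, and this is exactly where the argument stops being rigorous): for a regular polygon with perimeter P whose inscribed circle has radius r, each of those triangles has height r, so the total area is

\frac{1}{2} P r = \frac{P}{2} \times \frac{d}{2}

where d = 2r is the inscribed circle’s diameter. Now think of the circle as a regular polygon with very many very short sides: P becomes the circumference C and the inscribed circle becomes the circle itself, giving

\frac{C}{2} \times \frac{d}{2} = \frac{2 \pi r}{2} \times \frac{2r}{2} = \pi r^2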

b) Cardano says something really darling about the solution of the cubic, that I just found delightful and have to share:

In our own days Scipione del Ferro of Bologna has solved the case of the cube and first power equal to a constant, a very elegant and admirable accomplishment. Since this art surpasses all human subtlety and the perspicuity of mortal talent and is a truly celestial gift and a very clear test of the capacity of men's minds, whoever applies himself to it will believe that there is nothing that he cannot understand. p. 8