Hidden Figures: Visibility / Invisibility of Brown Brilliance, Part I

Has everybody seen Hidden Figures yet?

It’s delightful: a tight, well-acted, gripping drama, based on a true story about an exciting chapter in national history. You can just go to have a good time. You don’t need to feel like you are going to some kind of Important Movie About Race or whatever. It is totally kid friendly, and as long as they know the most basic facts about the history of racial discrimination, it doesn’t force you to have any kind of conversation you aren’t up for / have every day and don’t need another… / etc. Just go and enjoy yourself.


Everybody, parents especially, and white parents especially, please go see this film and take your kids.

I was actually fighting back tears inside of 5 minutes.

Long-time readers of this blog know that I am strongly critical of the widespread notion of innate mathematical talent. I’ve written about this before, and plan on doing a great deal more of this writing in the future. The TL;DR version is that I think our cultural consensus, only recently beginning to be challenged, that the capacity for mathematical accomplishment is predestined, is both factually false and toxic. My views on the subject can make me a bit of a wet blanket when it comes to the representation of mathematical achievement in film – the Hollywood formula for communicating to the audience that “this one is a special one” usually feels to me like it’s feeding the monster, and that can get between me and an otherwise totally lovely film experience.

In spite of all of this, when Hidden Figures opened by giving the full Hollywood math genius treatment to little Katherine Johnson (née Coleman), kicking a stone through the woods while she counted “fourteen, fifteen, sixteen, prime, eighteen, prime, twenty, twenty-one, …,” I choked up. I had never seen this before. The full Good Will Hunting / Little Man Tate / Beautiful Mind / Searching for Bobby Fischer / Imitation Game / etc. child-genius set of signifiers, except for a black girl!

What hit me so hard was that it hit me so hard. For all the brilliant minds we as a society have imagined over the years, how could we never have imagined this one before now? And she’s not even imaginary, she’s real! And not only real, but has been real for ninety-eight years! And yet this is something that, as measured by mainstream film, we haven’t even been able to imagine.

You’ll do with this what you will, but for me it’s an object-lesson in the depth and power of our racial cultural programming, as well as a step toward the light. I am a white person who has had intellectually powerful black women around me, whom I greatly admired, my whole life, starting with my preschool and kindergarten teachers, and including close friends and members of my own family, as well of course as many of my students. And yet only fairly recently did it begin to dawn on me how starkly the type of representation that opens Hidden Figures has been missing.

So, go see this movie! Take your kids to see it! Let them grow up easily imagining something that the American collective consciousness has hidden from itself for so long.


The History of Calculus / Honor Your Dissatisfaction

I was just rereading an email exchange with a friend (actually the O of this post), and found that I had summarized the history of calculus from the 17th to 20th centuries, up through and including Abraham Robinson’s invention of nonstandard analysis, in the form of a short play! I’m sharing it with you.

Mainly this is for fun, but it’s also part of my ongoing campaign promoting the value of honoring your dissatisfaction. The dialectic between honoring our impulse to invent ideas to understand the world better and honoring our dissatisfaction with these ideas is where mathematics comes from.

Here’s the play!

The History of Calculus, in 4 Extremely Short Acts

Featuring a lot of oversimplification and a certain amount of harmless cursing

Act I

Late 17th century

Leibniz, Newton: Look everybody, we can calculate instantaneous speed!

Everybody: How??

Leibniz: Well, you consider the distance traveled during an infinitesimal interval of time, and you divide distance/time.

Everybody: Leibniz, what do you mean, “infinitesimal”? Like, a millisecond?

Leibniz: No, way smaller than that.

Everybody: A nanosecond?

Leibniz: Nah, dude, you’re missing the point. Smaller than any finite amount.

Everybody: So, zero time?

Leibniz: No, bigger than that.

Some people: Oh, cool! Look we can use this idea to accurately calculate planetary motion and stuff!

Other people: WTF are you talking about Leibniz? That makes no effing sense.

Act II

18th century

Bernoullis, Euler, Lagrange, Laplace, and everybody else: Whee, look at everything we can calculate with Newton and Leibniz’s crazy infinitesimals! This is awesome!

Bishop George Berkeley: But nobody answered the question of WTF they are even talking about. “What are these [infinitesimals]? May we not call them the ghosts of departed quantities?”

Lagrange: Hold on, let me try to rebuild this theory from scratch, I will make no mention of spooky infinitesimals, and will do the whole thing using the algebra of power series.

Everybody: Cool, good luck with that.


Act III

19th century

Cauchy: Lagrange, homie, it’s not gonna work. e^{-1/x^2} doesn’t match its power series at zero.

Lagrange: Sh*t.

Everybody: I think we don’t actually understand this as well as we thought we did.

Ghost of departed Bishop Berkeley: OMG I HAVE BEEN TRYING TO TELL YOU THIS.

Cauchy: How about we forget the whole “infinitesimal” thing and just say that the average speeds are approaching a certain limit to whatever desired degree of accuracy. As long as we can identify the limit and prove that it gets as close as we want it to, we can call that limit the “instantaneous speed” without ever trying to divide some spooky infinitesimals by each other.

Everybody: Awesome.

Weierstrass: I have an even better idea. Let’s formalize Cauchy’s thinking into some tight symbols and quantifiers. “Let us say that the limit of a function f(x) at c is a number L if for every \varepsilon > 0 there exists a \delta > 0 such that whenever 0 <|x-c|<\delta, it follows that |f(x)-L|<\varepsilon…”

All the mathematicians: AWESOME. Down with spooky infinitesimals! Calculus can be built soundly on the firm footing of “for any \varepsilon>0 there exists a \delta>0 such that…” and you never have to talk about any spooky sh*t!

All the mathematicians, in private: … but thinking about infinitesimals sure streamlines some of these calculations…

[Meanwhile all the physicists and engineers miss this whole episode and continue blithely using infinitesimals.]

Act IV

20th century

Scene i

Mathematicians: Infinitesimals are satanic voodoo!

Physicists and engineers: What are you talking about, what about CALCULUS?

Mathematicians: Whatever dude, don’t you know about Weierstrass and \varepsilon and \delta?

Physicists and engineers: Um, no, and I don’t care either! What’s the point when everything already works fine?

Mathematicians, in public: No, dude, there are all these tricky convergence issues and you will F*CK UP EVERYTHING IF YOU’RE NOT CAREFUL!

Mathematicians, in private: … but those infinitesimals are indispensable as a heuristic guide…

Scene ii

Abraham Robinson: Um, whatever happened to infinitesimals?

Mathematicians: I mean we rejected them as satanic voodoo because nobody was ever able to tell us WTF THEY ARE.

Robinson: I have a proposal. How about we consider them to be [fancy-*ss definition based on formal logic and other fancy sh*t]. Would you say that constitutes an answer to “wtf they are?”

Mathematicians: … why, yes!

Some mathematicians: omg awesome I can now RESPECTABLY use infinitesimals in calculations, I don’t have to hide anymore!

Other mathematicians: Whatever, I have no need to do the work to master this fancy sh*t. It doesn’t do anything good ol’ Weierstrass \varepsilon and \delta couldn’t do.

Physicists and engineers: wow, you guys are way over-concerned with the little stuff. Literally.
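(An aside on Cauchy’s line about e^{-1/x^2}: this is easy to poke at numerically. A quick sketch in Python — the helper name is mine — showing that the function vanishes at 0 faster than any power of x, which is exactly why every coefficient of its power series at 0 is zero even though the function is not the zero function:)

```python
import math

def f(x):
    # Cauchy's counterexample: e^(-1/x^2), extended by f(0) = 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(x)/x^10 -> 0 as x -> 0; the same holds for every power x^n, which
# is why every Taylor coefficient of f at 0 is zero even though f
# itself isn't the zero function.
for x in (0.5, 0.2, 0.1):
    print(x, f(x) / x**10)
```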


(Long-time readers of this blog will recognize the bit of dialogue with Leibniz from something I shared long ago.)

The point is that the whole episode is driven by uncertainty about what is even being discussed. The early developers of calculus shared the conviction that there was something there when they talked about “infinitesimals”, but none of them (not even Euler) gave a definition that was satisfying to everybody at the time (let alone to a modern audience). But this encounter, between the intuition that there’s something there and the insistence of the world to honor its dissatisfaction until a really satisfying account was given, was a generative encounter, resulting in several hundred years’ worth of powerful math progress.

So. Honor your dissatisfaction.


I got home last night from a week and a half of traveling to find a newspaper clipping my mother sent me: the obituary for Bill Thurston in the New York Times. I hadn’t known he was sick.

Thurston was a giant of twentieth century geometry, but more important to me is the sense I always get from his writing – a sense of warmth, the intention to share, an abiding interest in math as a human practice. A complete lack of interest in the privilege of being seen as brilliant. A desire to demystify the process of mathematical discovery.

I spent some time today looking for online tributes. Justin Lanier wrote a beautiful one. My desire to refer you to this was the impulse that prompted this post.

Here’s something else beautiful: Thurston’s profile on MathOverflow, linking to the questions he asked and answered on that site.

And for math and art enthusiasts, here’s some high fashion inspired by Thurston’s work.

Rest in peace Bill Thurston.

Honor your Dissatisfaction

Two things I forgot to say last night.

I. The reason I’m excited about the idea of having my class use its own self-made definitions to try to prove things is not just, or even primarily, because it will help them realize the inadequacies in their definitions. Although it will do that for sure. Even more than that, it seems to me the perfect way to support them in coming up with better definitions. This is what happened to Cauchy: he defined the limit verbally and a little vaguely, but then when he actually tried to use his definition to prove things, he started writing down precise inequalities. He didn’t have a teacher around to point out that this meant he should probably revise his definition, but my class does.

II. Yesterday when I asked my class to try to make a precise definition for what it means to converge, or for something to have a limit, some of them who took real analysis long ago began accessing this knowledge in an incomplete way. They started to talk about \epsilon and \delta, but in vague, uncertain terms. It looked as though others might possibly accept the half-remembered fragments because they seemed like they might be the “this is supposed to be the answer” answer. I had to prevent this. (The danger would have been even greater if these participants had correctly and confidently remembered the definition.) I stepped into the conversation to say, yes, that thing you’re half-remembering is my objective, but what’s going to make you understand it so you never forget it again is to fight till you’re satisfied we’ve captured the meaning of convergence. You can either fight with the definition you half-remember or you can fight to build a new definition, but you have to go through your dissatisfaction to get there. You have to air all this dissatisfaction.

Afterward, I thought of a better language. I’ll give this to them next time.

Honor your dissatisfaction.

Dissatisfaction is the engine that created analysis. This content, more than any other content, is both confusing and pointless if you bury your dissatisfaction rather than allowing it to thrive and be answered. The primary virtue of the tools of analysis is that they are satisfying. Only if you bring forth your dissatisfaction will this content have a chance to show you its value. So. Honor your dissatisfaction. It is the engine that will move us forward.

Over the Course of an Instant…

As you may recall, I’m teaching analysis to this class of teachers, developing the \epsilon\delta limit. Two weeks ago I bewildered everybody. Last week and this week, I set out to bewilder everyone even further.

Let me say what I’m going for here. The \epsilon\delta limit is a notoriously difficult definition.[1] How to scaffold my class to handle this difficulty? I am banking on the following strategy: make them need the definition. Make them unsatisfied with anything less. Continue poking holes in their current understanding, continue showing them inconsistencies between what they believe and the language they have to describe it, till they have no choice but to try to build something new. Then, let them try to build it. If they build the very thing I’m going for, rejoice. If they build something equally precise and powerful, rejoice. If they cannot build either (the most likely outcome, since the “right answer” took the world mathematical community 150 years to come up with), then it will still make powerful sense to them because it satisfactorily answers a question they were already engaged in trying to answer. That’s the plan anyway.

I will leave you with the two problem sets from the last class, and the readings and presentation from this one. I am very proud of the presentation. After that, I’ll write down one new thought for where to take this.

We engaged people’s attempts to define infinite decimals from the previous class, then abruptly shifted topics:

I let them work long enough so everyone got to do the first section of problems. My goals were:

1) Make participants recognize that they believe the speed of a moving object is something that exists in a particular moment of time.
2) Make them recognize that their naive definition of speed (distance / time) doesn’t actually handle this case.
3) Make them realize that we thus have a definitional problem similar to the one we had with repeating decimals.

We got this far. Then, with just 7 or so minutes left, I gave them another problem set:[2]

This problem set was designed to get somebody who has never studied calculus basically to take a simple derivative, to bring them into the conversation, and to refresh everyone else’s memory about the basic idea of derivatives. The last problem was on there just so that the calculus folks had a challenge available if they wanted it. Anyway, I had people finish the “Algebra Calisthenics” and “Speed” sections for homework.

This class, we began by engaging this homework, getting a feel for the standard calculus computation in which you identify the speed of an object in a moment as the value toward which average speeds seem to be headed as you look at smaller and smaller intervals. Then we began to press on what this really means.
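For concreteness, the shrinking-interval computation can be sketched in a few lines of Python (the position function s(t) = t^2 is my stand-in example here, not one from the problem set):

```python
# Average speed over [1, 1 + h] for position s(t) = t^2, as h shrinks.
def s(t):
    return t * t

for h in (0.1, 0.01, 0.001, 0.0001):
    avg = (s(1 + h) - s(1)) / h   # distance / time; works out to 2 + h
    print(h, avg)
# The averages head toward 2, the instantaneous speed at t = 1.
```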

I handed out a xerox of the scholium from the end of the first section of Book 1 of Newton’s Principia. (The last page of this pdf.) This is where Newton tries to explain what the hell he’s even talking about. I directed their attention to this telling sentence:

And in like manner, by the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities, not before they vanish, nor afterwards, but with which they vanish.

Then, I showed them the following presentation. Wanting to share this with you is the real reason for this blog post. I had a lot of fun making it.

What’sCalculusReallyDoing (as pdf)

What’sCalculusReallyDoing (as powerpoint)

Then I passed out a choice excerpt from the awesome criticism of early calculus by Bishop George Berkeley. (Specifically, section XIV.)

I asked for the connection between the definitional problem we have here and the definitional problem we had 2 classes ago regarding infinite decimals. (“They both involve getting closer and closer to something but never getting there.”) Then I asked them to try to come up with definitions to address these problems.

This is such a non-sequitur but here’s my one additional thought. I’ve been thinking about how to push participants to recognize a definition as unsatisfying. Tonight, reading Judith Grabiner’s 1983 essay in the AMM about Cauchy and the origins of the \epsilon\delta limit (here it is as a pdf), I had an idea that is totally new to me. Retrospectively I think it’s sort of obvious, but I totally never thought of it before:

To get people to recognize that a definition is mathematically inadequate, have them try to use the definition, for example to prove something! In my case, all of them think that 1/3 = 0.333… Great. So, if we have a candidate definition of the meaning of limits or convergence, can we use it to prove 1/3 = 0.333…? If not, maybe we need a better definition.
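(As a numerical shadow of this — not a proof, and the helper name is my own — one can watch the partial decimals 0.3, 0.33, 0.333, … fall within any given tolerance of 1/3, which is the fact a good definition of convergence ought to let you prove:)

```python
# Partial decimals 0.3, 0.33, 0.333, ... closing in on 1/3.
def partial(n):
    return sum(3 * 10**-k for k in range(1, n + 1))

for eps in (0.1, 0.001, 1e-9):
    # Smallest n whose partial decimal is within eps of 1/3.
    N = next(n for n in range(1, 60) if abs(partial(n) - 1/3) < eps)
    print(eps, N)
```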

(I had this idea when I read Grabiner’s statement that though Cauchy gave the definition of the limit purely verbally and a bit vaguely, he translated it into the more rigorous language of inequalities when he actually started using it to prove theorems.)

[1] This is for at least two distinct (though related) reasons: first of all, it’s got three nested quantifiers. “For all \epsilon>0, there exists a \delta>0, such that for all x satisfying …” That just makes it inherently confusing. Secondly, it does not in any way psychologically resemble the intuitive image it is intended to capture. This is the definition of the limit. When I think of limits I have these beautiful visual images of little points getting closer to something. When I try to identify a limit, I just imagine the thing that they’re getting closer to. That’s the whole story. When I try to get rigorous, I replace this beautiful and simple image with three nested quantifiers. Yuck.
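(To be fair, the nested quantifiers can at least be exercised mechanically. Here is a toy Python checker — entirely my own construction, and a finite sampling rather than a real proof, since the quantifier over x ranges over infinitely many points — that tests a proposed \delta against a given \epsilon for the limit of f(x)=2x at c=3:)

```python
# A toy, finite check of "for every epsilon there exists a delta":
# sample points x with 0 < |x - c| < delta and test |f(x) - L| < epsilon.
def within_limit(f, c, L, epsilon, delta, samples=1000):
    for i in range(1, samples + 1):
        dx = delta * i / (samples + 1)   # 0 < dx < delta
        for x in (c - dx, c + dx):
            if abs(f(x) - L) >= epsilon:
                return False
    return True

f = lambda x: 2 * x
for eps in (1.0, 0.1, 0.001):
    # For f(x) = 2x at c = 3, delta = epsilon / 2 always works.
    assert within_limit(f, c=3, L=6, epsilon=eps, delta=eps / 2)
```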

[2] You will notice some interconnections in the sequence of problems. After a few good experiences with this last year and then hearing how much fun everyone had at PCMI, I am beginning to feel like these sequences of densely but subtly interconnected problems are really, really awesome. Constructing them is a deep art and I am a tiny apprentice. But you can get started humbly and still see payoff: it was certainly a cool moment today in class when we went over these problems and a number of folks who had done the Speed problems #1-3 “the long way” realized that they could have applied their answer to Algebra Calisthenics #2 to do these three problems in moments in their heads.

The History of Algebra, part II: Unsophisticated vs. Sophisticated Tools

Math ed bloggers love Star Wars. This post is extremely long, and involves a fair amount of math, so in the hopes of keeping you reading, I promise a Star Wars reference toward the end. Also, you can still get the point if you skip the math, though that would be sad.

The historical research project I gave myself this spring in order to prep my group theory class (which is over now – why am I still at it?) has had me working slowly through two more watershed documents in the history of math:

Disquisitiones Arithmeticae
by Carl Friedrich Gauss
(in particular, “Section VII: Equations Defining Sections of a Circle”)


Mémoire sur les conditions de résolubilité des équations par radicaux
by Evariste Galois

I’m not done with either, but already I’ve been struck with something I wanted to share. Mainly it’s just some cool math, but there’s a pedagogically relevant idea in here too –

Take-home lesson: The first time a problem is solved the solution uses only simple, pre-existing ideas. The arguments and solution methods are ugly and specific. Only later do new, more difficult ideas get applied, which allow the arguments and solution methods to become elegant and general.

The ugliness and specificity of the arguments and solution methods, and the desire to clean them up and generalize them, are thus a natural motivation for the new ideas.

This is just one historical object lesson in why “build the machinery, then apply it” is a pedagogically unnatural order. Professors delight in using the heavy artillery of modern math to give three-sentence proofs of theorems once considered difficult. (I’ve recently taken courses in algebra, topology, and complex analysis, with three different professors, and deep into each course, the professor gleefully showcased the power of the tools we’d developed by tossing off a quick proof of the fundamental theorem of algebra.) Now, this is a very fun thing to do. But if the goal is to make math accessible, then this is not the natural order.

The natural order is to try to answer a question first. Maybe we answer it, maybe we don’t. But the desire for and the development of the new machinery come most naturally from direct, hands-on experience with the limitations of the old machinery. And that means using it to try to answer questions.

I’m not saying anything new here. But I just want to show you a really striking example from Gauss. (Didn’t you always want to see some original Gauss? No? Okay, well…)

* * * * *

I am reading a 1966 translation of the Disquisitiones by Arthur A. Clarke which I have from the library. An original Latin copy is online here. I don’t read Latin but maybe you do.

I’m focusing on the last section in the book, but at one point Gauss makes use of a result he proved much earlier:

Article 42. If the coefficients A, B, C, \dots, N; a, b, c, \dots, n of two functions of the form

x^m + Ax^{m-1} + Bx^{m-2} + Cx^{m-3} + \dots + N \qquad (P)

x^{\mu} + ax^{\mu-1} + bx^{\mu-2} + cx^{\mu-3} + \dots + n \qquad (Q)

are all rational and not all integers, and if the product of P and Q is

x^{m+\mu} + \mathfrak{A}x^{m+\mu-1} + \mathfrak{B}x^{m+\mu-2} + \text{etc.} + \mathfrak{Z}

then not all the coefficients \mathfrak{A}, \mathfrak{B}, \dots, \mathfrak{Z} can be integers.

Note that even the statement of Gauss’ proposition here would be cleaned up by modern language. Gauss doesn’t even have the word “polynomial.” The word “monic” (i.e., leading coefficient 1) would also have been handy. In modern language he could have said, “The product of two rational monic polynomials is not an integer polynomial if any of their coefficients are not integers.”

But this is not the most dramatic difference between Gauss’ statement (and proof – just give me a sec) and the “modern version.” On page 400 of Michael Artin’s Algebra textbook (which I can’t stop talking about only because it is where I learned like everything I know), we find:

(3.3) Theorem. Gauss’s Lemma: A product of primitive polynomials in \mathbb{Z}[x] is primitive.

The sense in which this lemma is Gauss’s is precisely the sense in which it is really talking about the contents of Article 42 from Disquisitiones which I quoted above.


First of all, what’s \mathbb{Z}[x]? Secondly, what’s a primitive polynomial? Third and most important, what does this have to do with the above? Clearly they both have something to do with multiplying polynomials, but…

Okay. \mathbb{Z}[x] is just the name for the set of polynomials with integer coefficients. (Apologies to those of you who know this already.) So a polynomial in \mathbb{Z}[x] is really just a polynomial with integer coefficients. This notation was developed long after Gauss.

More substantively, a “primitive polynomial” is an integer polynomial whose coefficients have gcd equal to 1. I.e. a polynomial from which you can’t factor out a nontrivial integer factor. E.g. 4x^2+4x+1 is primitive, but 4x^2+4x+2 is not because you can take out a 2. This idea is from after Gauss as well.

So, “Gauss’s Lemma” is saying that if you multiply two polynomials each of whose coefficients do not have a common factor, you will not get a common factor among all the coefficients in the product.
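This statement lends itself to a quick computational spot-check. The sketch below (helper names mine; coefficient lists written from low degree to high) computes the content — the gcd of the coefficients — and multiplies two primitive polynomials:

```python
# Spot-check of Gauss's Lemma: products of primitive integer
# polynomials are primitive. "Content" = gcd of the coefficients;
# primitive means content 1.
from math import gcd
from functools import reduce

def content(coeffs):
    return reduce(gcd, coeffs)

def mul(p, q):
    # Schoolbook polynomial multiplication on coefficient lists.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

print(content([4, 4, 1]))  # 1: coefficients 4, 4, 1 share no factor
print(content([4, 4, 2]))  # 2: you can take out a 2

p, q = [1, 4, 4], [3, 0, 5]   # two primitive polynomials
print(content(mul(p, q)))     # 1, as the lemma predicts
```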

What does this have to do with the result Gauss actually stated?

That’s an exercise for you, if you feel like it. (Me too actually. I feel confident that the result Artin states has Gauss’s actual result as a consequence; less sure of the converse. What do you think?) (Hint, if you want: take Gauss’s monic, rational polynomials and clear fractions by multiplying each by the lcm of the denominators of its coefficients. In this way replace his original polynomials with integer polynomials. Will they be primitive?)

Meanwhile, what I really wanted to show you are the two proofs. Original proof: ugly, long, specific, but containing only elementary ideas. Modern proof: cute, elegant, general, but involving more advanced ideas.

Here is a very close paraphrase of Gauss’ original proof of his original claim. Remember, P and Q are monic polynomials with rational coefficients, not all of which are integers, and the goal is to prove that PQ‘s coefficients are not all integers.

Demonstration. Put all the coefficients of P and Q in lowest terms. At least one coefficient is a noninteger; say without loss of generality that it is in P. (If not, just switch the roles of P and Q.) This coefficient is a fraction with a denominator divisible by some prime, say p. Find the term in P among all the terms in P whose coefficient’s denominator is divisible by the highest power of p. If there is more than one such term, pick the one with the highest degree. Call it Gx^g, and let the highest power of p that divides the denominator of G be p^t. (t \geq 1, since p was chosen to divide the denominator of some coefficient in P at least once.) The key fact about the choice of Gx^g is, in Gauss’s words, that its “denominator involves higher powers of p than the denominators of all fractional coefficients that precede it, and no lower powers than the denominators of all succeeding fractional coefficients.”

Gauss now divides Q by p to guarantee that at least one term in it (at the very least, the leading term) has a fractional coefficient with a denominator divisible by p, so that he can play the same game and choose the term \Gamma x^{\gamma} of Q/p with \Gamma having a denominator divisible by p more times than any preceding fractional coefficient and at least as many times as each subsequent coefficient. Let the highest power of p dividing the denominator of \Gamma be p^{\tau}. (Having divided the whole of Q by p guarantees that \tau \geq 1, just like t.)

I’ll quote Gauss word-for-word for the next step:

“Let those terms in P which precede Gx^g be 'Gx^{g+1}, ''Gx^{g+2}, etc. and those which follow be G'x^{g-1}, G''x^{g-2}, etc.; in like manner the terms which precede \Gamma x^{\gamma} will be '\Gamma x^{\gamma+1}, ''\Gamma x^{\gamma+2}, etc. and the terms which follow will be \Gamma'x^{\gamma-1}, \Gamma''x^{\gamma-2}, etc. It is clear that in the product of P and Q/p the coefficient of the term x^{g+\gamma} will

= G\Gamma + 'G\Gamma' + ''G\Gamma'' + etc.

+ '\Gamma G' + ''\Gamma G'' + etc.

“The term G\Gamma will be a fraction, and if it is expressed in lowest terms, it will involve t+\tau powers of p in the denominator. If any of the other terms is a fraction, lower powers of p will appear in the denominators because each of them will be the product of two factors, one of them involving no more than t powers of p, the other involving fewer than \tau such powers; or one of them involving no more than \tau powers of p, the other involving fewer than t such powers. Thus G\Gamma will be of the form e/(fp^{t+\tau}), the others of the form e'/(f'p^{t+\tau-\delta}) where \delta is positive and e, f, f' are free of the factor p, and the sum will


= \frac{ef' + e'fp^{\delta}}{ff'p^{t+\tau}}.

The numerator is not divisible by p and so there is no reduction that can produce powers of p lower than t+\tau.”

(This is on pp. 25-6 of the Clarke translation.)

This argument guarantees that the coefficient of x^{g+\gamma} in PQ/p, expressed in lowest terms, has a denominator divisible by p^{t+\tau}. Thus the coefficient of the same term in PQ has a denominator divisible by p^{t+\tau-1}. Since t and \tau are each at least 1, this means the denominator of this term is divisible by p at least once, and so a fraction. Q.E.D.

Like I said – nasty, right? But the concepts involved are just fractions and divisibility. Compare a modern proof of “Gauss’ Lemma” (the statement I quoted above from Artin – a product of primitive integer polynomials is primitive).

Proof. Let the polynomials be P and Q. Pick any prime number p, and reduce everything mod p. P and Q are primitive so they each have at least one coefficient not divisible by p. Thus P \not\equiv 0 \mod{p} and Q \not\equiv 0 \mod{p}. By looking at the leading terms of P and Q mod p we see that the product PQ must be nonzero mod p as well. This implies that PQ contains at least one coefficient not divisible by p. Since this argument works for any prime p, it follows that there is no prime dividing every coefficient in PQ, which means that it is primitive. Q.E.D.1

Clean and quick. If you’re familiar with the concepts involved, it’s way easier to follow than Gauss’s original. But, you have to first digest a) the idea of reducing everything mod p; b) the fact that this operation is compatible with all the normal polynomial operations; and c) the crucial fact that because p is prime, the product of two coefficients that are not \equiv 0 \mod{p} will also be nonzero mod p.

Now Gauss actually had access to all of these ideas. In fact it was in the Disquisitiones Arithmeticae itself that the world was introduced to the notation “a \equiv b \mod{p}.” So in a way it’s even more striking that he didn’t think to use them here when they would have cleaned up so much.

What bugged me out and made me excited to share this with you was the realization that these two proofs are essentially the same proof.


I’m not gonna spell it out, because what’s the fun in that? But here’s a hint: that term Gx^g that Gauss singled out in his polynomial P? Think about what would happen to that term (in comparison with all the terms before it) if you a) multiplied the whole polynomial by the lcm of the denominators to clear out all the fractions and yield a primitive integer polynomial, and then b) reduced everything mod p.

(If you are into this sort of thing, I found it to be an awesome exercise, that gave me a much deeper understanding of both proofs, to flesh out the equivalence, so I recommend that.)

* * * * *

What’s the pedagogical big picture here?

I see this as a case study in the value of approaching a problem with unsophisticated tools before learning sophisticated tools for it. To begin with, this historical anecdote seems to indicate that this is the natural flow. I mean, everybody always says Gauss was the greatest mathematician of all time, and even he didn’t think to use reduction mod p on this problem, even though he was developing this tool on the surrounding pages of the very same book.

In more detail, why is this more pedagogically natural than “build the (sophisticated) machine, then apply it”?

First of all, the machine is inscrutable before it is applied. Think about being handed all the tiny parts of a sophisticated robot, along with assembly instructions, but given no sense of how the whole thing is supposed to function once it’s put together. And then trying to follow the instructions. This is what it’s like to learn sophisticated math ideas machinery-first, application-later. I felt this way this spring in learning the idea of Brouwer degree in my topology class. Now that everything is put together, I have a strong desire to go back to the beginning and do the whole thing again knowing what the end goal is. The ideas felt so airy and insubstantial the first time through. I never felt grounded.

Secondly, the quick solution that is powered by the sophisticated tools loses something if it’s not coupled with some experience working on the same problem with less sophisticated tools. The aesthetic delight that professors take in the short and elegant solution of the erstwhile-difficult problem comes from an intimacy with this difficulty that the student skips if she just learns the power tools and then zaps it. Likewise, if the goal is to gain insight into the problem, the short, turbo-powered solution often feels very illuminating to someone (say, the professor) who knows the long, ugly solution, but like pure magic, and therefore not illuminating at all, to someone (say, a student) who doesn’t know any other way. There is something tenuous and spindly about knowing a high-powered solution only.

Here I can cite my own experience with Gauss’s Lemma, the subject of this post. I remember reading the proof in Artin a year ago and being satisfied at the time, but I also remember being unable to recall this proof (even though it’s so simple! maybe because it’s so simple!) several months later. You read it, it works, it’s like poof! done! It’s almost like a sharp thin needle that passes right through your brain without leaving any effect. (Eeew… sorry that was gross.) The process of working through Gauss’ original proof, and then working through how the proofs are so closely related, has made my understanding of Artin’s proof far deeper and my appreciation of its elegance far stronger. Before, all I saw was a cute short argument that made something true. I now see in it the mess that it is elegantly cleaning up.

I’ve had a different form of the same experience as I fight my way through Galois’ paper. (I am working through the translation found in Appendix I of Harold Edwards’ 1984 book Galois Theory. This is a great way to do it because if at any point you are totally lost about what Galois means, you can usually dig through the book and find out what Edwards thinks he means.) I previously learned a modern treatment of Galois theory (essentially the one found in Nathan Jacobson’s Basic Algebra I – what a ridiculous title from the point of view of a high school teacher!). When I learned it, I “followed” everything but I knew my understanding was not where I wanted it to be. Here the words “spindly” and “tenuous” come to mind again. The arguments were built one on top of another till I was looking at a tall machine with a lot of firepower at the very top but supported by a series of moving parts I didn’t have a lot of faith in.

An easy mark for Ewoks, and I knew it.

This version of Galois theory was all based on concepts like fields, automorphisms, vector spaces, separable and normal extensions, of which Galois himself had access to none. The process of fighting through Galois’ original development of his theory and trying to understand how it is related to what I learned before has been slowly filling out and reinforcing the lower regions of this structure for me. Coupling the sophisticated with the less sophisticated approach has given the entire edifice some solidity.

Thirdly, and this is something I feel like I hear folks (Shawn Cornally, Dan Meyer, Alison Blank, etc.) talk about a lot, but it bears repeating:

If you attack a problem with the tools you have, and either you can’t solve it, or you can solve it but your solution is messy and ugly, like Gauss’s solution above (if I may), then you have a reason to want better tools. Furthermore, the way in which your tools are failing you, or in which they are being inefficient, may be a hint to you for how the better tools need to look.

Just as an example, think about how awesome reduction mod p is going to seem if you are already fighting (as Gauss did) with a whole bunch of adding stuff up some of which is divisible by p and some of which is not. What if you could treat everything divisible by p as zero and then summarily forget about it? How convenient would that be?

I want to bring this back to the K-12 level so let me give one other illustration. A major goal of 7th-9th grade math education in NY (and elsewhere) is getting kids to be able to solve all manner of single-variable linear equations. The basic tool here is “doing the same thing to both sides.” (As in, dividing both sides of the equation by 2, or subtracting 2x from both sides…) For the kids this is a powerful and sophisticated tool, one that takes a lot of work to fully understand, because it involves the extremely deep idea that you can change an equation without changing the information it is giving you.

There is no reason to bring out this tool in order to have the kiddies solve x+7=10. It’s even unnatural for solving 4x-21=55. Both of these problems are susceptible to much less abstract methods, such as “working backwards.” The “both sides” tool is not naturally motivated until the variable appears on both sides of the equation. I used to let students solve 4x-21=55 whatever way made sense to them, but then I would try to impose on them the understanding that what they had “really done” was to add 21 to both sides and then divide both sides by 4, so that later, when I gave them equations with variables on both sides, they’d be ready. This was weak because I was working against the natural pedagogical flow. They didn’t need the new tool yet because I hadn’t given them problems that brought them to the limitations of the old tool. Instead, I just tried to force them to reimagine what they’d already been doing in a way that felt unnatural to them. Please, if a student answers your question and can give you any mathematically sound reason, no matter how elementary, accept it! If you would like them to do something fancier, try to give them a problem that forces them to.
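To make the contrast concrete, here is a minimal sketch (the representation and names are my own, not from any curriculum) in which each side of a linear equation is stored as a pair (coefficient of x, constant) and every step is literally an operation applied to both sides:

```python
from fractions import Fraction

def solve_both_sides(left, right):
    """Solve a*x + b = c*x + d by "doing the same thing to both sides"."""
    a, b = left    # left side:  a*x + b
    c, d = right   # right side: c*x + d
    # Subtract c*x from both sides: (a - c)x + b = d
    a, c = a - c, 0
    # Subtract b from both sides: (a - c)x = d - b
    d, b = d - b, 0
    # Divide both sides by the coefficient of x
    return Fraction(d, a)

# "Working backwards" handles this one just fine: 4x - 21 = 55
print(solve_both_sides((4, -21), (0, 55)))   # 19
# But with x on both sides, the tool earns its keep: 5x + 7 = 2x + 19
print(solve_both_sides((5, 7), (2, 19)))     # 4
```

The pedagogical point is visible in the two calls: the first equation never needed this machinery, while the second can’t be “worked backwards” past the x on the right side.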

Basically this whole post adds up to an excuse to show you some cool historical math and a plea for due respect to be given to unsophisticated solutions. There is no rush to the big fancy general tools (except the rush imposed by our various dark overlords). They are learned better, and appreciated better, if students, teachers, mathematicians first get to try out the tools we already have on the problems the fancy tools will eventually help us answer. It worked for Gauss.

[1] This is the substance of the proof given in Artin but I actually edited it a bit to make it (hopefully) more accessible. Artin talks about the ring homomorphism \mathbb{Z}[x] \longrightarrow \mathbb{F}_p[x] and the images of P and Q (he calls them f and g) under this homomorphism.
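To illustrate the footnote concretely, here is a small sketch (my own code, not Artin’s) checking the homomorphism property: reducing coefficients mod p and then multiplying agrees with multiplying and then reducing. This is exactly what licenses “treating everything divisible by p as zero”:

```python
def poly_mul(P, Q):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    R = [0] * (len(P) + len(Q) - 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            R[i + j] += a * b
    return R

def reduce_mod(P, p):
    """The map Z[x] -> F_p[x]: reduce each coefficient mod p."""
    return [a % p for a in P]

p = 5
P = [7, -3, 2]   # 2x^2 - 3x + 7
Q = [4, 10, 1]   # x^2 + 10x + 4
lhs = reduce_mod(poly_mul(P, Q), p)          # multiply, then reduce
rhs = poly_mul(reduce_mod(P, p), reduce_mod(Q, p))  # reduce, then multiply
assert lhs == reduce_mod(rhs, p)
```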

ADDENDUM 8/10/11:

I recently bumped into a beautiful quote from Hermann Weyl that I had read before (in Bob and Ellen Kaplan’s Out of the Labyrinth, p. 157) and forgotten. It is entirely germane.

Beautiful general concepts do not drop out of the sky. To begin with, there are definite, concrete problems, with all their undivided complexity, and these must be conquered by individuals relying on brute force. Only then come the axiomatizers and systematizers and conclude that instead of straining to break in the door and bloodying one’s hands one should have first constructed a magic key of such and such a shape and then the door would have opened quietly, as if by itself. But they can construct the key only because the successful breakthrough enables them to study the lock front and back, from the outside and from the inside. Before we can generalize, formalize and axiomatize there must be mathematical substance.

The History of Algebra, part I: Negative Numbers

This is the post I promised over a month ago on two landmark books in the history of algebra:

Kitab al-Jabr wa-l-Muqabala, aka The Compendium on Calculating by Completion and Reduction
by Muhammad ibn Musa al-Khwarizmi


Ars Magna, aka The Great Art, or The Rules of Algebra
by Girolamo Cardano

A lot can be and has been said about these books. I’m going to zero in on one particular story they tell:

Take-home lesson #1: the mathematical world’s understanding of negative numbers came incredibly slowly, in very gradual stages. We tend to treat learning about negatives like there’s just one big idea to understand. Really, there are like twenty.

Reading these books has given me more respect than ever for the depth of the process we ask kids to go through between sixth and ninth grade as they get comfortable working with negatives.

Take-home lesson #2: In the process of understanding a new and difficult idea, the ability to understand and use the idea to answer a question comes way before the ability to pose a question about the idea. So, it makes sense to get very comfortable with -2 as the answer to 5-7 before ever asking yourself to add -2 to something.

Take-home lesson #3: The development of algebra is an important motivator, historically anyway, for the development of negatives.

Pedagogical idea: How can we use this historical motivation to develop negatives with students?
a) Al-Khwarizmi’s book contains a very limited idea of negativeness: that which has been subtracted. But since he is thinking about how to multiply, for example, an unknown with 2 subtracted by the same unknown with 3 subtracted, he needs to see that, once everything has been distributed, the product of the subtracted 2 and the subtracted 3 contributes an added 6 to the total. It is not immediately obvious how this becomes a classroom activity, but I think it definitely can. 20*20 is 400; how does taking away 2 from one of the factors and 3 from the other affect the product? Get kids thinking hard about this and it would support the most contrivance-free explanation for why (neg)(neg)=(pos) that I have ever seen.
b) Allowing the coefficients of equations to be negative significantly cleaned up the theory of equations. Our students know more about negatives than the inventors of algebra did. It might be really exciting and powerful, increasing their appreciation for both negatives and quadratics, to show or let them develop the original (negative-impaired) theory of quadratics, and then have them use negatives to clean it up.
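To make the arithmetic in (a) explicit: starting from 20\cdot 20 = 400, taking 2 away from one factor and 3 from the other gives

(20-2)(20-3) = 400 - 20\cdot 3 - 2\cdot 20 + 2\cdot 3 = 400 - 60 - 40 + 6 = 306,

and indeed 18 \cdot 17 = 306. The subtracted 2 and the subtracted 3 each remove something from the product, but their own product comes back in as an added 6, which is why the answer is 306 rather than 300.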

* * * * *

The first of these books was written in Arabic, and published around 820 in Baghdad. 820. Just to make sure you didn’t miss that. The translation I read is 180 years old. The full text of it is available online.

What I am even more anxious to make sure you didn’t miss are certain bits of the author’s name and the book title. The author is Muhammad Ibn Musa Al-Khwarizmi. (Muhammad, son of Musa, from Khwarizm.) He is often referred to just as Al-Khwarizmi. This is the origin of the English word “algorithm.”

And as if that weren’t awesome enough, the “completion” in the title is the Arabic word “al-jabr.” This is the origin of the English word “algebra.”

The second book was written in Latin and published in 1545 in Renaissance Italy. I read a 1968 translation by T. Richard Witmer. I can’t find it online, but in case you read Latin, here is a pdf of the original. Its distinction historically is that it was the first publication of a general method for solving what we would now call cubic and quartic equations. (Cardano attributes the solution for one class of cubics to Niccolo Tartaglia and Scipione del Ferro, the generalization of this solution to other classes of cubics to himself, and the solution of quartics to his student Lodovico Ferrari.)

Both these books have been widely written about. I was reading them in the hopes of learning how these mathematical breakthroughs were understood in their own day. My original intent was to use this information to help me design the group theory course I am teaching. We are getting into the Galois theory of equations. The modern treatment of this subject, which is what I learned, doesn’t feel to me like it could serve as the basis of a natural and meaningful development for people who don’t already know it. (The way I learned it, which is how almost everybody learns it in this day and age, was the opposite of “natural” or “meaningful.” Very cool, but only in an after-the-fact sort of a way. Like you came to the show at the very end and only saw the climactic scene, and everybody in the audience gasped and shrieked, except you because you didn’t care about any of the people in the show because you just walked in one second ago. And then after it was over your friend explained to you what had been going on and you understood why the other audience members cared, and were kind of mad you hadn’t gotten to watch the rest of the play first. If that analogy made any sense.) My idea was, let me study how the theory of polynomial equations developed over time; then I’ll be able to put the class in the place of the developers of the theory, and so the insights will come about naturally, and make sense, and be compelling, against the backdrop of the questions they were designed to answer. Learning the historical context would be pedagogically fertile.

As it happened, I overshot the historical mark a bit – for the purposes of the class, the mindframes of these two books are unnecessarily archaic. There is real pedagogical fertility here, but it’s around ideas that the participants in my class (who are teachers and mathematicians) already understand.

On the other hand, I’ve taught plenty of students who don’t understand them. In particular, I found myself surprised and intrigued by what each book did and did not say about negative numbers. I felt like I was watching this idea (the negative) coalesce and congeal, roughly and haltingly, over time. Like a churning mixture of hard crystal clarity and murky goo. If I may.

Though separated by 700 years, both books find it necessary to give three different quadratic formulas. Because, you see, you need a different method to solve

x^2 + 10x = 20

than to solve

x^2 = 10x + 20

or

x^2 + 20 = 10x.
(Actually, this notation is anachronistic. Neither author uses anything resembling modern notation. Muhammad Ibn Musa writes everything in prose. For the first of these equations, for instance, he would write, “A square and ten roots equal 20 dirhems.” Dirhems are an Arabic unit of currency.)

We think of there being only one quadratic formula because we are comfortable moving everything to the left; the only difference a modern reader can see between these equations is a difference in the signs of the coefficients:

x^2 + 10x - 20 = 0

x^2 - 10x + 20 = 0

etc. And all the equations can be solved exactly the same way. But for neither of these authors had the idea of negativeness grown adequately supple to make this possible.
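Just to drive the point home with a quick sketch (my own code, purely illustrative): once the coefficients are allowed to be negative, a single procedure handles all three of the medieval forms.

```python
import math

def quadratic_roots(b, c):
    """Real roots of x^2 + b*x + c = 0: one formula for every case."""
    disc = b * b - 4 * c
    if disc < 0:
        return ()
    r = math.sqrt(disc)
    return ((-b - r) / 2, (-b + r) / 2)

# x^2 + 10x = 20   becomes  x^2 + 10x - 20 = 0
# x^2 = 10x + 20   becomes  x^2 - 10x - 20 = 0
# x^2 + 20 = 10x   becomes  x^2 - 10x + 20 = 0
for b, c in [(10, -20), (-10, -20), (-10, 20)]:
    print(quadratic_roots(b, c))
```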

Since Cardano presents the full algebraic solution to cubic equations, the situation is even more extreme in Ars Magna. Each of the following gets its own chapter:
“On the cube and first power equal to the number”
“On the cube equal to the first power and number”

“On the cube, first power, and number equal to the square”
“On the cube, square, and number equal to the first power”
These are the first two and last two in a sequence of 13 chapters. This is over 20% of the book. Not only does each equation type get its own method of solution, each method gets its own (geometric) proof.

Histories of mathematics often mention the situation I’ve described here. For example, Mactutor’s history of quadratic, cubic and quartic equations says something like “the different types arise because Al-Khwarizmi had no zero or negatives.” This is the story I’d gotten before I picked up the originals, and what I found out is that it’s not true.

Both books calculate comfortably with something translated as “negative numbers”. Ars Magna goes so far as to contain a calculation with imaginaries. But the scope of the idea of negativeness is limited, in a different way, in each book. And I think I learned something important about how people come to understand negative numbers by taking note of these limitations.

In Muhammad Ibn Musa’s work, a “negative” is a number that’s been subtracted from another number. That’s it; that’s all it is. But this is enough to justify all the rules of arithmetic with negatives that we teach middle schoolers, because Muhammad makes use of all of them:

If there are greater numbers combined with units to be added to or subtracted from them, then four multiplications are necessary; namely, the greater numbers by the greater numbers, the greater numbers by the units, the units by the greater numbers, and the units by the units.

He is talking about FOIL in case that wasn’t clear.

If the units, combined with the greater numbers, are positive, then the last multiplication is positive; if they are both negative, then the fourth multiplication is likewise positive. But if one of them is positive, and one negative, then the fourth multiplication is negative.

This is on pp. 21-22. Elsewhere, he fluently adds and subtracts these “negative” (i.e. subtracted) quantities. For example, on p. 27,

The root of two hundred, minus ten, subtracted from twenty minus the root of two hundred, is thirty minus twice the root of two hundred; twice the root of two hundred is the root of eight hundred.

In other words,

20 - \sqrt{200} - \left(\sqrt{200}-10\right) = 30 - 2\sqrt{200}

My point is that Ibn Musa’s use of the idea of negativeness is so limited in scope that the word “negative” might even be sort of a mistranslation to a modern reader; however, this limited-scope idea fully supports all the rules of arithmetic we teach.

Cardano’s understanding of negativeness is much broader. For example, in the first chapter of the book, he explicitly discusses the possibility that a negative number might satisfy an equation. But throughout, his dealings with negatives are marked by a kind of choppiness, an inconsistency. Firstly, he refers to negative solutions to equations as “false” or “fictitious” (as opposed to “true”). Then, once he gets into the nitty-gritty of solving equations, he pretty much stops mentioning them. For example, in chapter 8 he says “it is evident that when the middle power is equal to the highest power and the constant, there are necessarily two solutions…” We would say there are three (one negative), and Cardano would have acknowledged this third solution in chapter 1.

What Cardano virtually never does with negatives (the one exception is below) is treat them like they can be coefficients. Solutions, but not coefficients: i.e. negative numbers can be the answer to a question I asked, but they can’t be the language in which the question is posed. Most of the time, the idea of working with negative coefficients appears simply not to occur to him. On one occasion, the spectre is invoked only to be dismissed (for reasons that are opaque to me). Cardano is discussing positive and negative solutions to equations in which a power equals a certain number. (I.e. solvable by the simple extraction of one root.)

It is always presumed in this case, of course, that the number to which the power is equated is true and not fictitious. To doubt this would be as silly as to doubt the fundamental rule itself for, though opposite reasoning must be observed in opposite cases, the reasoning is still the same. p. 11


The point that I am making is that if Cardano is any example, negatives are much easier to get your head around as an answer than as part of the question. Allowing coefficients to be negative would have caused a massive increase in the efficiency of the theory: as noted above, Cardano gave separate solutions for thirteen forms of cubic equations. With negative coefficients, these thirteen cases are reduced to two: quadratic term zero vs. nonzero. I don’t know when this cleaning-up of the theory actually took place historically. Avital Oliver, whom I mentioned in my last post, told me that noticing how much negative coefficients would simplify the theory of equations was a major reason, historically, that negative numbers gained acceptance as numbers. That makes sense to me.

The one moment in the book where the idea of a negative number is entertained as part of the statement of a problem is in the absolutely fascinating chapter 37, On the Rule for Postulating a Negative:

This rule is threefold, for one either assumes a negative, or seeks a negative square root, or seeks what is not. p. 217

Cardano is being highly speculative here. He seems to think maybe the entire chapter he’s writing is crazy talk. He begins by considering equations with negative solutions. Even though he already spent chapter 1 talking about negative solutions, he feels the need to justify them here. He notes that

x^2 = 4x + 32

and

x^2 = x + 20

don’t appear to have a common solution, since 8 solves the first while 5 solves the second. However, the “turned-around” equations

x^2 + 4x = 32

and

x^2 + x = 20

do have a common solution, namely 4. In chapter 1, Cardano asserted that a quadratic and its “turnaround” have opposite solutions: a “true” (positive) solution for one is a “fictitious” (negative) solution, equal in magnitude, for the other. So here, the original pair of equations have a common solution after all: -4. Cardano seems to feel (and I kind of relate) that the presence of the common positive solution between the turned-around equations and the formal relationship between the turned-around pair and the original pair means there ought to be a common solution for the original pair; the fact that this common solution turns out to exist if you allow negative solutions is then a reason to believe in negative solutions.
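The arithmetic here is quick to check (a three-line sketch, nothing more):

```python
def solves(x):
    """Does x satisfy x^2 = 4x + 32 and x^2 = x + 20, respectively?"""
    return (x**2 == 4*x + 32, x**2 == x + 20)

print(solves(8))    # (True, False)
print(solves(5))    # (False, True)
print(solves(-4))   # (True, True): the common solution Cardano is after
```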

Anyway, he follows with two problems about the property of a man named Francis. The problems are totally contrived but they lead to negative solutions for Francis’ property, which he interprets as meaning that Francis has debt. Tellingly, though, he sets up the equations letting -x be Francis’ property, so that the equations he actually solves have positive solutions.

Then, he poses a problem that has no positive or negative solution: divide 10 into two parts whose product is 40. He follows the procedure he uses on comparable problems with real solutions (e.g. divide 10 into two parts whose product is 21): “… it is clear that this case is impossible. Nevertheless, we will work thus:…” (p. 219). The procedure forces him to subtract 40 from 25 and then take the square root of this. He already seems dubious about the subtraction 25-40:

The square root of the remainder, then - if anything remains - added to or subtracted from [five] shows the parts. But since such a remainder is negative, you will have to imagine \sqrt{-15}. p. 219

Note the “if anything remains.” So this “square root of a negative” business is a bunch of new hooey built on something that might be hooey to begin with. In that context it almost feels like what we’d now call imaginaries (and what Cardano calls “the sophistic negative”) are only a comparatively small speculative step beyond the craziness of negative numbers in the first place. The whole chapter has this I-know-this-is-complete-madness-but-I’m-just-gonna-do-it tone. A famous passage:

... you will have that which you seek, namely 5 + \sqrt{25-40} and 5 - \sqrt{25-40}, or 5 + \sqrt{-15} and 5 - \sqrt{-15}. Putting aside the mental tortures involved, multiply 5 + \sqrt{-15} and 5 - \sqrt{-15}, making 25 - (-15) which is +15. Hence this product is 40... So progresses arithmetic subtlety the end of which, as is said, is as refined as it is useless. p. 219-220

(As above, the notation here is anachronistic; but the translation I read modernized all Cardano’s notation for ease of reading.)
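In modern terms, the computation that tortured Cardano is a quick check with complex numbers (Python’s framing, obviously, not his):

```python
import cmath

# The two "parts of 10" whose product is 40
a = 5 + cmath.sqrt(-15)
b = 5 - cmath.sqrt(-15)
print(a + b)         # the parts sum to 10
print((a * b).real)  # and their product is 40, just as Cardano found
```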

It is in this wildly speculative chapter that Cardano – for the only time in the book – suggests a problem posed in terms of negatives:

... If it be said, Divide 6 into two parts the product of which is 40, the problem is one of the sophistic negative... But if it is said, Divide 6 into two parts the product of which is -40, or divide -6 into two parts producing -40, in either case the problem will be one of the pure negative... and the parts will be those that have been given [10 and -4, or -10 and 4]. If it be said, Divide -6 into two parts the product of which is +24, the problem will be one of the sophistic negative. pp. 220-221

What am I getting at with all this? Well I can’t tell you what to think but I am left with a completely new sense of the natural contours of learning about negatives.

I taught Algebra I for a long time. My students entered the class having trouble both conceptually and computationally with negative numbers. I did my duty and explained their meaning and operation, along with lots of practice for the kiddies, early in the year. Having always been concerned with understanding, I looked for models of negatives that would support all the operations I wanted kids doing. I wanted the model to instantiate as much of the mathematical structure as possible. The school I taught at had a woodshop program, and I got them to build me a board with a flat surface with holes cut in it and wooden pucks to fill the holes, so that I could physically model 1 + -1 = 0 and people would physically see how a hole combined with a wooden puck to make a flat surface. Subtraction of negatives would become removing holes, and this clearly required adding pucks to the surface; thus subtraction of negatives is adding positives. The model required another layer of contrivance to support multiplication: I had to ask students to imagine standing upside down, on the other side of the surface, so the holes became pucks and the pucks could be imagined as holes; then 3*-4 could be 3 people with the normal point of view, each standing by 4 holes, while -3*4 was 3 upside down people each standing by what appeared to them as 4 pucks.

It didn’t work as the centerpiece of teaching about positives and negatives. The multiplication problems make the contrivance really obvious, but actually there’s a certain amount of contrivance even in how it models addition. If I combine some pucks and some holes, who says that the pucks need to fall into the holes? I made kids draw tons of pictures of the whole thing, which completely wore them out, and I don’t know how much it added to their understanding. Meanwhile, the model, as all models do, made problems bigger, clunkier. Subtracting -5 from -7 was no thing: just fill 5 holes. But subtracting (-5) from 1 was like a whole production. The kids either needed to create 5 holes by removing pucks from them (and retaining the pucks – why would you do either thing?) before adding 5 new ones to fill the holes, or they needed to make the intensely abstract and not-adequately-justified leap that because subtracting -5 amounted to adding +5 when you were subtracting from a negative, the same thing should be true when subtracting from a positive. Retrospectively the fact that I asked my kids to make this leap of faith and told myself that I was actually helping them understand how math makes sense is kind of embarrassing.

But the thing is, as models go, I’ll stand behind this one as one of the better ones. I’ve seen cuter models for multiplication, e.g. on the wall of the classroom of my first former student to become a math teacher (yes I am now old enough for that to happen):
Do you LOVE to LOVE? You’re a LOVER.
Do you LOVE to HATE? You’re a HATER.
Do you HATE to LOVE? You’re a HATER.
Do you HATE to HATE? You’re a LOVER.
But none of these cuter models supports addition or subtraction as well, and sometimes it’s hard to see that they are even related to multiplication. Meanwhile, the only model I’ve ever seen, besides mine, that supports all four operations is the IMP curriculum’s “hot and cold cubes.” And if you see the contrivance and unnaturalness in what I described above, “hot and cold cubes” is another level. Again, I think it’s kind of a brilliant model. But if you’ve ever tried to use it with low-skilled kids, you know how much production is involved in even getting them to imagine and buy into the scenario in the first place, let alone use all that machinery to solve problems.

It’s been a few years now that it’s seemed clear to me that the whole idea of teaching negatives through a particular model is not the way to go. People who use negatives effectively have gotten them down to a very slim abstract notion that supports all their operations and all their uses as representations of real things. (I would describe my own understanding with words like “opposite directionyness” – don’t laugh.) Teaching has to be aimed at this slim, efficient understanding as an end product. Forcing kids to engage with a whole clunky megilla of story and visual image every time they want to do a computation with negatives can’t possibly be the right path.

In more recent years I’ve found much more effective ways to teach negatives. I’ve been beginning by brainstorming with my students what negative numbers are actually, in the real world, used to represent. Not just debt, temperature and elevation. These aren’t enough. They capture the “below zeroiness” but not the “opposite directionyness,” since the positive direction is so fixed in each case. Also needed are examples of net change: gain or loss of money by a business; football yardage; etc. Furthermore, examples where negatives are used to specify direction in space or time: say uptown is positive; what would negative mean? What if east were positive? What if downtown were positive? If positive 3 means the space shuttle took off 3 seconds ago, what would -3 mean?

Using this conversation as groundwork has brought me much more success than the wooden board did, but there’s still something missing. For one thing, it’s hard to find convincing examples familiar to kids that support multiplication. (The one exception was a private tutoring student whose father was a stockbroker: short-selling a stock that goes down in price is a very natural (neg)(neg) = (pos).) But it’s more fundamental than that. I’ve still been starting from the question “what is a negative?” when the student’s only legitimate reason to believe negatives even exist is that school says so, and her only legitimate reason to care is that she’ll be accountable for an answer.

This question puts the cart before the horse. A corollary of that amazing conversation with Avital Oliver I described last time is that when I teach a new idea I want to cause it to be needed, or at least cause its presence to be felt, cause students to become aware of it in the room with them, before it is ever named. So “what is a negative?” is not ultimately my desired opener for teaching about negatives.

What I’m left with after reading Cardano and Muhammad Ibn Musa is the beginning of an idea, modeled on the history of the concept itself, for what could take its place. So, here’s a curriculum brainstorm. It spans a lot of years and doesn’t fit in with anybody’s state frameworks, so I hope you’ll forgive the impracticality. I’m just fantasizing.

First, laying the groundwork (inspired by Ibn Musa): When you do arithmetic, how does subtracting something from the numbers affect the answer? How does 20 + 10 change if I subtract 3 from the 10? (To focus attention on the key point, what does the subtracted 3 do to the answer?) How does 20 – 10 change? How does 20*10 change? How does 20*10 change if I subtract 4 from the 20? How about if I subtract 4 from the 20 and 3 from the 10? What if I add 4 to the 20 and subtract 3 from the 10? The point is to engage the students in sorting out all these questions. (Why would they care about these questions? That’s a whole other thing but I don’t think a very hard one, and it will depend on the group of students – but I’m sure given any set of folks we can find a context to make these questions compelling.) Note that there is no “new kind of number” here. Some 3’s are subtracted, some added, that’s all. We very gently call their attention to the “subtracted 3” as an object worth talking about, but they already know what we mean; there’s nothing new to learn. I think this sorting-out is going to attune students’ antennae to the frequency in the universe on which negative numbers live.
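As a concrete sample of this sorting-out (the particular numbers are just illustrations):

```python
# How does subtracting something from one of the numbers change each answer?
print(20 + 10, 20 + (10 - 3))        # 30 27: the subtracted 3 lowers the sum by 3
print(20 - 10, 20 - (10 - 3))        # 10 13: the subtracted 3 raises the difference by 3
print(20 * 10, 20 * (10 - 3))        # 200 140: the subtracted 3 removes 3 twenties
print(20 * 10, (20 - 4) * (10 - 3))  # 200 112: minus 4 tens, minus 3 twenties, plus 12 back
```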

Much later, once negatives come into play, stay respectful of the fact that they make sense as answers more easily than they make sense as questions. What number could you add to 7 and get 4? (No number! Even if you add nothing, it’s still 7.) If you could add something, what would that thing be like? In other words, bring forth the idea of negativeness as the answer to questions. (Perhaps your earlier “subtracted 3” will be what they come up with; perhaps not.) Do a lot and a lot and a lot of this, before ever asking anybody anything about negatives.

Later still, it will be time to develop equation solving intently. The way we do this in Algebra now, we build in the necessity for the methods to generalize to negative coefficients. Instead, start it earlier and use Muhammad Ibn Musa-type problems. Let them develop techniques that feel most natural to them. (From lots of classroom experience, I can tell you that these will not be methods that generalize to negative coefficients.) Allow problems with negative solutions to creep in, but not negative coefficients. Negative numbers and their operations are becoming familiar, but still let the students do what’s comfortable in the realm of equation solving. Increase the sophistication of the equations; develop the solution of one of the three forms of the quadratic (what number can I multiply by itself, and then add 6 of itself, to get 40?). Pose problems in the other forms as well though. Finally, as a last act, lead them to the fact that allowing coefficients to be negative unifies all three cases of the quadratic into one and they can use a single method on all problems. How useful! Negatives are now official.

I would really love to do this with an out-of-school math circle of youngish kids or mathphobic adults. I need to get on that.

* * * * *

Two tidbits from these books that didn’t fit in with the main lines of thought above. There’s lots more where these came from but as usual I’ve already OD-ed so I have to draw the line somewhere.

a) Muhammad Ibn Musa gives a beautiful, though not rigorous, justification for the circle area formula that I’ve never seen before. He expresses the circle’s area as half its circumference times half its diameter. He explains that this is true because any regular polygon has an area equal to half its circumference times half the diameter of the inscribed circle. (Draw lines from the center to every vertex, and think about the areas of the triangles you get, to see that this is true.)
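In modern notation, his expression is exactly the familiar formula: half the circumference times half the diameter is

\frac{C}{2} \cdot \frac{d}{2} = \pi r \cdot r = \pi r^2.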

b) Cardano says something really darling about the solution of the cubic, that I just found delightful and have to share:

In our own days Scipione del Ferro of Bologna has solved the case of the cube and first power equal to a constant, a very elegant and admirable accomplishment. Since this art surpasses all human subtlety and the perspicuity of mortal talent and is a truly celestial gift and a very clear test of the capacity of men's minds, whoever applies himself to it will believe that there is nothing that he cannot understand. p. 8
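Since this post keeps circling back to that solution without ever stating it, here is a modern-notation sketch of del Ferro’s case, the cube and first power equal to the number, i.e. x^3 + px = q with p and q positive (my own code; the formula itself is the standard one attributed to del Ferro and Tartaglia):

```python
import math

def cbrt(v):
    """Real cube root, handling negative inputs."""
    return math.copysign(abs(v) ** (1 / 3), v)

def del_ferro(p, q):
    """The real solution of x^3 + p*x = q for p, q > 0."""
    s = math.sqrt((q / 2) ** 2 + (p / 3) ** 3)
    return cbrt(q / 2 + s) + cbrt(q / 2 - s)

# The classic example of this type: "cube and six things equal to 20",
# i.e. x^3 + 6x = 20, which has the solution x = 2.
print(del_ferro(6, 20))
```

(The output is 2 up to floating-point error.)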