As you may recall, I’m teaching analysis to this class of teachers, developing the ε-δ limit. Two weeks ago I bewildered everybody. Last week and this week, I set out to bewilder everyone even further.
Let me say what I’m going for here. The ε-δ limit is a notoriously difficult definition.[1] How to scaffold my class to handle this difficulty? I am banking on the following strategy: make them need the definition. Make them unsatisfied with anything less. Continue poking holes in their current understanding, continue showing them inconsistencies between what they believe and the language they have to describe it, till they have no choice but to try to build something new. Then, let them try to build it. If they build the very thing I’m going for, rejoice. If they build something equally precise and powerful, rejoice. If they cannot build either (the most likely outcome, since the “right answer” took the world mathematical community 150 years to come up with), then it will still make powerful sense to them because it satisfactorily answers a question they were already engaged in trying to answer. That’s the plan anyway.
I will leave you with the two problem sets from the last class, and the readings and presentation from this one. I am very proud of the presentation. After that, I’ll write down one new thought for where to take this.
We engaged people’s attempts to define infinite decimals from the previous class, then abruptly shifted topics:
I let them work long enough so everyone got to do the first section of problems. My goals were:
1) Make participants recognize that they believe the speed of a moving object is something that exists in a particular moment of time.
2) Make them recognize that their naive definition of speed (distance / time) doesn’t actually handle this case.
3) Make them realize that we thus have a definitional problem similar to the one with repeating decimals.
We got this far. Then, with just 7 or so minutes left, I gave them another problem set:[2]
This problem set was designed to get somebody who has never studied calculus basically to take a simple derivative, to bring them into the conversation, and to refresh everyone else’s memory about the basic idea of derivatives. The last problem was on there just so that the calculus folks had a challenge available if they wanted it. Anyway, I had people finish the “Algebra Calisthenics” and “Speed” sections for homework.
This class, we began by engaging this homework, getting a feel for the standard calculus computation in which you identify the speed of an object in a moment as the value toward which average speeds seem to be headed as you look at smaller and smaller intervals. Then we began to press on what this really means.
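(The problem set itself isn’t reproduced here, so just to fix ideas with a stand-in example: for a hypothetical position function $s(t) = t^2$, the average speed from $t = 2$ to $t = 2 + h$ is
$$\frac{s(2+h) - s(2)}{h} = \frac{(2+h)^2 - 4}{h} = 4 + h,$$
and as $h$ shrinks, these average speeds look like they’re headed toward $4$, which is the number we want to call the speed at the moment $t = 2$.)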
I handed out a xerox of the scholium from the end of the first section of Book 1 of Newton’s Principia. (The last page of this pdf.) This is where Newton tries to explain what the hell he’s even talking about. I directed their attention to this telling sentence:
And in like manner, by the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities, not before they vanish, nor afterwards, but with which they vanish.
Then, I showed them the following presentation. Wanting to share this with you is the real reason for this blog post. I had a lot of fun making it.
What’sCalculusReallyDoing (as pdf)
What’sCalculusReallyDoing (as powerpoint)
Then I passed out a choice excerpt from the awesome criticism of early calculus by Bishop George Berkeley. (Specifically, section XIV.)
I asked for the connection between the definitional problem we have here and the definitional problem we had 2 classes ago regarding infinite decimals. (“They both involve getting closer and closer to something but never getting there.”) Then I asked them to try to come up with definitions to address these problems.
This is such a non-sequitur but here’s my one additional thought. I’ve been thinking about how to push participants to recognize a definition as unsatisfying. Tonight, reading Judith Grabiner’s 1983 essay in the AMM about Cauchy and the origins of the ε-δ limit (here it is as a pdf), I had an idea that is totally new to me. Retrospectively I think it’s sort of obvious, but I totally never thought of it before:
To get people to recognize that a definition is mathematically inadequate, have them try to use the definition, for example to prove something! In my case, all of them think that 1/3 = 0.333… Great. So, if we have a candidate definition of the meaning of limits or convergence, can we use it to prove 1/3 = 0.333…? If not, maybe we need a better definition.
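(To sketch what such a proof might look like under the standard definition, which is of course the destination rather than something the class has built: the $n$-digit truncation is $s_n = 0.\underbrace{33\cdots3}_{n\text{ threes}} = \sum_{k=1}^{n} 3\cdot 10^{-k}$, and
$$\left|\frac{1}{3} - s_n\right| = \frac{1}{3\cdot 10^{n}},$$
which is less than any given $\varepsilon > 0$ as soon as $n$ is big enough that $3\cdot 10^{n} > 1/\varepsilon$. A candidate definition that can’t support an argument of roughly this shape probably needs more work.)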
(I had this idea when I read Grabiner’s statement that though Cauchy gave the definition of the limit purely verbally and a bit vaguely, he translated it into the more rigorous language of inequalities when he actually started using it to prove theorems.)
[1] This is for at least 2 distinct (though related) reasons: first of all, it’s got three nested quantifiers. “For all ε, there exists a δ, such that for all x satisfying …” That just makes it inherently confusing. Secondly, it does not in any way psychologically resemble the intuitive image it is intended to capture. This is the definition of the limit. When I think of limits I have these beautiful visual images of little points getting closer to something. When I try to identify a limit, I just imagine the thing that they’re getting closer to. That’s the whole story. When I try to get rigorous, I replace this beautiful and simple image with three nested quantifiers. Yuck.
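For reference, the full statement being complained about: $\lim_{x \to a} f(x) = L$ means that for all $\varepsilon > 0$ there exists a $\delta > 0$ such that for all $x$ satisfying $0 < |x - a| < \delta$, we have $|f(x) - L| < \varepsilon$.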
[2] You will notice some interconnections in the sequence of problems. After a few good experiences with this last year and then hearing how much fun everyone had at PCMI, I am beginning to feel like these sequences of densely but subtly interconnected problems are really, really awesome. Constructing them is a deep art and I am a tiny apprentice. But you can get started humbly and still see payoff: it was certainly a cool moment today in class when we went over these problems and a number of folks who had worked out Speed problems #1-3 “the long way” realized that they could have applied their answer to Algebra Calisthenics #2 to do these three problems in moments in their heads.
I’m so inspired by you Ben. Can you take it if I tell you, just freaking great work? You rock.
Love it.
This whole thing is awesome but especially the presentation.
If you haven’t, check out _Everything and More_ by David Foster Wallace. He made me finally understand what the hell a limit is. This post reminded me of it because I remember being knocked on my butt when I learned we didn’t get the epsilon-delta definition until like 1846 or something.
Wrong limits. Lie about where the harmonic series converges. Or find something better (I have, just nothing at hand). Something whose actual limit is just below… (pi, or 10, or e^2).
Let delta and epsilon disprove something. “Can you get within one of your claimed limit?” “Yes [example]” “Can you get within one tenth?” “Yes, [example]” “Can you get within one one hundredth?” “Yes, [pause] um, hang on, well, no.”
Couldn’t that be a little memorable?
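(Concretely, one way that exchange could run: claim that $1/n$ heads to $1/100$. Within one? Every term is. Within a tenth? Yes, from some point on. Within a thousandth? No: the terms pile up near $0$, so eventually every one of them sits roughly $1/100$ away from the claimed limit, and the claim collapses.)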
>Lie about …
I need to learn constructive lying! ;^) I am convinced that my obsession with being truthful gets in the way of me becoming a better math teacher. I’m managing to mislead them once in a great while, but I still can’t flat out lie without my facial expression betraying me (the twinkle in my eye alerts those paying close attention).
@ Jesse – Thank you! Yes I can take it 😉
@ Kate – Had no idea DFW wrote a book about math! For me the book that blew my mind by explaining that the epsilon-delta definition was a mid-19th-century innovation was The Calculus Gallery by William Dunham. Which is an awesome book as well.
@ jd2718 – Awesome ideas, thanks!
@ Sue – Yeah, jd2718 is so impressive with his overt misdirection. I admire and aspire to that level of f*ck-with-them-ness. I have trouble flat out lying with a straight face as well. But I get a lot of mileage out of other, less radical ways of keeping the onus of figuring-out on the students. For example, a poker face; expressing doubt about right answers; etc.
Ben, I wonder what you might think of a nonstandard approach — and how an axiomatic IST approach might compare to defining infinitesimals via ultrafilters or the like (though surely the latter would be far beyond the sort of class you’re teaching).
I am not suggesting that such an approach is feasible–either in principle, or given your constraints. But I would love to hear any musings you might have had.
Okay, musings:
1) I don’t know enough about nonstandard analysis to teach it, but it’s on my to-do list to learn.
2) When in the past I’ve taught calculus, I’ve found it very helpful to acknowledge the usefulness of an informal idea of infinitesimals. A sort of shortcut that makes solving problems more intuitive. I actually think I got a lot of mileage out of having assignments that said things like “explain what a definite integral _really_ is; now explain the ‘fake but useful’ way of looking at what it is.”
(From a student’s paper in 2005: “A useful but untrue way of thinking of the meaning of the definite integral is to think of it as the sum of a set of infinitely thin rectangles – one “rectangle” for every possible point on the graph. Strictly speaking, this is impossible…”)
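In symbols, the contrast I’m drawing is roughly the honest definition
$$\int_a^b f(x)\,dx = \lim_{n\to\infty}\sum_{i=1}^{n} f(x_i^*)\,\Delta x, \qquad \Delta x = \frac{b-a}{n},$$
versus the fake-but-useful picture of adding up infinitely many quantities $f(x)\,dx$, one for each point $x$ of $[a,b]$.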
3) My understanding of nonstandard analysis is that the idea is to rescue infinitesimals from non-rigor limbo. This is awesome, but doesn’t strike me as pedagogically useful until a person has kind of trod the path that mathematics took to get there: first, you have the insight that if you imagine infinitely small numbers, or numbers at the “end” of infinite processes, then you can use this to solve exciting problems. Then you realize you’re not exactly sure what you mean when you identify these numbers. Then you develop epsilon and delta to make precise what you mean, and you are kind of bothered by the fact that it’s so easy and intuitive to work with infinitesimals but you have to resort to clunky epsilons and deltas when you try to prove things. Finally you bring all this more powerful machinery to bear to rescue that intuitive concept from its more naive but nonrigorous origins.
Since what little I know about nonstandard analysis all involves some pretty heavy lifting to make infinitesimals rigorous (e.g. ultrafilters! goodness sakes!), the oft-stated claim that it offers a way to put calculus’ original intuitive ideas on a rigorous basis strikes me as kind of hogwash. Yes, _infinitesimals_ are intuitive, but developing them as an axiom system preserving the first-order properties of reals is not, and constructing them as sequences of reals subjected to an equivalence relation and an order relation based on ultrafilters is definitely not. The hard-to-swallow machinery of epsilon and delta is being replaced with even-harder-to-swallow machinery. This development was exciting to mathematicians only because they’d already had 3 centuries of working with infinitesimals and loving them but knowing that they didn’t really make sense. All that work was worth it to rescue their friends. But this isn’t exciting to a student who isn’t a) good friends with infinitesimals, and b) sort of alienated by how distant epsilon and delta feel from them.
Take this with the grain of salt that I don’t really know the theory. To show you what I don’t know – can you explain to me why all the fuss with ultrafilters is necessary? Why doesn’t the following construction work?
Let *R be the field R(x) of rational functions over the reals. x will be a “positive infinitesimal.” Impose an order structure starting with the polynomials, based on thinking of x as a positive infinitesimal: the order will be lexicographic starting with the constant term. E.g. -5-x < -5 < -5+x < -x < 0 < x < 5-x < 5. To compare any rational functions, put them over a common denominator that is positive in the polynomial order, and then compare the numerators. E.g. compare 5 to 1/x: 5x/x vs. 1/x, and because 1 > 5x, this means 1/x > 5.
Does the order fail to be well-defined or something?
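To make the comparison rule concrete, here is a minimal sketch in code of the order as I’ve described it (polynomials stored as coefficient lists from the constant term up; this just mechanizes the rule, and says nothing about whether the construction does what nonstandard analysis needs):

```python
def poly_sign(coeffs):
    """Sign of a0 + a1*x + a2*x^2 + ... when x is a positive infinitesimal:
    the lowest-degree nonzero coefficient decides."""
    for c in coeffs:
        if c != 0:
            return 1 if c > 0 else -1
    return 0

def poly_sub(p, q):
    """Coefficient-wise difference p - q."""
    n = max(len(p), len(q))
    p, q = list(p) + [0] * (n - len(p)), list(q) + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Polynomial product."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def rat_less(f, g):
    """f < g, where f = (p, q) stands for p(x)/q(x): decide by the sign of
    g - f = (r*q - p*s)/(q*s) under the infinitesimal order on polynomials."""
    (p, q), (r, s) = f, g
    num = poly_sub(poly_mul(r, q), poly_mul(p, s))
    den = poly_mul(q, s)
    return poly_sign(num) * poly_sign(den) > 0

# 5 is ([5], [1]), x is ([0, 1], [1]), 1/x is ([1], [0, 1]).
print(rat_less(([0, 1], [1]), ([5], [1])))  # True: x < 5
print(rat_less(([5], [1]), ([1], [0, 1])))  # True: 5 < 1/x
```

Running it agrees with the example above (x < 5 and 5 < 1/x).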
At any rate, even this simple construction, in addition to probably being wrong in some way since I didn’t say anything about first-order properties or ultrafilters, _still_ strikes me as no less of a pill to swallow than epsilon-delta. The upshot is that I don’t see that the work to understand these alternative constructions will feel worth it until you already know and love infinitesimals, and know and can use epsilon-delta, and feel saddened by the former’s absence from the latter.
4) The ultimate goal of the current course is the Fundamental Theorem of Algebra. The main result out of real analysis we need for this is the Intermediate Value Theorem. To prove this theorem, we need a precise definition of continuity, and the completeness axiom. Since it’s not a course on calculus, I don’t need them to develop intuitive ideas to help them do calculus. What I am going for, for them, is a deeper understanding of the real line. So my purposes are really much more served by a conversation that stays internal to the real numbers.
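(For reference, the form of the theorem we need: if $f$ is continuous on $[a,b]$ and $f(a) < 0 < f(b)$, then $f(c) = 0$ for some $c$ in $(a,b)$; completeness is what guarantees such a $c$ exists.)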
Hope this answers the question!