My work on the AMS Teaching & Learning Blog

I don’t know why I didn’t think to tell you this earlier, but: in 2019 I joined the editorial board of the American Mathematical Society’s Teaching & Learning Blog, and I’ve written several pieces for it. I’m extremely proud of each of these, and would like to share them with you.

  • Some thoughts about epsilon and delta (August 19, 2019) is a deep dive on student difficulties with a notoriously challenging definition from calculus. I got pretty scholarly and read a bunch of research for it, but the core of the post is a discussion of challenges faced by specific learners I’ve known, one of whom is my own self. I also include a brief history of this definition.
  • The things in proofs are weird: a note on student difficulties (May 20, 2020) is a meditation on the nature of the objects we use in proofs, and the difficulties students have in getting used to working with objects with this strange nature. I again got pretty scholarly and read a bunch of research. Nonetheless, it includes an extended riff on Abbott and Costello’s Who’s on First?
  • A K-pop dance routine and the false dilemma of concept vs. procedure (August 18, 2020) is a… ok let me back up. People used to fight about whether conceptual or procedural knowledge was more important. I think we’ve more or less reached a place in the public conversation about math teaching where there’s an official public consensus that conceptual and procedural knowledge are both important and are mutually supportive. But just because we all can say these words doesn’t mean we’ve necessarily fully reconciled the impulses behind that older fight. For example, in spite of firm intellectual conviction that this view is correct, I have a bias toward the conceptual in my teaching, in the sense that I have a strong tendency to assume any student difficulty is rooted in a conceptual difficulty. This bias is really useful a lot of the time, but sometimes it can lead me to misdiagnose what a student needs to move forward. Anyway, so one day I was learning a BLACKPINK dance and the learning experience just really eloquently illustrated both the advantages and disadvantages of that exact bias. Hopefully you’re intrigued!
  • The rapid expansion of online instruction, occasioned by the pandemic, has forced academia to contend with the limits of the control that its usual physical setup allows it to exercise over students’ movements and choices. One place this manifests very clearly is in the setting of timed tests, which are historically proctored in person. Remote proctoring: a failed experiment in control (January 19, 2021) is my heartfelt contribution to the pushback against the Orwellian trend of turning to “remote proctoring” (where the student is surveilled in their home during tests) to try to claw back the lost control, rather than accepting that the game has changed and rethinking assessment from the ground up, as the situation demands.
  • Three foundational theorems of elementary school math (November 22, 2021) could have been titled, “The logical structure of elementary school math is actually extremely beautiful and intricate, and I want everyone to pay more attention to this.” It’s a love letter to three closely related facts from elementary school math that I think often don’t get their due, making the case that they deserve to be thought of as theorems. I discuss proofs (including some relevant student work) and connections. (If any long-time readers of this blog are still here in 2021, this post is a distant but direct descendant of this post I wrote nearly 12 years ago, when I was a baby blogger.)

I also solicited a piece from Michael Pershan, which I am also extremely proud of:

  • What math professors and k-12 teachers think of each other (November 18, 2019) is Michael’s synthesis of and meditation on an informal survey he ran, canvassing math educators teaching in schools and universities about what they think about the differences in the shape of math education at these different levels. Michael’s characteristic thoughtfulness is on full display here, and it’s all with an eye toward how we can collaborate effectively. I love it.

Pershan’s Essay on Cognitive Load Theory

Just a note to point you to Michael Pershan’s motherf*cking gorgeous essay on the history of cognitive load theory, centered on its trailblazer, John Sweller.

Read it now.

I’m serious.

I tend to think of Sweller as, like, “that *sshole who thinks he can prove that it’s bad for learning if you think hard.”

On the other hand, any thoughtful teacher with any experience has seen students get overwhelmed by the demands of a problem and lose the forest for the trees, so you know that he’s talking about a real thing.

Michael has just tied it together for me, tracing how Sweller’s point of view was born and evolved, what imperatives it comes from, other researchers who take cognitive load theory in related and different directions, where their imperatives come from, and how Sweller’s relationship to these other directions has evolved as well. I have more empathy for him now, a better sense of his stance, and a better sense of why I see things so differently.

Probably the biggest surprise for me was seeing the connection between Sweller’s point of view on learning, and the imperatives he is beholden to as a scientist. I get so annoyed at the limited scope of his theory of learning, but apparently he defends this choice of scope on the grounds that it supports the scientific rigor of the work. I understand why he sees it that way.

The remaining confusion I have is why the Sweller of Michael’s account, ultimately so clear on the limited scope of his work (“not a theory of everything”) and the methodological reasons for this limited scope, nonetheless seems to feel so empowered to use it to talk about what is happening in schools and colleges. (See this for an example.) Relatedly, I’m having trouble reconciling this careful scientific-methodology-motivated scope limitation with Sweller’s stated goal (as quoted by Michael) to support the creation of new instructional techniques. The problem I’m having is this:

Is his real interest in supporting the work of the classroom or isn’t it?

If it is, well, then this squares both with the fact that he says it is, and that he’s so willing to jump into debates about instructional design as it is implemented in real classrooms. But it doesn’t square with rigorously limiting the scope of his theory, entirely avoiding conversations about obviously-relevant factors like motivation and productive difficulty, which he says he’s doing for reasons of scientific rigor, as in this quote:

Here is a brief history of germane cognitive load. The concept was introduced into CLT to indicate that we can devise instructional procedures that increase cognitive load by increasing what students learn. The problem was that the research literature immediately filled up with articles introducing new instructional procedures that worked and so were claimed to be due to germane cognitive load. That meant that all experimental results could be explained by CLT rendering the theory unfalsifiable. The simple solution that I use now is to never explain a result as being due to factors unrelated to working memory.

On the other hand, if his interest is purely in science, in mapping The Truth about the small part of the learning picture he’s chosen to focus on, then why does he claim he’s doing it all for the sake of instruction, and why does he feel he has something to say about the way instructional paradigms are playing out inside live classrooms?

Michael, help me out?

Lessons from Bowen and Darryl

At the JMM this year, I had the pleasure of attending a minicourse on “Designing and Implementing a Problem-Based Mathematics Course” taught by Bowen Kerins and Darryl Yong, the masterminds behind the legendary PCMI teachers’ program Developing Mathematics course, with a significant assist from Mary Pilgrim of Colorado State University.

I’ve been wanting to get a live taste of Bowen and Darryl’s work since at least 2010, when Jesse Johnson, Sam Shah, and Kate Nowak all came back from PCMI saying things like “that was the best math learning experience I’ve ever had,” and I started to have a look at those gorgeous problem sets. It was clear to me that they had done a lot of deep thinking about many of the central concerns of my own teaching. How to empower learners to get somewhere powerful and prespecified without cognitive theft. How to construct a learning experience that encourages learners to savor, to delectate. That simultaneously attends lovingly to the most and least empowered students in the room. &c.

I want to record here some new ideas I learned from Bowen and Darryl’s workshop. This list is not exhaustive, but I wanted to record these ideas both for my own benefit and in the hope that they’ll be useful to others. In the interest of keeping it short, I won’t talk about things I already knew about (such as their Important Stuff / Interesting Stuff / Tough Stuff distinction) even though they are awesome, and I’ll keep my own thoughts to a minimum. Here’s what I’ve got for you today:

1) The biggest takeaway for me was how exceedingly careful they are with people talking to the whole room. First of all, in classes that are 2 hours a day, full group discussions are always 10 minutes or less. Secondly, when students are talking to the room it is always students that Bowen and Darryl have preselected to present a specific idea they have already thought about. They never ask for hands, and they never cold-call. This means they already know more or less what the students are going to say. Thirdly, they have a distinction between students who try to burn through the work (“speed demons”) and students who work slowly enough to receive the gifts each question has to offer (“katamari,” because they pick things up as they roll along) – and the students who are asked to present an idea to the class are only katamari! Fourthly, a group discussion is only ever about a problem that everybody has already had a chance to think about – and even then, never about a problem for which everybody has come to the same conclusion the same way. Fifthly, in terms of selecting which ideas to have students present to the class, they concentrate on ideas that are nonstandard, or particularly visual, or both (rather than standard and/or algebraic).

This is for a number of reasons. First of all, the PCMI Developing Mathematics course has something like 70 participants. So part of it is the logistics of teaching such a large course. You lose control of the direction of ideas in the class very quickly if you let people start talking and don’t already know what they’re going to say. (Bowen: “you let them start just saying what’s on their mind, you die.”) But there are several other reasons as well, stemming (as I understood it anyway) from two fundamental questions: (a) for the people in the room who are listening, what purpose is being served / how well are their time and attention being used? and (b) what will the effect of listening to [whoever is addressing the room] be on participants’ sense of inclusion vs. exclusion, empowerment vs. disempowerment? Bowen and Darryl want somebody listening to a presentation to be able to engage it fluently (so it has to be about something they’ve already thought about) and to get something worthwhile out of it (so it can’t be about a problem everybody did the same way). And they want everybody listening to feel part of it, invited in, not excluded – which means that you can’t give anybody an opportunity to be too high-powered in front of everybody. (Bowen: “The students who want to share their super-powerful ideas need a place in the course to do that. We’ve found it’s best to have them do that individually, to you, when no one else can hear.”)

2) Closely related. Bowen talked at great length about the danger of people hearing somebody else say something they don’t understand or haven’t heard of and thinking, “I guess I can’t fully participate because I don’t know that idea or can’t follow that person.” It was clear that every aspect of the class was designed with this in mind. The control they exercise over what gets said to the whole room is one aspect of this. Another is the norm-setting they do. (Have a look at page 1 of this problem set for a sense of these norms.) Another is the way they structure the groups. (Never have a group that’s predominantly speed-demons with one or two katamari. If you have more speed-demons than katamari, you need some groups to be 100% speed demon.)

While this concern resonates with me (and I’m sure everybody who’s ever taught, esp. a highly heterogeneous group), I had not named it before, and I think I want to follow Bowen and Darryl’s lead in incorporating it more essentially into planning. In the past, I think my inclination has been to intervene after the fact when somebody says something that I think will make other people feel shut out of the knowledge. (“So-and-so is talking about such-and-such but you don’t need to know what they’re talking about in order to think about this.”) But then I’m only addressing the most obvious / loud instances of this dynamic, and even then, only once much of the damage has already been done. The point is that the damage is usually exceedingly quiet – only in the mind of somebody disempowering him or herself. You can’t count on yourself to spot this, you have to plan prophylactically.

3) Designing the problem sets specifically with groupwork in mind, Bowen and Darryl look for problems that encourage productive collaboration. For example, problems that are arduous to do by yourself but interesting to collaborate on. Or, problems that literally require collaboration in order to complete (such as the classic one of having students attempt to create fake coin-flip data, then generate real data, trade, and try to guess other students’ real vs. fake data).
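As an aside, the real-vs-fake coin-flip task works because handwritten “random” data has statistical tells. One classic tell is the longest run of identical flips; here’s a minimal sketch of that idea (my own illustration, not the workshop’s materials):

```python
import random

def longest_run(flips):
    """Length of the longest block of consecutive identical flips."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# 100 real fair-coin flips almost always contain a run of 5 or more;
# hand-faked sequences usually avoid long runs -- that's the giveaway.
real = [random.choice("HT") for _ in range(100)]
print(longest_run(real))
```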

4) And maybe my single favorite idea from the presentation was this: “If a student has a cool idea that you would like to have them present, consider instead incorporating that idea into the next day’s problem set.” I asked for an example, and Bowen mentioned the classic about summing the numbers from 1 to n. Many students solved the problem using the Gauss trick, but some students solved the problem with a more visual approach. Bowen and Darryl wanted everybody to see this and to have an opportunity to connect it to their own solution, but rather than have anybody present, they put a problem on the next day’s problem set asking for the area of a staircase diagram, using some of the same numbers that had been asked about the day before in the more traditional 1 + … + n form.
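For readers who want the two solutions side by side: the Gauss pairing gives 1 + 2 + … + n = n(n+1)/2, and the staircase picture counts the same total as half of an n × (n+1) rectangle. A quick numerical check (my own sketch, not from the PCMI sets):

```python
def staircase_area(n):
    """Count the unit squares in a staircase with rows of width 1, 2, ..., n."""
    return sum(range(1, n + 1))

def gauss(n):
    """Gauss's pairing trick: two copies of the staircase fill an n-by-(n+1) rectangle."""
    return n * (n + 1) // 2

# The visual count and the pairing formula always agree.
for n in [1, 5, 100]:
    assert staircase_area(n) == gauss(n)

print(gauss(100))  # 5050
```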

I hope some of these ideas are useful to you. I’d love to muse on how I might make use of them but I’m making myself stop. Discussion more than welcome in the comments though.

Uhm sayin

Dan Meyer’s most recent post is about how in order to motivate proof you need doubt.

This is something I was repeatedly and inchoately hollering about five years ago.

As usual I’m grateful for Dan’s cultivated ability to land the point cleanly and actionably. Looking at my writing from 5 years ago – it’s some of my best stuff! totally follow those links! – but it’s long and heady, and not easy to extract the action plan. So, thanks Dan, for giving this point (which I really care about) wings.

I have one thing to add to Dan’s post! Nothing I haven’t said before but let’s see if I can make it pithy so it can fly too.

Dan writes that an approach to proof that cultivates doubt has several advantages:

  1. It motivates proof
  2. It lowers the threshold for participation in the proof act
  3. It allows students to familiarize themselves with the vocabulary of proof and the act of proving
  4. It makes proving easier

I think it makes proving not only easier but way, way easier, and I have something to say about how.

Legitimate uncertainty and the internal compass for rigor

Anybody who has ever tried to teach proof knows that the work of novice provers on problems of the form “prove X” is often spectacularly, shockingly illogical. The intermediate steps don’t follow from the givens, don’t imply the desired conclusion, and don’t relate to each other.

I believe this happens for an extremely simple reason. And it’s not that the kids are dumb.

It happens because the students’ work is unrelated to their own sense of the truth! You told them to prove X given Y. To them, X and Y look about equally true. Especially since the problem setup literally informed them that both are true. Everything else in sight looks about equally true too.

There is no gradient of confidence anywhere. Thus they have no purchase on the geography of the truth. They are in a flat, featureless wilderness where all the directions look the same, and they have no compass. So they wander in haphazard zigzags! What the eff else can they do??

The situation is utterly different if there is any legitimate uncertainty in the room. Legitimate uncertainty is an amazing, magical, powerful force in a math classroom. When you don’t know and really want to know, directions of inquiry automatically get magnetized for you along gradients of confidence. You naturally take stock of what you know and use it to probe what you don’t know.

I call this the internal compass for rigor.

Everybody’s got one. The thing that distinguishes experienced provers is that we have spent a lot of time sensitizing ours and using it to guide us around the landscape of the truth, to the point where we can even feel it giving us a validity readout on logical arguments relating to things we already believe more or less completely. (This is why “prove X” is a productive type of exercise for a strong college math major or a graduate student, and why mathematicians agree that the twin prime conjecture hasn’t been proven yet even though everybody believes it.)

But novice provers don’t know how to feel that subtle tug yet. If you say “prove X” you are settling the truth question for them, and thereby severing their access to their internal compass for rigor.

Fortunately, the internal compass is capable of a much more powerful pull, and that’s when it’s actually giving you a readout on what to believe. Everybody can and does feel this pull. As soon as there’s something you don’t know and want to know, you feel it.

This means that often it’s enough merely to generate some legitimate mathematical uncertainty in the students, and some curiosity about it, and then just watch and wait. With maybe a couple judicious and well-thought-out hints at the ready if needed. But if the students resolve this legitimate uncertainty for themselves, well, then, they have probably more or less proven something. All you have to do is interview them about why they believe what they’ve concluded and you will hear something that sounds very much like a proof.

A Critical Language for Problem Design

I am at the Joint Mathematics Meetings this week. I had a conversation yesterday, with Cody L. Patterson, Yvonne Lai, and Aaron Hill, that was very exciting to me. Cody was proposing the development of what he called a “critical language of task design.”

This is an awesome idea.

But first, what does he mean?

He means giving (frankly, catchy) names to important attributes, types, and design principles, of mathematical tasks. I can best elucidate by example. Here are two words that Cody has coined in this connection, along with his definitions and illustrative examples.

Jamming – transitive verb. Posing a mathematical task in which the underlying concepts are essential, but the procedure cannot be used (e.g., due to insufficient information).

Example: you are teaching calculus. Your students have gotten good at differentiating polynomials using the power rule, but you have a sinking suspicion they have forgotten what the derivative is even really about. You give them a table like this

x       f(x)
4       16
4.01    16.240901
4.1     18.491

and then ask for a reasonable estimate of f'(4). You are jamming the power rule because you’re giving them a problem that aims at the concept underlying the derivative and that cannot be solved with the power rule.
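To make the intended reasoning concrete: the estimate the table invites is a difference quotient. A minimal sketch in Python, with the table’s values hard-coded:

```python
# Estimate f'(4) from tabulated values of f -- no formula for f needed,
# so the power rule is jammed. Values taken from the table above.
table = {4: 16, 4.01: 16.240901, 4.1: 18.491}

def difference_quotient(x0, x1):
    """Slope of the secant line through (x0, f(x0)) and (x1, f(x1))."""
    return (table[x1] - table[x0]) / (x1 - x0)

# The closer x1 is to 4, the better the estimate of f'(4).
print(difference_quotient(4, 4.1))    # coarser estimate, about 24.91
print(difference_quotient(4, 4.01))   # finer estimate, about 24.09
```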

Thwarting – transitive verb. Posing a mathematical task in which mindless execution of the procedure is possible but likely to lead to a wrong answer.

Example: you are teaching area of simple plane figures. Your students have gotten good at area of parallelogram = base * height but you feel like they’re just going through the motions. You give them this parallelogram:
[Figure: a parallelogram labeled with the lengths 9 and 41.]

Of course they all try to find the area as 9 × 41. You are thwarting the thoughtless use of base * height because it gets the wrong answer in this case.
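The trap is easy to see numerically. Since the figure’s exact dimensions aren’t reproduced here, the sketch below just assumes a parallelogram with base 41 and a slant side of 9 leaning at 60°; the point is only that base × slant side overshoots base × height:

```python
import math

# Hypothetical parallelogram: base 41 along the x-axis, slant side of
# length 9 leaning at 60 degrees from the base (illustrative numbers only).
base, slant, angle = 41, 9, math.radians(60)

height = slant * math.sin(angle)  # the true height is shorter than the slant side
true_area = base * height         # base * height, about 319.6
naive_area = base * slant         # mindless base * "other labeled length" = 369

print(naive_area - true_area)     # the overshoot the task is designed to expose
```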

Why am I so into this? These are just two words, naming things that all teachers have probably done in some form or another without their ever having been named. They describe only a very tiny fraction of good tasks. What’s the big deal?

It’s that these words are a tiny beginning. We’re talking about a whole language of task design. I’m imagining having a conversation with a fellow educator, and having access to hundreds of different pedagogically powerful ideas like these, neatly packaged in catchy usable words. “I see you’re thwarting the quadratic formula pretty hard here, so I’m wondering if you want to balance it out with some splitting / smooshing / etc.” (I have no idea what those would mean but you get the idea.)

I have no doubt that a thoughtful, extensive and shared vocabulary of this kind would elevate our profession. It would be a concrete vehicle for the transmission and development of our shared expertise in designing mathematical experiences.

This notion has some antecedents.[1] First, there are the passes at articulating what makes a problem pedagogically valuable. On the math blogosphere, see discussions by Avery Pickford, Breedeen Murray, and Michael Pershan. (Edit 1/21: I knew Dan had one of these too.) I also would like to believe that there is a well-developed discussion on this topic in academic print journals, although I am unaware of it. (A google search turned up this methodologically odd but interesting-seeming article about biomed students. Is it the tip of the iceberg? Is anyone reading this acquainted with the relevant literature?)

Also, I know a few other actual words that fit into the category “specialized vocabulary to discuss math tasks and problems.” I forget where I first ran into the word problematic in this context – possibly in the work of Cathy Twomey-Fosnot and Math in the City – but that’s a great word. It means that the problem feels authentic and vital; the opposite of contrived. I also forget where I first heard the word grabby (synonymous with Pershan’s hooky, and not far from how Dan uses perplexing) to describe a math problem – maybe from the lips of Justin Lanier? But once you know it, it’s pretty indispensable. Jo Boaler, by way of Dan Meyer, has given us the equally indispensable pseudocontext. So, the ball is already rolling.

When Cody shared his ideas, Yvonne and I speculated that the folks responsible for the PCMI problem sets – Bowen Kerins and Darryl Yong, and their friends at the EDC – have some sort of internal shared vocabulary of problem design, since they are masters. They were giving a talk today, so I went, and asked this question. It wasn’t really the setting to get into it, but superficially it sounded like yes. For starters, the PCMI’s problem sets (if you are not familiar with them, click through the link above – you will not be sorry) all contain problems labeled important, neat and tough. “Important” means accessible, and also at the center of connections to many other problems. Darryl talked about the importance of making sure the “important” problems have a “low threshold, high ceiling” (a phrase I know I’ve heard before – anyone know where that comes from?). He said that Bowen talks about “arcs,” roughly meaning mathematical themes that run through the problem sets, but I wanted to hear much more about that. Bowen, are you reading this? What else can you tell us?

Most of these words share with Cody’s coinages the quality of being catchy / natural-language-feeling. They are not jargony. In other words, they are inclusive rather than exclusive.[2] It is possible for me to imagine that they could become a shared vocabulary of our whole profession.

So now what I really want to ultimately happen is for a whole bunch of people (Cody, Yvonne, Bowen, you, me…) to put in some serious work and to write a book called A Critical Language for Mathematical Problem Design, that catalogues, organizes and elucidates a large and supple vocabulary to describe the design of mathematical problems and tasks. To get this out of the completely-idle-fantasy stage, can we do a little brainstorming in the comments? Let’s get a proof of concept going. What other concepts for thinking about task design can you describe and (jargonlessly) name?

I’m casting the net wide here. Cody’s “jamming” and “thwarting” are verbs describing ways that problems can interrupt the rote application of methods. “Problematic” and “grabby” are ways of describing desirable features of problems, while “pseudocontext” is a way to describe negative features. Bowen and Darryl’s “important/neat/tough” are ways to conceptualize a problem’s role in a whole problem set / course of instruction. I’m looking for any word that you could use, in any way, when discussing the design of math tasks. Got anything for me?

[1]In fairness, for all I know, somebody has written a book entitled A Critical Language for Mathematical Task Design. I doubt it, but just in case, feel free to get me a copy for my birthday.

[2]I am taking a perhaps-undeserved dig here at a number of in-many-ways-wonderful curriculum and instructional design initiatives that have a lot of rich and deep thought about pedagogy behind them but have really jargony names, such as Understanding by Design and Cognitively Guided Instruction. (To prove that an instructional design paradigm does not have to be jargony, consider Three-Acts.) I feel a bit ungenerous with this criticism, but I can’t completely shake the feeling that jargony names are a kind of exclusion: if you really wanted everybody to use your ideas, you would have given them a name you could imagine everybody saying.

Wherein This Blog Serves Its Original Function

The original inspiration for starting this blog was the following:

I read research articles and other writing on math education (and education more generally) when I can. I had been fantasizing (back in fall 2009) about keeping an annotated bibliography of articles I read, to defeat the feeling that I couldn’t remember what was in them a few months later. However, this is one of those virtuous side projects that I never seemed to get to. I had also met Kate Nowak and Jesse Johnson at a conference that summer, and due to Kate’s inspiration, Jesse had started blogging. The two ideas came together and clicked: I could keep my annotated bibliography as a blog, and then it would be more exciting and motivating.

That’s how I started, but while I’ve occasionally engaged in lengthy explication and analysis of a single piece of writing, this blog has never really been an annotated bibliography. EXCEPT FOR RIGHT THIS VERY SECOND. HA! Take THAT, Mr. Things-Never-Go-According-To-Plan Monster!

“Opportunities to Learn Reasoning and Proof in High School Mathematics Textbooks”, by Denisse R. Thompson, Sharon L. Senk, and Gwendolyn J. Johnson, published in the Journal for Research in Mathematics Education, Vol. 43 No. 3, May 2012, pp. 253-295

The authors looked at HS level textbooks from six series (Key Curriculum Press; Core Plus; UCSMP; and divisions of the major publishers Holt, Glencoe, and Prentice-Hall) and analyzed the lessons and problem sets from the point of view of “what are the opportunities to learn about proof?” To keep the project manageable they just looked at Alg. 1, Alg. 2 and Precalc books and focused on the lessons on exponents, logarithms and polynomials.

They cast the net wide, looking for any “proof-related reasoning,” not just actual proofs. For lessons, they were looking for any justification of stated results: either an actual proof, or a specific example that illustrated the method of the general argument, or an opportunity for students to fill in the argument. For exercise sets, in addition to problems asking students to actually develop an argument, they counted problems that asked students to make or investigate a conjecture, evaluate an argument, or find a mistake in an argument.

In spite of this wide net, they found that:

* In the exposition, proof-related reasoning is common but lack of justification is equally common: across the textbook series, 40% of the mathematical assertions about the chosen topics were made without any form of justification.

* In the exercises, proof-related reasoning was exceedingly rare: across the textbook series, less than 6% of exercises involved any proof-related reasoning. Only 3% involved actually making or evaluating an argument.

* Core Plus had the greatest percentage of exercises with opportunities for students to develop an argument (7.5%), and also to engage in proof-related reasoning more generally (14.7%). Glencoe had the least (1.7% and 3.5% respectively). Key Curriculum Press had the greatest percentage of exercises with opportunities for students to make a conjecture (6.0%). Holt had the least (1.2%).

The authors conclude that mainstream curricular materials do not reflect the pride of place given to reasoning and proof in the education research literature and in curricular mandates.

“Expert and Novice Approaches to Reading Mathematical Proofs”, by Matthew Inglis and Lara Alcock, published in the Journal for Research in Mathematics Education, Vol. 43 No. 4, July 2012, pp. 358-390

The authors had groups of undergraduates and research mathematicians read several short proofs of elementary theorems, typed up in the style of student work, and decide whether the proofs were valid. They tracked the participants’ eye movements to see where their attention was directed.

They found:

* The mathematicians did not have uniform agreement on the validity of the proofs. Some of the proofs had a clear mistake and then the mathematicians did agree, but others were more ambiguous. (The proofs that were used are in an appendix in the article so you can have a look for yourself if you have JSTOR or whatever.) The authors are interested in using this result to challenge the conventional wisdom that mathematicians have a strong shared standard for judging proofs. I am sympathetic to the project of recognizing the way that proof reading depends on context, but found this argument a little irritating. The proofs used by the authors look like student work: the sequence of ideas isn’t being communicated clearly. So the participants weren’t evaluating just the validity of a sequence of ideas; they were also evaluating the success of an imperfect attempt to communicate that sequence. Maybe this distinction is ultimately unsupportable, but I think it has to be acknowledged in order to give the idea that mathematicians have high levels of agreement about proofs its due. Nobody who espouses this really thinks that mathematicians are likely to agree on what counts as clear communication. Somehow the sequence of ideas has to be separated from the attempt to communicate it if this idea is to be legitimately tested.

* The undergraduates spent a higher percentage of the time looking at the formulas in the proofs and a lower percentage of time looking at the text, as compared with the mathematicians. The authors argue that this is not fully explained by the hypothesis that the students had more trouble processing the formulas, since the undergrads spent only slightly more time total on them. The mathematicians spent substantially more time on the text. The authors speculate that the students were not paying as much attention to the logic of the arguments, and that this pattern accounts for some of the notorious difficulty that students have in determining the validity of proofs.

* The mathematicians moved their focus back and forth between consecutive lines of the proofs more frequently than the undergrads did. The authors suggest that the mathematicians were doing this to try to infer the “implicit warrant” that justified the second line from the first.

The authors are also interested in arguing that mathematicians’ introspective descriptions of their proof-validation behavior are not reliable. Their evidence is that previous research (Weber, 2008: “How mathematicians determine if an argument is a valid proof”, JRME 39, pp. 431–459) based on introspective descriptions of mathematicians found that mathematicians begin by reading quickly through a proof to get the overall structure before going into the details; however, according to their eye data, none of the mathematicians in the present study did this. One of them stated, in her informal debrief after the study, that she does this, but her eye data didn’t indicate that she did it here. Again, I’m sympathetic to the project of shaking up conventional wisdom; there is lots of research in other fields to suggest that experts are not generally expert at describing their expert behavior, and I think it’s great when we (mathematicians or anyone else) have it pointed out to us that we aren’t right about everything. But I don’t feel the authors have quite got the smoking gun they claim to have. As they acknowledge in the study, the proofs they used are all really short. These aren’t the proofs on which to test the quick-read-through hypothesis.

The authors conclude by suggesting that when attempting to teach students how to read proofs, it might be useful to explicitly teach them to mimic the major difference found between novices and experts in the study: in particular, the idea is to teach them to ask themselves if a “warrant” is required to get from one line to the next, to try to come up with one if it is, and then to evaluate it. This idea seems interesting to me, especially in any class where students are expected to read a text containing proofs. (The authors are also calling for research that tests the efficacy of this idea.)

The authors also suggest ways that proof-writing could be changed to make it easier for non-experts to determine validity. They suggest (a) reducing the amount of symbolism, to prevent students from being distracted by it, and (b) making the between-line warrants more explicit. These ideas strike me as ridiculous. Texts already differ dramatically with respect to (a) and (b); there is no systemic platform from which to influence proof-writing anyway; and in any case, as the authors rightly note, there are also costs to both, so the sweet spot in the text/symbolism balance isn’t at all clear, and neither is the implicit/explicit balance. Maybe I’m being mean.

Dispatches from the Learning Lab: Partial Understanding

So here’s another one that I suppose is kind of obvious, but nonetheless feels like big, important news to me:

It’s possible to only partly understand what somebody else is saying.

Let me be more specific. When you’re explaining something to me, it’s possible for me to get some idea from it in a clear way, to the point where my understanding registers on my face, while for the other 7 ideas you were describing, I have no idea what you’re talking about.

<Example>

I am a 9th grader in your Algebra I class. You’re teaching me about linear functions. You are explaining to the class how to find the y-intercept of a linear function in slope-intercept form, given that the slope is 4 and the point (6,11) lies on the line. You explain that the equation has the form y=mx+b and that, because we know the point (6,11) is on the line, this point satisfies the equation. Thus you write

11=4\cdot 6+b

on the board. At this point I recognize that we are trying to find b and that we have an easy single-variable linear equation to solve. My face lights up and you take mental note of my engagement. Maybe you even ask for the y-intercept, and since I recognize that this must be b I calculate 11-24 = -13 and raise my hand.
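Written out, the little solve I just did, ending with the equation of the line (which never gets stated in the scene):

```latex
11 = 4\cdot 6 + b = 24 + b
\quad\Longrightarrow\quad
b = 11 - 24 = -13,
\qquad\text{so}\qquad
y = 4x - 13.
```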

Meanwhile, I have only the vaguest sense of the meaning of the phrase “y-intercept.” I have literally no understanding of why I should expect the equation to have the form y=mx+b. I have had a nagging feeling of dissatisfaction ever since you substituted (6,11) into the equation, because I thought x and y were supposed to be the variables but now it looks like b is the variable. Most importantly, I do not understand that the presence of the point on the line implies that its coordinates satisfy the equation of the line, and conversely, because on a very basic level I don’t understand what the graph of the function is a picture of. This has been bothering me ever since we started the unit, when you had me plug a bunch of x values into some equations and obtain corresponding y values, graph them, and then draw a solid line connecting the three or four points. Why am I drawing these lines? What are they pictures of?

Occasionally, I’ve asked a question aimed at getting clarity on some of these basic points. “How did you know to put the 6 and 11 into the equation?” But because I can’t be articulate about what I don’t understand, since I don’t understand it, and you can’t hear what I’m missing in my questions because the theory is complete and whole in your mind, these attempts come to the same unsatisfying conclusion every time. You explain again; I frown; you explain a different way; I say, “I don’t understand.” You, I, and everyone else grow uncomfortable as the impasse continues. Eventually, you offer some thought that has something in it for me to latch onto, just as I latched onto solving for b before. Just to dispel the tension and let you get on with your job, I say, “Ah! Yes, I understand.”

</Example>

This example is my attempt to translate a few experiences I’ve had this semester into the setting of high school. The behavior of the student in that last paragraph was typical of me in these situations, though it would be atypical from a high school student, drawing as it does on the resources of my adulthood and educator background to self-advocate, to tolerate awkwardness, even to be aware that my understanding was incomplete. Still, often enough I ended up copping out as the student does above, understanding one of the 8 things that were going on, and latching onto it just so I could allow myself, the teacher and the class to move on gracefully. Conversations with other students indicated that my sense of incomplete understanding was entirely typical, even if my self-advocacy was not.

The take-home lesson is twofold. Point one is about the limitations of explaining as a method of teaching. Point two is about the limitations of trusting your students’ (verbal or implied) response to your (verbal or implied) question, Do you understand?

The basic answer (as you can tell from the example) is, No, I don’t.

Now I myself love explaining and have done a great deal of it as a teacher. I fancy myself an extremely clear and articulate explainer. But it couldn’t be more abundantly clear, from this side of the desk, how limited is the experience of being explained to. I mean, actually it’s a great, key, important way to learn, but only in small doses and when I’m ready for it, when the groundwork for what you have to say has been properly set.

I am somewhat chastened by this. I am thinking back self-consciously to times when I’ve explained my students’ ears off rather than, in the immortal words of Shawn Cornally, “lay off and let them fucking think for a second.” It’s like I was too taken with the clarity and beauty of the formulation I was offering, or in too much of a hurry to let them work through what they had to work through, or in all likelihood both, to see that more words weren’t going to do any good. Beyond this, I’m thinking back on the faith I’ve put in my ability to read students’ level of understanding from their faces. I maintain that I’m way better at this than my professors, but I don’t think I’ve had enough respect for how you can understand a small part of something and have that feel like a big enough deal to say, and mean, “Oh I get it.” Or to understand a tiny part of something and use that as cover for not understanding the rest.

Dispatches from the Learning Lab: Inauthentic Agreement

Here’s another one. It should be quick.

When a student says, “Is it like this?” or the equivalent, I used to err on the side of “yes”: even if I wasn’t sure exactly what they were saying, I would agree if it sounded like it might make sense. I think this was partly a function of the fact that I adopted a generally encouraging posture (this is my personality but also a deliberate choice), but the “yes” itself was just sort of my reflexive response from within that posture (not a deliberate choice).

It never felt quite right, so over time I trained myself instead to say things like, “I can’t understand what you’re saying but I think you might be onto something, but I’m not sure.” I never had concrete evidence that my original response was doing something unhelpful though.

Now I do. In a recent conversation with one of my teachers, several times I said, “Let me explain back to you what I think you’re saying, and you tell me if it’s right…” And he said, “yes yes yes it’s like…” But I didn’t recognize my attempted explanation in what he seemed to be saying yes to. So, it’s official: this is TOTALLY UNHELPFUL. I’m disoriented; that’s why I asked the question. Unless I come away from your answer feeling sure that you understood me, your “yes” only serves to make me more disoriented.

Take-home lesson: never say “yes” unless you are sure you have understood fully what the student is saying, and agree with it. As I’ve often discussed before, sometimes a “yes” is inappropriate even then; for example, if there’s a danger that the student is trying to foist onto you the work of judging for her or himself. But if you have any doubt, then the “yes” is definitely inappropriate: the encouragement is fake, and the student is left just as unsure as before, having now also exhausted the resource of checking with you. Retrospectively, the only student who even feels good hearing the “yes” in this situation is the one who is playing a Clever Hans game, and in that case it does him or her the disservice of encouraging the game.

Dispatches from the Learning Lab: Why I Don’t Always Ask My Question

One of the many reasons I put myself in a math PhD program is that it is an intense, full-time laboratory in which to examine my own learning process, and my experience as a participant in math classrooms from the student side. I hope to record many lessons from this laboratory on this blog. Here is one.

As a teacher I have always strongly encouraged people to pipe up when they’re confused, whether working in groups or (especially) at the level of whole-class discussion. To encourage this, I do things like:

* I leave lots of wait time.
* I respond to questions (especially those expressing confusion) with enthusiasm when they are asked, and after they are discussed I point out concrete, specific ways in which the questions advanced the conversation.
* I give (very deeply felt) pep talks about the value of these questions.
* Sometimes I directly solicit questions from people whose faces make it seem like they have one.

I am behind all of these practices. However, in every class that I have taught, whether for students or teachers, including those that ran long enough for these practices to shape the culture, it has always seemed to me that participants are often not asking their questions. This has puzzled me a bit. I’ve generally responded by trying harder: leaving longer wait time, making more of a point to highlight the value of questions when they happen, giving more strident and frequent pep talks. This hasn’t resolved the matter.

Now I am not about to pronounce a new solution. But I have what for me is a very new insight. I imagine some readers of this blog will read it and be like, “Ben, I could have told you that.” I’m sure you could have, but this wouldn’t have helped me: retrospectively, students have told me it many, many times. But I didn’t get it till I felt it. This is the value of putting yourself in their position.

What I’ve realized since beginning graduate school is that I had an incomplete understanding of why students don’t ask questions. I believed that the only reason not to ask a question is the fear of looking dumb. My approach has been entirely aimed at ameliorating this fear and replacing it with the sense that questions are honored and their contribution is valued.

Now one of the great advantages of going to grad school as an adult, rather than going fresh out of college, is that I have very, very little fear of looking dumb. (In the immortal words of my friend Kiku Polk, you get your “f*ck you” at 30.) To all my early-20’s people: your 20’s will be wonderful but if you make sure you keep growing, your 30’s will be better.

And one of the great advantages of going to grad school after over a decade as a teacher, is that I have a strong commitment to asking my questions, stemming from the value that I know they have both for myself and the class.

Perhaps as a consequence, I found that in all four of my classes last semester, I asked more questions than anyone else in the room.

Be that as it may, I frequently didn’t ask my questions.

What’s up?

As it turns out (and now, okay, maybe this is Captain Obvious talking, but apropos of all of the above, somehow I’ve been overlooking it for a decade), not wanting to hold up class is its own reason not to ask questions! Maybe it’s a basic piece of our social programming. If things are going one way in a room of 20 or 30 people, it feels sort of painful to contemplate forcing them in another direction on your account. Especially if you’ve already done it once or twice, but even if not. And more so the further your question seems to be from what the people around you (especially the teacher) look like they want to talk about. All this is intensified if you’re not sure your question is going to come out perfectly articulate, not (necessarily or only) because of how this will make you look, but because you know that your interruption is going to take up more mental and social space if it has to start with a whole period of everybody just getting clear on what you’re even asking.

There is an added layer that it is often perceptible that the teacher desires for everyone to understand and appreciate what was just said as clearly as she or he understands and appreciates it. Last night I was in a lecture in which I was hyperaware of not always asking my questions, and part of the dynamic in that case was actually the professor’s enthusiasm about what he was saying! I did ask a number of questions, but one reason I didn’t ask more is that I sort of felt like I was crashing his party! My warm feelings toward this professor actually heightened this effect: messing up someone else’s flow is worse when it’s someone you like.

As I mentioned above, students have been trying to tell me this for years. I never got it, because on some level I always believed that the real problem was that they were afraid to look dumb. I remember a conversation with a particular student who was my advisee as well as my math student. When I pressed her on asking more questions in class, she said something to the effect of, “you know, you’re doing your thing up there, and I don’t want to get in the way.” I literally remember the voice in my head reinterpreting this as a lack of belief in herself. Now I think that that was part of it as well; but my response was all aimed at that, and so didn’t address the whole issue.

Now my process of figuring out how to operationalize this new insight in terms of teaching practice has only just begun, and one reason I am writing about this here is to invite you into this process. I am certainly NOT telling you to withhold your enthusiasm on the grounds that it might make kids not want to interrupt you with questions. Furthermore, evidently when I describe experiences from my graduate classes, I am describing a situation in which the measures you and I have been taking for years to encourage question-asking are mostly absent. I doubt most of my professors have even heard of wait time. Nonetheless, I am sure that this new point of view is fruitful in terms of actual practice. Below are my preliminary thoughts. Please comment.

If I want to really encourage question asking, what I have been doing (aimed at building a culture of question-asking) is necessary, but insufficient. It is also necessary to think about lesson structure with an eye to: how do I design the flow of this lesson so that (at least during significant parts where questions are likely to arise in students’ minds) asking their questions does not feel like an interruption? One model, which is valuable in other ways as well, is to have students’ questions be the desired product of a certain segment of class. For example, when the lesson arrives at a key idea, definition, or conclusion, ask students to turn to their neighbors and discuss the key idea and try to produce a question about it. Then have the pairs or groups report their questions. This way, the questions cannot be interruptions because they are explicitly the very thing that is supposed to be going on right then.

I like this idea but it has limited scope because it requires the point in the lesson at which the questions arise to be planned, and of course this can never contain all the questions I would want to have asked. Another thing to think about is the matter of momentum. I think my discussion of enthusiasm above really revolves around momentum. Enthusiasm generates momentum, but momentum is actually the thing that it hurts to get in the way of. Therefore I submit a second idea: the question of managing my/your own and the class’s momentum. Having forward momentum is obviously a big part of class being engaging, but perhaps it also suppresses spontaneous questions? Or under certain conditions it does?

(In a way this reminds me of a tension I am much more confident is essential to our profession: the tension between storytelling and avoidance of theft. I discussed a particular case of it in the fourth paragraph here. Momentum is aligned with storytelling: a good story generates momentum. Avoiding theft is aligned with inviting questions.)

A last thought: in a class of 20 or 30, having the class engage every question that pops into any student’s head at any time is obviously not desirable, though you might think I thought it was based on the above. The question is how to empower students to ask questions when we want them. I know that I, for one, have often known I wanted some questions so I could be responsive to them, and they weren’t forthcoming. The question is how to change this. Part of the answer is about the culture: valuing the questions, encouraging the risks, and making everyone feel safe. But it’s the other part, how to structurally support the questions, that’s the new inquiry for me. As I said above, please comment.