Pershan’s Essay on Cognitive Load Theory

Just a note to point you to Michael Pershan’s motherf*cking gorgeous essay on the history of cognitive load theory, centered on its trailblazer, John Sweller.

Read it now.

I’m serious.

I tend to think of Sweller as, like, “that *sshole who thinks he can prove that it’s bad for learning if you think hard.”

On the other hand, any thoughtful teacher with any experience has seen students get overwhelmed by the demands of a problem and lose the forest for the trees, so you know that he’s talking about a real thing.

Michael has just tied it together for me, tracing how Sweller’s point of view was born and evolved, what imperatives it comes from, other researchers who take cognitive load theory in related and different directions, where their imperatives come from, and how Sweller’s relationship to these other directions has evolved as well. I have more empathy for him now, a better sense of his stance, and a better sense of why I see things so differently.

Probably the biggest surprise for me was seeing the connection between Sweller’s point of view on learning and the imperatives he is beholden to as a scientist. I get so annoyed at the limited scope of his theory of learning, but apparently he defends this choice of scope on the grounds that it supports the scientific rigor of the work. I understand why he sees it that way.

The remaining confusion I have is why the Sweller of Michael’s account, ultimately so clear on the limited scope of his work (“not a theory of everything”) and the methodological reasons for this limited scope, nonetheless seems to feel so empowered to use it to talk about what is happening in schools and colleges. (See this for an example.) Relatedly, I’m having trouble reconciling this careful scientific-methodology-motivated scope limitation with Sweller’s stated goal (as quoted by Michael) to support the creation of new instructional techniques. The problem I’m having is this:

Is his real interest in supporting the work of the classroom or isn’t it?

If it is, well, then this squares with both the fact that he says it is and the fact that he’s so willing to jump into debates about instructional design as it is implemented in real classrooms. But it doesn’t square with rigorously limiting the scope of his theory and entirely avoiding conversations about obviously relevant factors like motivation and productive difficulty – a limitation he says he imposes for reasons of scientific rigor, as in this quote:

Here is a brief history of germane cognitive load. The concept was introduced into CLT to indicate that we can devise instructional procedures that increase cognitive load by increasing what students learn. The problem was that the research literature immediately filled up with articles introducing new instructional procedures that worked and so were claimed to be due to germane cognitive load. That meant that all experimental results could be explained by CLT rendering the theory unfalsifiable. The simple solution that I use now is to never explain a result as being due to factors unrelated to working memory.

On the other hand, if his interest is purely in science, in mapping The Truth about the small part of the learning picture he’s chosen to focus on, then why does he claim he’s doing it all for the sake of instruction, and why does he feel he has something to say about the way instructional paradigms are playing out inside live classrooms?

Michael, help me out?


Lessons from Bowen and Darryl

At the JMM this year, I had the pleasure of attending a minicourse on “Designing and Implementing a Problem-Based Mathematics Course” taught by Bowen Kerins and Darryl Yong, the masterminds behind Developing Mathematics, the legendary course of the PCMI teachers’ program, with a significant assist from Mary Pilgrim of Colorado State University.

I’ve been wanting to get a live taste of Bowen and Darryl’s work since at least 2010, when Jesse Johnson, Sam Shah, and Kate Nowak all came back from PCMI saying things like “that was the best math learning experience I’ve ever had,” and I started to have a look at those gorgeous problem sets. It was clear to me that they had done a lot of deep thinking about many of the central concerns of my own teaching. How to empower learners to get somewhere powerful and prespecified without cognitive theft. How to construct a learning experience that encourages learners to savor, to delectate. That simultaneously attends lovingly to the most and least empowered students in the room. &c.

I want to record here some new ideas I learned from Bowen and Darryl’s workshop. The list is not exhaustive, but I wanted to record these ideas both for my own benefit and in the hopes that they’ll be useful to others. In the interest of keeping it short, I won’t talk about things I already knew about (such as their Important Stuff / Interesting Stuff / Tough Stuff distinction), even though they are awesome, and I’ll keep my own thoughts to a minimum. Here’s what I’ve got for you today:

1) The biggest takeaway for me was how exceedingly careful they are with people talking to the whole room. First of all, in classes that are 2 hours a day, full group discussions are always 10 minutes or less. Secondly, when students are talking to the room it is always students that Bowen and Darryl have preselected to present a specific idea they have already thought about. They never ask for hands, and they never cold-call. This means they already know more or less what the students are going to say. Thirdly, they have a distinction between students who try to burn through the work (“speed demons”) and students who work slowly enough to receive the gifts each question has to offer (“katamari,” because they pick things up as they roll along) – and the students who are asked to present an idea to the class are only katamari! Fourthly, a group discussion is only ever about a problem that everybody has already had a chance to think about – and even then, never about a problem for which everybody has come to the same conclusion the same way. Fifthly, in terms of selecting which ideas to have students present to the class, they concentrate on ideas that are nonstandard, or particularly visual, or both (rather than standard and/or algebraic).

This is for a number of reasons. First of all, the PCMI Developing Mathematics course has something like 70 participants. So part of it is the logistics of teaching such a large course. You lose control of the direction of ideas in the class very quickly if you let people start talking and don’t already know what they’re going to say. (Bowen: “you let them start just saying what’s on their mind, you die.”) But there are several other reasons as well, stemming (as I understood it anyway) from two fundamental questions: (a) for the people in the room who are listening, what purpose is being served / how well are their time and attention being used? and (b) what will the effect of listening to [whoever is addressing the room] be on participants’ sense of inclusion vs. exclusion, empowerment vs. disempowerment? Bowen and Darryl want somebody listening to a presentation to be able to engage it fluently (so it has to be about something they’ve already thought about) and to get something worthwhile out of it (so it can’t be about a problem everybody did the same way). And they want everybody listening to feel part of it, invited in, not excluded – which means that you can’t give anybody an opportunity to be too high-powered in front of everybody. (Bowen: “The students who want to share their super-powerful ideas need a place in the course to do that. We’ve found it’s best to have them do that individually, to you, when no one else can hear.”)

2) Closely related. Bowen talked at great length about the danger of people hearing somebody else say something they don’t understand or haven’t heard of and thinking, “I guess I can’t fully participate because I don’t know that idea or can’t follow that person.” It was clear that every aspect of the class was designed with this in mind. The control they exercise over what gets said to the whole room is one aspect of this. Another is the norm-setting they do. (Have a look at page 1 of this problem set for a sense of these norms.) Another is the way they structure the groups. (Never have a group that’s predominantly speed demons with one or two katamari. If you have more speed demons than katamari, you need some groups to be 100% speed demons.)

While this concern resonates with me (and, I’m sure, with everybody who’s ever taught, especially a highly heterogeneous group), I had not named it before, and I think I want to follow Bowen and Darryl’s lead in incorporating it more essentially into planning. In the past, my inclination has been to intervene after the fact when somebody says something that I think will make other people feel shut out of the knowledge. (“So-and-so is talking about such-and-such but you don’t need to know what they’re talking about in order to think about this.”) But then I’m only addressing the most obvious / loud instances of this dynamic, and even then, only once much of the damage has already been done. The point is that the damage is usually exceedingly quiet – it happens only in the mind of somebody disempowering him- or herself. You can’t count on yourself to spot this; you have to plan prophylactically.

3) Designing the problem sets specifically with groupwork in mind, Bowen and Darryl look for problems that encourage productive collaboration. For example, problems that are arduous to do by yourself but interesting to collaborate on. Or, problems that literally require collaboration in order to complete (such as the classic one of having students attempt to create fake coin-flip data, then generate real data, trade, and try to guess other students’ real vs. fake data).
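A quick aside on why that coin-flip task works so well: hand-faked sequences tend to avoid the long runs that real flips produce, so even a crude run-length check often tells them apart. Here is a minimal sketch of that idea (the code, the caricatured fake sequence, and the threshold of 6 are my illustration, not anything from the workshop):

```python
import random

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Real flips usually contain longer runs than people expect;
# hand-faked sequences tend to alternate too often.
real = [random.choice("HT") for _ in range(100)]
fake = list("HTHHTHTTHT" * 10)  # a caricature of human-faked "randomness"

for label, seq in (("real", real), ("fake", fake)):
    n = longest_run(seq)
    print(f"{label}: longest run = {n} -> guess {'real' if n >= 6 else 'fake'}")
```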

4) And maybe my single favorite idea from the presentation was this: “If a student has a cool idea that you would like to have them present, consider instead incorporating that idea into the next day’s problem set.” I asked for an example, and Bowen mentioned the classic about summing the numbers from 1 to n. Many students solved the problem using the Gauss trick, but some students solved the problem with a more visual approach. Bowen and Darryl wanted everybody to see this and to have an opportunity to connect it to their own solution, but rather than have anybody present, they put a problem on the next day’s problem set asking for the area of a staircase diagram, using some of the same numbers that had been asked about the day before in the more traditional 1 + … + n form.
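To spell out the connection (my gloss; I don’t know the exact numbers their problem used): a staircase diagram with rows of 1, 2, \dots, n unit squares has area 1 + 2 + \cdots + n, and two copies of the staircase fit together into an n \times (n+1) rectangle, so

1 + 2 + \cdots + n = \frac{n(n+1)}{2}

– exactly the number the Gauss pairing trick produces, now visible as an area.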

I hope some of these ideas are useful to you. I’d love to muse on how I might make use of them but I’m making myself stop. Discussion more than welcome in the comments though.

Uhm sayin

Dan Meyer’s most recent post is about how in order to motivate proof you need doubt.

This is something I was repeatedly and inchoately hollering about five years ago.

As usual, I’m grateful for Dan’s cultivated ability to land the point cleanly and actionably. My writing from 5 years ago is some of my best stuff – totally follow those links! – but it’s long and heady, and it’s not easy to extract the action plan. So, thanks Dan, for giving this point (which I really care about) wings.

I have one thing to add to Dan’s post! Nothing I haven’t said before but let’s see if I can make it pithy so it can fly too.

Dan writes that an approach to proof that cultivates doubt has several advantages:

  1. It motivates proof
  2. It lowers the threshold for participation in the proof act
  3. It allows students to familiarize themselves with the vocabulary of proof and the act of proving
  4. It makes proving easier

I think it makes proving not only easier but way, way easier, and I have something to say about how.

Legitimate uncertainty and the internal compass for rigor

Anybody who has ever tried to teach proof knows that the work of novice provers on problems of the form “prove X” is often spectacularly, shockingly illogical. The intermediate steps don’t follow from the givens, don’t imply the desired conclusion, and don’t relate to each other.

I believe this happens for an extremely simple reason. And it’s not that the kids are dumb.

It happens because the students’ work is unrelated to their own sense of the truth! You told them to prove X given Y. To them, X and Y look about equally true. Especially since the problem setup literally informed them that both are true. Everything else in sight looks about equally true too.

There is no gradient of confidence anywhere. Thus they have no purchase on the geography of the truth. They are in a flat, featureless wilderness where all the directions look the same, and they have no compass. So they wander in haphazard zigzags! What the eff else can they do??

The situation is utterly different if there is any legitimate uncertainty in the room. Legitimate uncertainty is an amazing, magical, powerful force in a math classroom. When you don’t know and really want to know, directions of inquiry automatically get magnetized for you along gradients of confidence. You naturally take stock of what you know and use it to probe what you don’t know.

I call this the internal compass for rigor.

Everybody’s got one. The thing that distinguishes experienced provers is that we have spent a lot of time sensitizing ours and using it to guide us around the landscape of the truth, to the point where we can even feel it giving us a validity readout on logical arguments relating to things we already believe more or less completely. (This is why “prove X” is a productive type of exercise for a strong college math major or a graduate student, and why mathematicians agree that the twin prime conjecture hasn’t been proven yet even though everybody believes it.)

But novice provers don’t know how to feel that subtle tug yet. If you say “prove X” you are settling the truth question for them, and thereby severing their access to their internal compass for rigor.

Fortunately, the internal compass is capable of a much more powerful pull, and that’s when it’s actually giving you a readout on what to believe. Everybody can and does feel this pull. As soon as there’s something you don’t know and want to know, you feel it.

This means that often it’s enough merely to generate some legitimate mathematical uncertainty in the students, and some curiosity about it, and then just watch and wait. With maybe a couple judicious and well-thought-out hints at the ready if needed. But if the students resolve this legitimate uncertainty for themselves, well, then, they have probably more or less proven something. All you have to do is interview them about why they believe what they’ve concluded and you will hear something that sounds very much like a proof.

A Critical Language for Problem Design

I am at the Joint Mathematics Meetings this week. Yesterday I had a conversation with Cody L. Patterson, Yvonne Lai, and Aaron Hill that was very exciting to me. Cody was proposing the development of what he called a “critical language of task design.”

This is an awesome idea.

But first, what does he mean?

He means giving (frankly, catchy) names to important attributes, types, and design principles of mathematical tasks. I can best elucidate by example. Here are two words that Cody has coined in this connection, along with his definitions and illustrative examples.

Jamming – transitive verb. Posing a mathematical task in which the underlying concepts are essential, but the procedure cannot be used (e.g., due to insufficient information).

Example: you are teaching calculus. Your students have gotten good at differentiating polynomials using the power rule, but you have a sinking suspicion they have forgotten what the derivative is even really about. You give them a table like this

x       f(x)
4       16
4.01    16.240901
4.1     18.491

and then ask for a reasonable estimate of f'(4). You are jamming the power rule because you’re giving them a problem that aims at the concept underlying the derivative and that cannot be solved with the power rule.
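To make the intended move concrete (my arithmetic, using the table above): without a formula for f, the best the students can do is a secant slope, e.g.

f'(4) \approx \frac{f(4.01) - f(4)}{4.01 - 4} = \frac{0.240901}{0.01} = 24.0901,

and the coarser secant \frac{18.491 - 16}{0.1} = 24.91 roughly agrees, so f'(4) \approx 24 is a reasonable estimate. The power rule never gets a foothold.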

Thwarting – transitive verb. Posing a mathematical task in which mindless execution of the procedure is possible but likely to lead to a wrong answer.

Example: you are teaching area of simple plane figures. Your students have gotten good at area of parallelogram = base * height but you feel like they’re just going through the motions. You give them this parallelogram:
[Figure: a parallelogram labeled with the numbers 9 and 41]
Of course they all try to find the area by 9\times 41. You are thwarting the thoughtless use of base * height because it gets the wrong answer in this case.
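To make the trap explicit (the figure isn’t reproduced here, so the labeling is my assumption): if 41 is the base and 9 is the slant side, then the true height h is strictly less than 9, so

\text{area} = 41 \cdot h < 41 \cdot 9 = 369,

and the mindless base * height computation overshoots.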

Why am I so into this? These are just two words, naming things that all teachers have probably done in some form or another without their ever having been named. They describe only a very tiny fraction of good tasks. What’s the big deal?

It’s that these words are a tiny beginning. We’re talking about a whole language of task design. I’m imagining having a conversation with a fellow educator, and having access to hundreds of different pedagogically powerful ideas like these, neatly packaged in catchy usable words. “I see you’re thwarting the quadratic formula pretty hard here, so I’m wondering if you want to balance it out with some splitting / smooshing / etc.” (I have no idea what those would mean but you get the idea.)

I have no doubt that a thoughtful, extensive and shared vocabulary of this kind would elevate our profession. It would be a concrete vehicle for the transmission and development of our shared expertise in designing mathematical experiences.

This notion has some antecedents.[1] First, there are the passes at articulating what makes a problem pedagogically valuable. On the math blogosphere, see discussions by Avery Pickford, Breedeen Murray, and Michael Pershan. (Edit 1/21: I knew Dan had one of these too.) I also would like to believe that there is a well-developed discussion on this topic in academic print journals, although I am unaware of it. (A google search turned up this methodologically odd but interesting-seeming article about biomed students. Is it the tip of the iceberg? Is anyone reading this acquainted with the relevant literature?)

Also, I know a few other actual words that fit into the category “specialized vocabulary to discuss math tasks and problems.” I forget where I first ran into the word problematic in this context – possibly in the work of Cathy Twomey-Fosnot and Math in the City – but that’s a great word. It means that the problem feels authentic and vital; the opposite of contrived. I also forget where I first heard the word grabby (synonymous with Pershan’s hooky, and not far from how Dan uses perplexing) to describe a math problem – maybe from the lips of Justin Lanier? But once you know it, it’s pretty indispensable. Jo Boaler, by way of Dan Meyer, has given us the equally indispensable pseudocontext. So, the ball is already rolling.

When Cody shared his ideas, Yvonne and I speculated that the folks responsible for the PCMI problem sets – Bowen Kerins and Darryl Yong, and their friends at the EDC – have some sort of internal shared vocabulary of problem design, since they are masters. They were giving a talk today, so I went and asked this question. It wasn’t really the setting to get into it, but superficially it sounded like yes. For starters, the PCMI problem sets (if you are not familiar with them, click through the link above – you will not be sorry) all contain problems labeled important, neat, and tough. “Important” means accessible, and also at the center of connections to many other problems. Darryl talked about the importance of making sure the “important” problems have a “low threshold, high ceiling” (a phrase I know I’ve heard before – anyone know where that comes from?). He said that Bowen talks about “arcs,” roughly meaning mathematical themes that run through the problem sets, but I wanted to hear much more about that. Bowen, are you reading this? What else can you tell us?

Most of these words share with Cody’s coinages the quality of being catchy / natural-language-feeling. They are not jargony. In other words, they are inclusive rather than exclusive.[2] It is possible for me to imagine that they could become a shared vocabulary of our whole profession.

So now what I really want to ultimately happen is for a whole bunch of people (Cody, Yvonne, Bowen, you, me…) to put in some serious work and to write a book called A Critical Language for Mathematical Problem Design, that catalogues, organizes and elucidates a large and supple vocabulary to describe the design of mathematical problems and tasks. To get this out of the completely-idle-fantasy stage, can we do a little brainstorming in the comments? Let’s get a proof of concept going. What other concepts for thinking about task design can you describe and (jargonlessly) name?

I’m casting the net wide here. Cody’s “jamming” and “thwarting” are verbs describing ways that problems can interrupt the rote application of methods. “Problematic” and “grabby” are ways of describing desirable features of problems, while “pseudocontext” is a way to describe negative features. Bowen and Darryl’s “important/neat/tough” are ways to conceptualize a problem’s role in a whole problem set / course of instruction. I’m looking for any word that you could use, in any way, when discussing the design of math tasks. Got anything for me?

[1]In fairness, for all I know, somebody has written a book entitled A Critical Language for Mathematical Task Design. I doubt it, but just in case, feel free to get me a copy for my birthday.

[2]I am taking a perhaps-undeserved dig here at a number of in-many-ways-wonderful curriculum and instructional design initiatives that have a lot of rich and deep thought about pedagogy behind them but have really jargony names, such as Understanding by Design and Cognitively Guided Instruction. (To prove that an instructional design paradigm does not have to be jargony, consider Three-Acts.) I feel a bit ungenerous with this criticism, but I can’t completely shake the feeling that jargony names are a kind of exclusion: if you really wanted everybody to use your ideas, you would have given them a name you could imagine everybody saying.

Wherein This Blog Serves Its Original Function

The original inspiration for starting this blog was the following:

I read research articles and other writing on math education (and education more generally) when I can. I had been fantasizing (back in fall 2009) about keeping an annotated bibliography of articles I read, to defeat the feeling that I couldn’t remember what was in them a few months later. However, this is one of those virtuous side projects that I never seemed to get to. I had also met Kate Nowak and Jesse Johnson at a conference that summer, and due to Kate’s inspiration, Jesse had started blogging. The two ideas came together and clicked: I could keep my annotated bibliography as a blog, and then it would be more exciting and motivating.

That’s how I started, but while I’ve occasionally engaged in lengthy explication and analysis of a single piece of writing, this blog has never really been an annotated bibliography. EXCEPT FOR RIGHT THIS VERY SECOND. HA! Take THAT, Mr. Things-Never-Go-According-To-Plan Monster!

“Opportunities to Learn Reasoning and Proof in High School Mathematics Textbooks”, by Denisse R. Thompson, Sharon L. Senk, and Gwendolyn J. Johnson, published in the Journal for Research in Mathematics Education, Vol. 43 No. 3, May 2012, pp. 253-295

The authors looked at HS level textbooks from six series (Key Curriculum Press; Core Plus; UCSMP; and divisions of the major publishers Holt, Glencoe, and Prentice-Hall) and analyzed the lessons and problem sets from the point of view of “what are the opportunities to learn about proof?” To keep the project manageable they just looked at Alg. 1, Alg. 2 and Precalc books and focused on the lessons on exponents, logarithms and polynomials.

They cast the net wide, looking for any “proof-related reasoning,” not just actual proofs. In the lessons, they looked for any justification of stated results: an actual proof, a specific example that illustrated the method of the general argument, or an opportunity for students to fill in the argument. In the exercise sets, they looked for problems that asked students to make or investigate a conjecture, evaluate an argument, or find a mistake in an argument, in addition to problems that asked students to actually develop an argument.

In spite of this wide net, they found that:

* In the exposition, proof-related reasoning is common, but lack of justification is equally common: across the textbook series, 40% of the mathematical assertions about the chosen topics were made without any form of justification.

* In the exercises, proof-related reasoning was exceedingly rare: across the textbook series, less than 6% of exercises involved any proof-related reasoning, and only 3% involved actually making or evaluating an argument.

* Core Plus had the greatest percentage of exercises with opportunities for students to develop an argument (7.5%), and also to engage in proof-related reasoning more generally (14.7%); Glencoe had the least (1.7% and 3.5%, respectively). Key Curriculum Press had the greatest percentage of exercises with opportunities for students to make a conjecture (6.0%); Holt had the least (1.2%).

The authors conclude that mainstream curricular materials do not reflect the pride of place given to reasoning and proof in the education research literature and in curricular mandates.

“Expert and Novice Approaches to Reading Mathematical Proofs”, by Matthew Inglis and Lara Alcock, published in the Journal for Research in Mathematics Education, Vol. 43 No. 4, July 2012, pp. 358-390

The authors had groups of undergraduates and research mathematicians read several short proofs of elementary theorems, typed up in the style of student work, and decide whether the proofs were valid. They tracked the participants’ eye movements to see where their attention was directed.

They found:

* The mathematicians did not have uniform agreement on the validity of the proofs. Some of the proofs had a clear mistake, and there the mathematicians did agree, but others were more ambiguous. (The proofs that were used are in an appendix to the article, so you can have a look for yourself if you have JSTOR or whatever.) The authors are interested in using this result to challenge the conventional wisdom that mathematicians have a strong shared standard for judging proofs. I am sympathetic to the project of recognizing the way that proof reading depends on context, but I found this argument a little irritating. The proofs used by the authors look like student work: the sequence of ideas isn’t being communicated clearly. So it wasn’t just the validity of a sequence of ideas that the participants were evaluating; it was also the success of an imperfect attempt to communicate that sequence. Maybe this distinction is ultimately unsupportable, but I think it has to be acknowledged in order to give the idea that mathematicians have high levels of agreement about proofs its due. Nobody who espouses this idea really thinks that mathematicians are likely to agree on what counts as clear communication. Somehow the sequence of ideas has to be separated from the attempt to communicate it if this idea is to be legitimately tested.

* The undergraduates spent a higher percentage of the time looking at the formulas in the proofs and a lower percentage of time looking at the text, as compared with the mathematicians. The authors argue that this is not fully explained by the hypothesis that the students had more trouble processing the formulas, since the undergrads spent only slightly more time total on them. The mathematicians spent substantially more time on the text. The authors speculate that the students were not paying as much attention to the logic of the arguments, and that this pattern accounts for some of the notorious difficulty that students have in determining the validity of proofs.

* The mathematicians moved their focus back and forth between consecutive lines of the proofs more frequently than the undergrads did. The authors suggest that the mathematicians were doing this to try to infer the “implicit warrant” that justified the 2nd line from the 1st.

The authors are also interested in arguing that mathematicians’ introspective descriptions of their proof-validation behavior are not reliable. Their evidence is that previous research (Weber, 2008: “How mathematicians determine if an argument is a valid proof”, JRME 39, pp. 431-459), based on introspective descriptions by mathematicians, found that mathematicians begin by reading quickly through a proof to get the overall structure before going into the details; however, according to their eye data, none of the mathematicians in the present study did this. One of them stated, in her informal debrief after the study, that she does this, but her eye data didn’t indicate that she did it here. Again, I’m sympathetic to the project of shaking up conventional wisdom, and there is lots of research in other fields to suggest that experts are not generally expert at describing their expert behavior, and I think it’s great when we (mathematicians or anyone else) have it pointed out to us that we aren’t right about everything. But I don’t feel the authors have quite got the smoking gun they claim to have. As they acknowledge in the study, the proofs they used are all really short. These aren’t the proofs to test the quick-read-through hypothesis on.

The authors conclude by suggesting that when attempting to teach students how to read proofs, it might be useful to explicitly teach them to mimic the major difference found between novices and experts in the study: in particular, the idea is to teach them to ask themselves if a “warrant” is required to get from one line to the next, to try to come up with one if it is, and then to evaluate it. This idea seems interesting to me, especially in any class where students are expected to read a text containing proofs. (The authors are also calling for research that tests the efficacy of this idea.)

The authors also suggest ways that proof-writing could be changed to make it easier for non-experts to determine validity. They suggest (a) reducing the amount of symbolism to prevent students from being distracted by it, and (b) making the between-line warrants more explicit. These ideas strike me as ridiculous. Texts already differ dramatically with respect to (a) and (b), there is no systemic platform from which to influence proof-writing anyway, and in any case, as the authors rightly note, there are costs to both, so the sweet spot isn’t at all clear for either the text / symbolism balance or the implicit / explicit balance. Maybe I’m being mean.

Dispatches from the Learning Lab: Partial Understanding

So here’s another one that I suppose is kind of obvious, but nonetheless feels like big, important news to me:

It’s possible to only partly understand what somebody else is saying.

Let me be more specific. When you’re explaining something to me, it’s possible for me to get some idea from it in a clear way, to the point where my understanding registers on my face, while for the other 7 ideas you were describing I nonetheless have no idea what you’re talking about.

<Example>

I am a 9th grader in your Algebra I class. You’re teaching me about linear functions. You are explaining to the class how to find the y-intercept of a linear function, in slope-intercept form, given that the slope is 4 and the point (6,11) lies on the line. You explain that the equation has the form y=mx+b and that because we know the point (6,11) is on the line, that this point satisfies the equation. Thus you write

11=4\cdot 6+b

on the board. At this point I recognize that we are trying to find b and that we have an easy single-variable linear equation to solve. My face lights up and you take mental note of my engagement. Maybe you even ask for the y-intercept, and since I recognize that this must be b, I calculate 11-24 = -13 and raise my hand.

Meanwhile, I have only the vaguest sense of the meaning of the phrase “y-intercept.” I have literally no understanding of why I should expect the equation to have the form y=mx+b. I have a nagging feeling of dissatisfaction ever since you substituted (6,11) into the equation because I thought x and y were supposed to be the variables but now it looks like b is the variable. Most importantly, I do not understand that the presence of the point on the line implies that its coordinates satisfy the equation of the line and conversely, because on a very basic level I don’t understand what the graph of the function is a picture of. This has been bothering me ever since we started the unit, when you had me plug in a bunch of x values into some equations and obtain corresponding y values, graph them, and then draw a solid line connecting the three or four points. Why am I drawing these lines? What are they pictures of?

Occasionally, I’ve asked a question aimed at getting clarity on some of these basic points. “How did you know to put the 6 and 11 into the equation?” But because I can’t be articulate about what I don’t understand, since I don’t understand it, and you can’t hear what I’m missing in my questions because the theory is complete and whole in your mind, these attempts come to the same unsatisfying conclusion every time. You explain again; I frown; you explain a different way; I say, “I don’t understand.” You, I, and everyone else grow uncomfortable as the impasse continues. Eventually, you offer some thought that has something in it for me to latch onto, just as I latched onto solving for b before. Just to dispel the tension and let you get on with your job, I say, “Ah! Yes, I understand.”

</Example>
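(For the record, the math inside the example: since (6,11) lies on the line and m = 4, the slope-intercept form y=mx+b gives

11 = 4\cdot 6 + b = 24 + b, \qquad b = 11 - 24 = -13,

so the line is y = 4x - 13.)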

This example is my attempt to translate a few experiences I’ve had this semester into the setting of high school. The behavior of the student in that last paragraph was typical of me in these situations, though it would be atypical for a high school student, drawing as it does on the resources of my adulthood and educator background: to self-advocate, to tolerate awkwardness, even to be aware that my understanding was incomplete. Still, often enough I ended up copping out as the student does above, understanding one of the 8 things that were going on and latching onto it just so I could allow myself, the teacher, and the class to move on gracefully. Conversations with other students indicated that my sense of incomplete understanding was entirely typical, even if my self-advocacy was not.

The take-home lesson is two-fold. Point one is about the limitations of explaining as a method of teaching. Point two is about the limitations of trusting your students’ (verbal or implied) response to your (verbal or implied) question, Do you understand?

The basic answer (as you can tell from the example) is, No, I don’t.

Now I myself love explaining and have done a great deal of it as a teacher. I fancy myself an extremely clear and articulate explainer. But it couldn’t be more abundantly clear, from this side of the desk, how limited is the experience of being explained to. I mean, actually it’s a great, key, important way to learn, but only in small doses and when I’m ready for it, when the groundwork for what you have to say has been properly set.

I am somewhat chastened by this. I am thinking back self-consciously to times when I’ve explained my students’ ears off rather than, in the immortal words of Shawn Cornally, “lay off and let them fucking think for a second.” It’s like I was too taken with the clarity and beauty of the formulation I was offering, or in too much of a hurry to let them work through what they had to work through, or in all likelihood both, to see that more words weren’t going to do any good. Beyond this, I’m thinking back on the faith I’ve put in my ability to read students’ level of understanding from their faces. I maintain that I’m way better at this than my professors, but I don’t think I’ve had enough respect for how you can understand a small part of something and have that feel like a big enough deal to say, and mean, “Oh I get it.” Or to understand a tiny part of something and use that as cover for not understanding the rest.