Wherein This Blog Serves Its Original Function

The original inspiration for starting this blog was the following:

I read research articles and other writing on math education (and education more generally) when I can. I had been fantasizing (back in fall 2009) about keeping an annotated bibliography of articles I read, to defeat the feeling that I couldn’t remember what was in them a few months later. However, this is one of those virtuous side projects that I never seemed to get to. I had also met Kate Nowak and Jesse Johnson at a conference that summer, and due to Kate’s inspiration, Jesse had started blogging. The two ideas came together and clicked: I could keep my annotated bibliography as a blog, and then it would be more exciting and motivating.

That’s how I started, but while I’ve occasionally engaged in lengthy explication and analysis of a single piece of writing, this blog has never really been an annotated bibliography. EXCEPT FOR RIGHT THIS VERY SECOND. HA! Take THAT, Mr. Things-Never-Go-According-To-Plan Monster!

“Opportunities to Learn Reasoning and Proof in High School Mathematics Textbooks”, by Denisse R. Thompson, Sharon L. Senk, and Gwendolyn J. Johnson, published in the Journal for Research in Mathematics Education, Vol. 43, No. 3, May 2012, pp. 253-295.

The authors looked at high-school textbooks from six series (Key Curriculum Press; Core Plus; UCSMP; and divisions of the major publishers Holt, Glencoe, and Prentice-Hall) and analyzed the lessons and problem sets from the point of view of “what are the opportunities to learn about proof?” To keep the project manageable, they looked only at Algebra 1, Algebra 2, and Precalculus books and focused on the lessons on exponents, logarithms, and polynomials.

They cast the net wide, looking for any “proof-related reasoning,” not just actual proofs. In the lessons, they counted any justification of stated results: an actual proof, a specific example that illustrated the method of the general argument, or an opportunity for students to fill in the argument. In the exercise sets, they counted not only problems that asked students to develop an argument, but also problems that asked them to make or investigate a conjecture, evaluate an argument, or find a mistake in an argument.

In spite of this wide net, they found that:

* In the exposition, proof-related reasoning was common, but so was the lack of justification: across the textbook series, 40% of the mathematical assertions about the chosen topics were made without any form of justification.

* In the exercises, proof-related reasoning was exceedingly rare: across the textbook series, fewer than 6% of exercises involved any proof-related reasoning at all, and only 3% involved actually making or evaluating an argument.

* Core Plus had the greatest percentage of exercises with opportunities for students to develop an argument (7.5%), and also to engage in proof-related reasoning more generally (14.7%). Glencoe had the least (1.7% and 3.5% respectively). Key Curriculum Press had the greatest percentage of exercises with opportunities for students to make a conjecture (6.0%). Holt had the least (1.2%).

The authors conclude that mainstream curricular materials do not reflect the pride of place given to reasoning and proof in the education research literature and in curricular mandates.

“Expert and Novice Approaches to Reading Mathematical Proofs”, by Matthew Inglis and Lara Alcock, published in the Journal for Research in Mathematics Education, Vol. 43, No. 4, July 2012, pp. 358-390.

The authors had groups of undergraduates and research mathematicians read several short proofs of elementary theorems, written in the style of student work, and decide whether the proofs were valid. They tracked the participants’ eye movements to see where their attention was directed.

They found:

* The mathematicians did not have uniform agreement on the validity of the proofs. Some of the proofs had a clear mistake, and on those the mathematicians did agree, but others were more ambiguous. (The proofs that were used are in an appendix in the article, so you can have a look for yourself if you have JSTOR or whatever.) The authors are interested in using this result to challenge the conventional wisdom that mathematicians have a strong shared standard for judging proofs. I am sympathetic to the project of recognizing the way that proof reading depends on context, but found this argument a little irritating. The proofs used by the authors look like student work: the sequence of ideas isn’t being communicated clearly. So it wasn’t only the validity of a sequence of ideas that the participants evaluated; it was also the success of an imperfect attempt to communicate that sequence. Maybe this distinction is ultimately unsupportable, but I think it has to be acknowledged in order to give its due to the idea that mathematicians have high levels of agreement about proofs. Nobody who espouses this idea really thinks that mathematicians are likely to agree on what counts as clear communication. Somehow the sequence of ideas has to be separated from the attempt to communicate it if the idea is to be legitimately tested.

* The undergraduates spent a higher percentage of their time looking at the formulas in the proofs, and a lower percentage looking at the text, than the mathematicians did. The authors argue that this is not fully explained by the hypothesis that the students had more trouble processing the formulas, since the undergrads spent only slightly more total time on the formulas, while the mathematicians spent substantially more time on the text. The authors speculate that the students were not paying as much attention to the logic of the arguments, and that this pattern accounts for some of the notorious difficulty students have in determining the validity of proofs.

* The mathematicians moved their focus back and forth between consecutive lines of the proofs more frequently than the undergrads did. The authors suggest that the mathematicians were doing this to try to infer the “implicit warrant” that justified the second line from the first (a made-up illustration of what such a warrant looks like follows below).
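To make the “warrant” idea concrete, here is an example of my own devising (not one of the proofs used in the study). Consider a two-line step of the kind a student might write: “n is odd, so n = 2k + 1 for some integer k,” followed by “therefore n² is odd.” The warrant bridging the two lines is left implicit; spelled out, it is the computation

\[
n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2\,(2k^2 + 2k) + 1,
\]

which exhibits n² in the form 2m + 1. Reading the way the mathematicians in the study did would mean pausing between the two lines, asking whether a warrant like this is needed, supplying it, and checking it.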

The authors are also interested in arguing that mathematicians’ introspective descriptions of their proof-validation behavior are not reliable. Their evidence is that previous research based on introspective descriptions of mathematicians (Weber, 2008: “How mathematicians determine if an argument is a valid proof”, JRME 39, pp. 431-459) found that mathematicians begin by reading quickly through a proof to get the overall structure before going into the details; however, according to their eye data, none of the mathematicians in the present study did this. One of them stated in her informal debrief after the study that she does this, but her eye data didn’t indicate that she did it here. Again, I’m sympathetic to the project of shaking up conventional wisdom; there is lots of research in other fields to suggest that experts are not generally expert at describing their expert behavior, and I think it’s great when we (mathematicians or anyone else) have it pointed out to us that we aren’t right about everything. But I don’t feel the authors have quite got the smoking gun they claim to have. As they acknowledge in the study, the proofs they used are all really short. These aren’t the proofs on which to test the quick-read-through hypothesis.

The authors conclude by suggesting that when attempting to teach students how to read proofs, it might be useful to explicitly teach them to mimic the expert behavior that most distinguished the mathematicians from the novices in the study: in particular, to teach them to ask themselves whether a “warrant” is required to get from one line to the next, to try to come up with one if it is, and then to evaluate it. This idea seems interesting to me, especially in any class where students are expected to read a text containing proofs. (The authors are also calling for research that tests the efficacy of this idea.)

The authors also suggest ways that proof-writing could be changed to make it easier for non-experts to determine validity. They suggest (a) reducing the amount of symbolism to prevent students being distracted by it, and (b) making the between-line warrants more explicit. These ideas strike me as ridiculous. Texts already differ dramatically with respect to (a) and (b); there is no systemic platform from which to influence proof-writing anyway; and in any case, as the authors rightly note, there are also costs to both, so the sweet spot in the text/symbolism balance isn’t at all clear, and neither is the sweet spot in the implicit/explicit balance. Maybe I’m being mean.

3 thoughts on “Wherein This Blog Serves Its Original Function”

  1. I would think that “reducing the amount of symbolism to prevent students being distracted by it” would be a good idea, especially at the high school level. What students fail to learn in school is how to make a valid and thoughtful mathematical argument, which is the essence of proof-writing. I think the formalism and details can be a distraction from making the distinction between procedural thinking and mathematical reasoning, which I believe is the most important part about proof writing. Eventually the details become important, but why not start with building informal reasoning and justification skills?

    Thanks for your thoughts, this is a great article!

    1. Agreed. My annoyance with the article for this suggestion was probably ungenerous anyway; but what I had in mind was (a) that increasing the amount of textual (as opposed to symbolic) explanation will be good for some kids and bad for others, e.g. it can present additional barriers for English language learners, and (b) that it’s not like we have a Ministry of Textbook Style that could implement a suggestion about how proofs ought to be written in textbooks. That said, obviously there would be benefits. Apropos of the other article reviewed here, a more important job for such a ministry, if it existed, would be to make sure that there are proofs in the first place, and to give students opportunities to practice creating and evaluating them.

      Anyway, thanks for your engagement!
