N Is the Smallest Number That… Wednesday, Jan 20 2010 

Just a fun thought for a quick activity (ideally, it would be a routine, for example a weekly one) aimed at (a) cultivating an appreciation for numbers’ distinct personalities; (b) promoting the idea that a pattern holding for small cases doesn’t necessarily keep holding; and (c) giving kids an opportunity to be creative with math. It just occurred to me, and I don’t have a class to try it out on, so I can’t give you any implementation feedback. Here’s the idea:

n is the smallest number that…

Keep track of what number week it is since the beginning of the year. So this could be week 3. Then, each week, as a bonus-challenge type of thing, put the problem to all students: think of a property of this number not shared by any smaller natural number. Publicly compile what people find at the end of the week. This activity gets more interesting, and harder, the higher the number.

Here is a brainstorm, so you know what I mean, although many of these are not the kinds of things a kid would come up with.

3 – smallest odd prime; smallest nontrivial triangular number; smallest number of points needed to prevent somebody else from drawing a line thru all of them
4 – smallest composite; smallest nontrivial square; smallest nontrivial sum of triangular numbers; smallest number that is exceeded by its number of partitions
5 – smallest nontrivial sum of squares (4+1); a pentagon is the fewest-sided polygon in which you can inscribe a star
6 – smallest number with 2 distinct nontrivial factors; smallest composite with an even number of factors; smallest nontrivial perfect number
7 – smallest prime for which 2p+1 isn’t also prime (i.e. smallest prime that’s not a Sophie Germain prime); 1/7 is the first fraction in the sequence 1/n for which the cycle length of the decimal expansion actually reaches the theoretical maximum of n-1
8 – smallest nontrivial cube; smallest sum of distinct odd primes
9 – smallest odd composite
10 – smallest two-digit number
11 – smallest nontrivial palindrome
12 – smallest number exceeded by the sum of its proper factors; smallest number that is not a prime power but is still divisible by a (nontrivial) square
…and I’m already having trouble thinking of something for 13.
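If you want to play along computationally, here's a rough Python sketch (mine, purely illustrative, and certainly not how a kid would do it) that brute-force-checks a couple of the claims above — each "n is the smallest number that…" claim is basically a one-liner to test:

```python
def divisors(n):
    """All positive divisors of n, in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

def smallest(property_holds, limit=1000):
    """First natural number below limit satisfying the property, or None."""
    return next((n for n in range(1, limit) if property_holds(n)), None)

# 12 is the smallest number exceeded by the sum of its proper factors:
assert smallest(lambda n: sum(divisors(n)[:-1]) > n) == 12

def partitions(n, max_part=None):
    """Count partitions of n with each part at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

# 4 is the smallest number exceeded by its number of partitions:
assert smallest(lambda n: partitions(n) > n) == 4
```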

Variations would be “n is the largest number that …” and “n is the only number that…” I’m imagining with high school, to keep it from getting too hard, that the challenge could be to find any one of the three. Another variant to make it easier would just be “n is the number of…” (the object filling in the blank can’t be defined in terms of n; so “3 is the number of vertices of a triangle” isn’t a legit answer, but “8 is the number of vertices of a cube” is).

5:35pm addendum:

Actually another way to make it more accessible would be to make it into a project rather than a routine: for as many of the numbers 1-100 as possible, find either a) an interesting property that it but no smaller natural number has; b) a property that it but no greater natural number has; c) a property that it alone has; or d) some interesting object or set counted by that number.

That way, in order to participate, you’re not forced to find something for every single number. Some people will find something for just 15 numbers, others for 40, maybe a handful of others for 80, and those kids will get really intent on the ones they haven’t found yet.

Another cool thing I could see coming of this is a real awakening of the idea of proof, as people try to decide if a number really is the greatest or only number with a given property. For example, while making the list above I noticed for the first time that 9 is the square of the number of factors it has, and wondered if there were any other numbers like this. Trivially, 1 as well; but I’ve just about convinced myself that that’s it; there are no more. How would a student justify this claim?
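For what it's worth, here's a quick brute-force check of that claim — my own sketch, and no substitute for the proof a student would have to find:

```python
def num_divisors(n):
    """Count divisors by trial division up to sqrt(n)."""
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d != n else 1
        d += 1
    return count

# n = d(n)^2 forces n to be a perfect square k^2 with d(k^2) = k,
# and the divisor count grows far more slowly than k, so a small
# search is already suggestive:
matches = [k * k for k in range(1, 1000) if num_divisors(k * k) == k]
print(matches)  # [1, 9]
```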

Favorite Theorems 11 and 12 Monday, Jan 18 2010 

Happy MLK day everyone!

An exchange with jd2718 has forged an unexpected connection between my last two posts, reminding me to add two biggies to my list of favorite theorems:

XI. The multiplication algorithm.
XII. The division algorithm.

“Ben, those are algorithms, not theorems.”

Nah, yo, EVERY ALGORITHM FOR A COMMON TASK is (signals the presence of, necessitates) a theorem. Namely, that the algorithm accomplishes the task.

In this case both theorems rest on the distributive property and the nature of place value. When we teach the algorithms, let’s treat them with due respect as theorems – i.e. worthy of being excited about, honored, and justified.
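To make that concrete, here's a sketch (my own, with made-up helper names) of the multiplication algorithm unwound into exactly that distributive-property-and-place-value argument — the theorem is precisely that this sum of partial products equals the product:

```python
def digits_with_place_value(n):
    """e.g. 37 -> [7, 30]: each digit times its power of ten.
    (Whole numbers only, for simplicity.)"""
    return [int(d) * 10 ** i for i, d in enumerate(reversed(str(n)))]

def multiply_by_partial_products(a, b):
    """37 x 24 = (30 + 7)(20 + 4) = 30*20 + 30*4 + 7*20 + 7*4.
    The distributive property says this sum is the product."""
    return sum(x * y for x in digits_with_place_value(a)
                     for y in digits_with_place_value(b))

assert multiply_by_partial_products(37, 24) == 37 * 24 == 888
```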

*****
How do you all seem to keep abreast of everyone so effectively in addition to writing? I spent like 7 hours on Friday writing my last post and probably another 6 yesterday just reading a handful of blogs, and following links they sent me to. Does Google Reader contain a magic sauce?

*jd2718 links to what looks to be a very thought-provoking issue of American Educator. Just browsed it myself so far – I was trying to avoid having the internet eat me.

*WCYDWT strikes again.

*Sam Shah implements a kick-*ss homework accountability structure that doubles as basic training in staying organized. I’ve done something similar (though somewhat less ambitious) with good results too.

*****
In honor of MLK day: a shout-out to the continuing awesome work of The Algebra Project.

*****
I managed to write a short post!

Estimation Strategies Saturday, Jan 16 2010 

It originally occurred to me to start this blog when 2 things connected in my mind: one is that I met the everyone-knows-she’s-a-rockstar Kate Nowak at a conference over the summer, along with the also-a-rockstar-and-looking-forward-to-a-time-when-everyone-knows-this Jesse Johnson, and they met each other, which inspired Jesse to start her blog, and got me thinking that might be cool. The other is that I was independently thinking hard about creating an annotated bibliography on math education research, to force myself to process what I read. Which seemed like a worthwhile but totally exhausting and daunting task. When it finally occurred to me to do the bibliography as a blog, it was too obvious; I had to.

That said, I kind of haven’t really done it yet! I imagined forcing myself to read something every week so I could post on it, but so far I’ve only written about stuff I’d already read before I started blogging! (To be fair, I did read most of the original book by Oskar Pfungst for the first time for the Clever Hans post, and every post has caused me to reread at least key parts of whatever I was writing about.) So in a way, I’m beginning from scratch with this one. I hope you stay interested ;)

“Strategy Use and Estimation Ability of College Students”
Deborah Levine, Journal for Research in Mathematics Education, 1982, Vol. 13, No. 5, pp. 350-359

“Computational Estimation Strategies of Professional Mathematicians”
Ann Dowker, Journal for Research in Mathematics Education, 1992, Vol. 23, No. 1, pp. 45-55

Both of these articles are available from JSTOR but I can’t seem to find them for free on the Internet.

Bottom line: When asked to estimate the answers of semi-difficult multiplication and division problems, mathematicians were very successful, used a wide variety of strategies tailored to the different problems, and often used different strategies when posed the same problem again months later. They made very little use of standard algorithms. When college non-math majors were given the same task, they were much less successful, and were much more likely to use standard algorithms. Also, the ones who were least successful tended to be the ones who adhered most to the use of standard algorithms.

Lesson for educators: unclear. Food for thought, though, for sure. (More at the very end.)

Details:

In 1982, Deborah Levine took 89 non-math majors at a New York City college, gave them 10 multiplication and 10 division problems, and asked them to estimate the answers, and to think aloud as they did so. Here are the problems:
76×89; 93×18; 145×37; 824×26; 187.5×0.06; 482×51.2; 64.6×0.16; 424×0.76; 12.6×11.4; 0.47×0.26;
9,208÷32; 4,645÷18; 7,858÷51; 25,410÷65; 648.9÷22.4; 546÷33.5; 1,292.8÷71.2; 66÷0.86; 943÷0.48; 0.76÷0.89

In addition to the 10 multiplication and 10 division problems, which were created expressly for the study with some care, piloting and refinement, she also gave the students another separate test called the School and College Ability Test (SCAT) quantitative subtest which appears to be a standardized test used – possibly produced? – by the Center for Talented Youth. Levine refers to the results of this test as the students’ “quantitative ability.” (Her purpose was to control for this variable so she could isolate relationships between estimation strategies and estimation success that were “independent of quantitative ability.” She didn’t find any.) She didn’t provide any details about this second test so I have no idea what it actually measures. Consequently I have put the phrase “quantitative ability” in scare quotes throughout. In any case, results on this test were strongly correlated with success on the estimation task. I wish she had skipped this whole bit and just analyzed the estimation data.

She then rated the students’ answers to the estimation task by accuracy and categorized the strategies they used. Then she asked:

1) Are some strategies more commonly used than others?
2) Is there a relationship between the students’ “quantitative ability” and the type of strategies they used?
3) Is there a relationship between the students’ “quantitative ability” and the number of strategies they used?

She found that yes, yes, and yes. She had 8 strategy categories which she called “fractions,” “exponents,” “rounding both numbers,” “rounding 1 number,” “powers of 10,” “known numbers,” “incomplete partial products/quotients,” and “proceeding algorithmically.” She found that two categories – “proceeding algorithmically” and “rounding both numbers” – accounted for 61% of all responses. “Proceeding algorithmically” alone accounted for 34%. On any given task, the students who used “proceeding algorithmically” strategies had the poorest “quantitative ability” scores, especially when compared with students who used “fractions” strategies. Also, students with lower “quantitative ability” scores used fewer strategies overall.
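To get a concrete feel for one of these categories, here's a sketch (mine, not from the paper) that mechanically applies a crude version of "rounding both numbers" — rounding each factor to one significant figure — to the ten multiplication problems:

```python
import math

# Levine's multiplication problems, copied from above:
problems = [(76, 89), (93, 18), (145, 37), (824, 26), (187.5, 0.06),
            (482, 51.2), (64.6, 0.16), (424, 0.76), (12.6, 11.4),
            (0.47, 0.26)]

def round_to_1_sig_fig(x):
    """e.g. 76 -> 80, 482 -> 500, 0.47 -> 0.5"""
    power = math.floor(math.log10(abs(x)))
    return round(x / 10 ** power) * 10 ** power

for a, b in problems:
    exact = a * b
    estimate = round_to_1_sig_fig(a) * round_to_1_sig_fig(b)
    error = abs(estimate - exact) / exact
    print(f"{a} x {b}: exact {exact:.4g}, "
          f"estimate {estimate:.4g}, off by {error:.0%}")
```

Of course, the interesting thing in both studies is precisely that the successful estimators did *not* apply one strategy mechanically like this.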

Levine also asked:

4) Is there variation in the success of different estimation strategies that is not accounted for by variation in “quantitative ability”?

And found that no, not especially.

In 1992 Ann Dowker gave the exact same estimation task to 44 pure mathematicians (ranging from 3 graduate students on the verge of their PhDs to 7 members of prestigious professional organizations like the Royal Society). She also rated their responses by accuracy and categorized them by strategy. (Her categorization, also into 8 categories, was based on Levine’s but slightly different, reflecting the presence of strategies used frequently by the mathematicians but not by the college students.) She gave 18 of them the exact same task again six to nine months later. She found that:

1) The mathematicians did way, way better than Levine’s college kids. (No shocker.) More interestingly:
2) They displayed a striking variability in their strategies. There was no problem for which the mathematicians, taken as a whole, used fewer than 7 distinguishable strategies, spanning 3 of the 8 strategy categories; and in 9 of the 20 problems, the mathematicians used at least 16 distinguishable strategies, spanning at least 6 of the 8 categories. On every single problem, at least one mathematician used a strategy that none of the other mathematicians used (and on all but one problem, at least four did).
3) The mathematicians who were given the task a second time did not especially do the problems the same way they had the first time.
4) In stark contrast with Levine’s data, the mathematicians almost never used an algorithmic approach. (Specifically, they used such an approach 4%, as opposed to Levine’s 34%, of the time.) They used “fractions” approaches 40% of the time, and “rounding both numbers,” and “known numbers,” 15% of the time each.
5) The mathematicians seemed to be guided by aesthetic considerations while solving the problems.

#5 is illustrated by this awesome anecdote from Dowker’s paper (p. 53). One of the mathematicians was estimating 1292.8÷71.2:

He said, "Divide by 4; 323.2÷17.8. That's 32x10.1÷(72÷4). [Pause] I don't like not being able to do something with the 323.2!" He then solved the problem successfully by rounding both numbers to 1300÷70 and estimating 18, but still seemed disappointed at not having managed to use the number 323.2.

Thoughts:

I imagine you could be thinking about many things right now. Here’s what I’m thinking about:

1) Multiplication and division are a lot of fun if you’ve got access to a rich set of number relationships to approach them with. My favorites among Levine’s estimating problems are 12.6×11.4, 64.6×0.16, and 4645÷18. (Guess why?) Looking for ways into the problems, chosen around the particular details of the numbers involved, is a fun, creative activity. A goal of elementary work in multiplication and division should be the cultivation of this sense of creativity.

2) Knowing how to execute the standard algorithms is really a paltry shell next to what’s possible. It cannot be regarded as the primary goal of elementary-level work on multiplication and division. In fact, a student’s reliance on the algorithms as their only (or even their primary) method is a sign that something’s wrong. It strikes me as more or less precisely like reliance on a calculator. Note: I am not saying the algorithms shouldn’t be taught. I love them. For doing large-number computations they may have been rendered somewhat passé by the calculator, but they are still deeply relevant as sources of insight about numbers and operations. For example, the long division algorithm is used to prove that every rational number has a repeating or terminating decimal expansion.
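In case you want to check your guesses about my three favorite problems, here is one nice non-algorithmic way in to each (whether or not it's the one you found — these are just my readings):

```python
# 12.6 x 11.4: both numbers sit 0.6 away from 12, so this is a
# difference of squares: (12 + 0.6)(12 - 0.6) = 144 - 0.36.
assert abs(12.6 * 11.4 - (144 - 0.36)) < 1e-9

# 64.6 x 0.16: powers of two lurking -- 64 x 16 = 1024, so the
# answer is a shade over 10.24.
assert 64 * 16 == 1024
assert abs(64.6 * 0.16 - 10.336) < 1e-9

# 4645 / 18: a "known numbers" move -- 18 x 258 = 4644, one less
# than 4645, so the quotient is just a hair over 258.
assert 18 * 258 == 4644
assert abs(4645 / 18 - 258) < 0.1
```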

Favorite Theorems Saturday, Jan 9 2010 

Happy New Year y’all! I’m looking forward to writing some real posts soon – this one’s just for fun. (Maybe they all are?) My first effort at catching up on my blog reading was to read the last eight posts from f(t). Kate gives us a post on her favorite theorem, including the proof. It turns out to be Cantor’s diagonal proof of the nondenumerability of the reals. I really enjoyed it and got excited to catalogue some of mine.

Here’s a top-10 list. It’s a little capricious. But I’ll stand behind each of these theorems being amazing. They span a lot of levels, from early elementary school thru graduate level stuff. Partly, especially for the lower-level ones, I’m making a case that these guys deserve more respect than they typically get. Below the list I put some comments on each. I got kinda technical toward the end, and I feel a bit self-conscious about that. I’ll be excited if you read any of this but definitely feel free to stop, and then give me sh*t about it, when I get too abstruse.

I. The commutativity of multiplication. (I’m not kidding. I think this is dope. More below.)
II. a÷b (division) = a/b (fraction)
III. The Pythagorean theorem.
IV. Every rational number has a terminating or repeating decimal expansion.
V. If a is a root of a polynomial p(x), then x-a divides p(x).
VI. The fundamental theorem of calculus.
VII. Every prime of the form 4n+1 is a sum of squares in exactly one way.
VIII. Every element of SOn is a composition of rotations in orthogonal planes.
IX. A function C->C that is analytic in a neighborhood has a convergent Taylor expansion equal to itself in that neighborhood.
X. The fundamental theorem of Galois theory.

I. The commutativity of multiplication
Most students and teachers I know treat this as so trivial and obvious that I must be some sort of ridiculous person to dignify it with the word “theorem.” But seriously now. When they told you what 5×7 was, depending slightly on your teacher, it was something like “five 7’s.” (This is the language used by my first grade teacher Judy Lazrus. She was amazing, but this isn’t especially an example of that.) 7+7+7+7+7. That means 7×5 is 5+5+5+5+5+5+5. Is it obvious that these are going to come out the same?

NO, it’s NOT. Not even to me, and I’ve had a lot of time to get used to it. Certainly not to a six year old. There really is a theorem in here. If you’ve worked with kids who are first learning about multiplication, you know this.

A formal, utterly rigorous proof would begin with something like the Peano axioms and build up addition and then multiplication in a formal way from scratch. This is cool, but I have a different proof in mind. It’s less formal, less “rigorous” by professional standards; however, it’s the one that gives this theorem such import to me: it’s totally convincing, and it’s accessible to a six-year-old. You probably already know it; perhaps it’s how you convinced yourself multiplication really is commutative way back when; but I think it deserves special attention.

Represent 5×7 as an array – five rows of 7 dots:
* * * * * * *
* * * * * * *
* * * * * * *
* * * * * * *
* * * * * * *
Now interchange the roles of the rows and columns. Instead of seeing the array as a stack of rows, one on top of the other, see it as a line of columns, side by side. Each column is formed with one dot from each row, so the size of each column equals the number of rows, while the number of dots in each row becomes the number of columns. 5 rows of 7 becomes 7 columns of 5. So 5×7 = 7×5. The same argument could be used with any pair of natural numbers: m×n = n×m.

Why am I making a big deal of this? Two reasons. One is that learning multiplication’s commutativity is something that happens very early in a typical math education, and this is a chance for students who are just beginning their mathematical journey to have a taste of real mathematics – an elegant, powerful argument giving a totally convincing proof of an unexpected result. The other is that the particular method of proof here (interchanging rows and columns) is one with great power, whose usefulness extends well beyond high school. For example, this is the idea behind the proof of Burnside’s formula in group theory, or the proof in number theory that the number of partitions of n with at most k parts equals the number with each part at most k, and many other advanced results. I love the idea of putting an idea of this vast power and scope into the hands of tiny children.
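To see that reach concretely, here's a brute-force check (my own sketch) of the partition fact just mentioned — conjugating a partition's diagram is precisely the row-column interchange above, swapping "number of parts" with "largest part":

```python
def count_partitions(n, max_part):
    """Partitions of n with every part at most max_part."""
    if n == 0:
        return 1
    if max_part == 0:
        return 0
    # either use a part of size max_part, or don't
    used = count_partitions(n - max_part, max_part) if n >= max_part else 0
    return used + count_partitions(n, max_part - 1)

def count_partitions_at_most_k_parts(n, k):
    """Partitions of n with at most k parts (brute force over
    nonincreasing part lists)."""
    def rec(remaining, parts_left, largest):
        if remaining == 0:
            return 1
        if parts_left == 0:
            return 0
        return sum(rec(remaining - p, parts_left - 1, p)
                   for p in range(1, min(remaining, largest) + 1))
    return rec(n, k, n)

# The two counts agree, as the conjugation argument predicts:
for n in range(1, 12):
    for k in range(1, n + 1):
        assert count_partitions(n, k) == count_partitions_at_most_k_parts(n, k)
```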

II. a÷b (division) = a/b (fraction)
I can’t back this one up quite as well – I don’t even know how to state this claim in precise mathematical language, and I can’t give you anything I would consider a proof. But I know there’s something surprising in here because I’ve watched a lot of kids be surprised by it.

When I learned division in school, I learned that 10÷2 means the number of 2’s that are needed to add up to 10. 10/2 means what you get if you accumulate ten halves. These are definitely not the same thing. In the former case, I am imagining stacking up 2’s till I get to 10. The answer, five, is counting boxes of 2. I’m thinking about whole things, and the 5 is counting boxes of more than 1 whole thing. There’s certainly no fraction anywhere in sight – everything is whole. In the latter, the 5 is constructed from gluing together 10 halves. When I imagine it I can still see the glue. Such a difference in visual images – why is the number coming out the same?

I think there are a lot of ways to get convinced, and unlike my first theorem I’m not attached to a specific argument. What I feel sure of is that as kids come to learn about division, and fractions, this identity deserves a lot of thought, and offers something to get excited about.

III. The Pythagorean theorem
You know it, y’all, this thing is amazing. I think the Pythagorean theorem is for many of us one of the great missed opportunities of our math education. Because most adults can recite it like a mantra but only a very few people (one is my former roommate Thierry – big ups to your middle school math teacher, Thierry!) recall it with any of the awe it deserves.

This is the earliest example I can think of in the curriculum of a really surprising connection between algebra and geometry. Somehow right-anglyness is precisely captured by sum-of-squaresiness and vice versa. Somehow “right triangle” is a geometrical picture, and “a^2+b^2=c^2” an algebraic description, of precisely the same underlying mathematical structure. I think that’s amazing. I’m still not over it.

I have three favorite proofs:
a) Euclid’s. Not very accessible to the kiddies but has the virtue of actually partitioning the square on the hypotenuse into two parts equal in area to the squares on the legs.
b) The one where you put four copies of the triangle inside a square of side length a+b and if you arrange them one way the remaining area is c^2 and if you arrange them another way it’s a^2+b^2. Here’s a picture.
c) The one where you draw an altitude from the hypotenuse, like this, and then work from the similarity of the three triangles.

There are a zillion other proofs. There are books and books. One I’m very excited about, by Bob & Ellen Kaplan (who wrote Out of the Labyrinth, which I was also very excited about), is called Hidden Harmonies: The Lives and Times of the Pythagorean Theorem and is due out in August. Part of what I love about this theorem is the amazing diversity in the ways it is proven.

Okay now I have to speed through the rest:

IV. Every rational number has a terminating or repeating decimal expansion.
The awesome Larry Zimmerman says that a good math problem “has a future.” Meaning that the ideas in the problem have a reach that extends to other problems, possibly to other whole domains of mathematics. My excitement about the proof of commutativity of multiplication, above, has to do with this. Likewise, this theorem. First of all, it’s a handy characterization of rationals, and looks ahead in a natural way to interesting calculus-laden questions having to do with convergence of series, especially geometric series. But maybe even more important to me is the proof, which involves the division algorithm and the pigeonhole principle. Both of which spin off great showers of mathematical consequences…
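Here is the proof idea in executable form (a sketch of mine, with made-up names): run the long division algorithm for a fraction and watch the remainders. There are only finitely many possible remainders, so by pigeonhole either one is eventually 0 (the expansion terminates) or one repeats (and from there the digits cycle):

```python
def decimal_expansion(numerator, denominator, ):
    """Digits of numerator/denominator after the decimal point,
    returned as (non-repeating prefix, repeating block)."""
    digits, seen = [], {}
    r = numerator % denominator
    while r != 0 and r not in seen:
        seen[r] = len(digits)   # remember where this remainder appeared
        r *= 10
        digits.append(r // denominator)   # the division algorithm step
        r %= denominator
    if r == 0:
        return "".join(map(str, digits)), ""   # terminates
    start = seen[r]                            # pigeonhole: r came back
    return ("".join(map(str, digits[:start])),
            "".join(map(str, digits[start:])))

assert decimal_expansion(1, 7) == ("", "142857")   # cycle length 6 = 7 - 1
assert decimal_expansion(1, 8) == ("125", "")      # terminates
assert decimal_expansion(1, 6) == ("1", "6")       # 0.1666...
```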

V. If a is a root of a polynomial p(x), then x-a divides p(x).
One of which is this! The division algorithm generalizes in a natural way from the natural numbers…
b divided by a has a remainder less than a
…to polynomials (technically, polynomial rings over a field):
p(x) divided by q(x) has a remainder which is a polynomial of degree less than q.
Because of this, we get this awesome connection between the behavior of a polynomial as a function (for what x is p(x) equal to zero?) and its behavior as an object to be factored (what other polynomials divide p(x)?).

The converse is less startling to me – if x-a is a factor of p(x), then when x=a, x-a=0, so p(x)=0 at x=a. The interesting part is the result I stated – all we have to know is p(a)=0 to know x-a is a factor. And as I mentioned, this comes from the division algorithm:
p(x) / (x-a) has a remainder of degree lower than x-a – i.e. a constant.
So p(x) = (some polynomial)*(x-a) + const.
If p(a) = 0, then evaluating this equation at a gives
0 = (some polynomial, evaluated at a)*0 + const.
The constant must be 0, i.e. the remainder must be 0, i.e. x-a is a factor of p(x). Boom!
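The whole argument fits in a few lines of code (a sketch of mine): synthetic division is just the division algorithm for a linear divisor, and the remainder that falls out of it is always p(a), so p(a)=0 exactly when x-a is a factor:

```python
def synthetic_division(coeffs, a):
    """Divide the polynomial with these coefficients (highest degree
    first) by (x - a).  Returns (quotient_coeffs, remainder)."""
    quotient = [coeffs[0]]
    for c in coeffs[1:]:
        quotient.append(c + a * quotient[-1])
    return quotient[:-1], quotient[-1]

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
p = [1, -6, 11, -6]

q, r = synthetic_division(p, 2)   # a = 2 is a root...
assert r == 0                     # ...so the remainder is 0
assert q == [1, -4, 3]            # quotient is x^2 - 4x + 3

q, r = synthetic_division(p, 5)   # a = 5 is not a root...
assert r == 5**3 - 6*5**2 + 11*5 - 6 == 24   # ...remainder is p(5)
```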

VI. The fundamental theorem of calculus
Oh my goodness. Don’t even get me started. How am I trying to do this quickly? If the pythagorean theorem felt like it was connecting disparate realms (the geometry of a right angle and the algebra of a^2+b^2=c^2), the fundamental theorem of calculus is connecting disparate planets. If f tells you about g’s speed, then g tells you about f’s… area? WHAT??? I need to move on because I’m going to wax poetical all night long if I don’t stop. This was the theorem that sealed the deal between me and math.

VII. Every prime of the form 4n+1 is a sum of squares in exactly one way.
VIII. Every element of SOn is a composition of rotations in orthogonal planes.
IX. A function C->C that is analytic in a neighborhood has a convergent Taylor expansion equal to itself in that neighborhood.

I have a thing for when you’re confused about some question about really familiar, concrete objects and you eventually answer it by considering it in the broader context of really fanciful, crazy objects. All three of these theorems represent for me the weird power of complex numbers and other crazy abstractions to answer questions that are about real numbers. (In the first case, natural numbers.)

There are proofs of Fermat’s theorem about primes of the form 4n+1 being sums of squares that don’t involve complex numbers, but the proof I relate to the most certainly does. I just wrote it out but then I realized that it is kind of technical so I’m putting it at the end as an appendix. I don’t know if the theorem about SOn has proofs that don’t involve complex numbers but the proof I know certainly and powerfully does. A sketch of it is at the end too.

The Taylor expansion is a slightly different but related story for me. It used to confuse me that functions like sin x and e^x are equal to their Taylor expansions everywhere, but then there is that funky example: f(x) = e^-(1/x^2) if x isn’t 0, and f(x) = 0 if x is 0.
This function is continuous and differentiable for all real x. At x=0, it and all its derivatives are 0. So its Taylor expansion at 0 is identically 0. But the function itself is totally not 0 at all. What the hell?

Once again, broadening out into the complex plane explains everything. The function f(x) = e^-(1/x^2) isn’t defined at x=0. When you’re just looking at the real line, it looks like you can plug the hole with f(0)=0 and everything will be smooth and nice. But if you look at the whole complex plane, x=0 is an essential singularity of f. f’s behavior in the neighborhood of 0 is actually completely bananas. They fooled us by only showing us a tiny, well-behaved slice when they showed us the real line. If a function is really continuous and differentiable in a whole complex neighborhood, it will equal its Taylor expansion there. Whew!
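You can watch the bananas-ness numerically (a quick sketch of mine): step toward 0 along the real axis and f rushes to 0, but step toward 0 along the imaginary axis and 1/x^2 flips sign, so f explodes:

```python
import cmath

def f(x):
    return cmath.exp(-1 / x**2)

for t in [0.5, 0.2, 0.1]:
    real_val = abs(f(t))        # e^(-1/t^2): rushes to 0
    imag_val = abs(f(t * 1j))   # e^(+1/t^2): blows up
    print(f"t = {t}: |f(t)| = {real_val:.3g}, |f(it)| = {imag_val:.3g}")
```

By t = 0.1, |f(t)| is already around e^-100 while |f(it)| is around e^+100 — no value of f(0) can smooth that over.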

X. The fundamental theorem of Galois theory
I’m not even going to state this one. I could use avoiding the technicalities as an excuse, but the real reason is that I’m avoiding spoilers. The F. T. of G. T. is the punchline of an amazing mathematical journey. The way I learned it kind of killed it. The teacher stated it, showed us examples, and then spent a few classes proving it. Then, it was applied to the problem it had originally come into being to deal with (the insolubility of the quintic). I had been told the answer before I had asked the question, as it were. I feel a sense of loss that I never got to see it shimmering in the distance as I searched. This spring I’m teaching an informal course on group theory, hopefully culminating in Galois theory, for some teacher friends. I’m gonna do my darndest to bring out the fundamental theorem in an organic, motivated and natural way. I’ll let you know how it goes.

Appendices:

Proof that an odd prime p is a sum of 2 squares if and only if p has the form 4n+1.

Let p be a prime. If p=a^2+b^2 then p factors in the ring of Gaussian integers – it equals (a+bi)(a-bi). Likewise, the only way for p to factor in the ring of Gaussian integers is (a+bi)(a-bi). (This is because if (a+bi) is a factor of p, say p=(a+bi)(c+di), then its conjugate (a-bi) is too, and (a+bi)*(a-bi) is a real factor of p^2, so it is 1, p or p^2. But if it were 1 or p^2, the factorization p=(a+bi)(c+di) would be trivial – in the first case, a+bi would be a unit, in the second case, c+di would. So if (a+bi) is really a nontrivial factor of p, then p=(a+bi)(a-bi).) So p is a sum of squares if and only if p stops being prime when you venture out into the ring of Gaussian integers.

I.e. p is a sum of squares if and only if (p) is not a prime ideal of Z[x]/(x^2+1).
I.e. p is a sum of squares if and only if Z[x]/(x^2+1,p) is not an integral domain.
I.e. p is a sum of squares if and only if Fp[x]/(x^2+1) is not an integral domain.
I.e. p is a sum of squares if and only if x^2+1 factors in Fp[x].
I.e. p is a sum of squares if and only if -1 is a square in the field Fp.

Now the multiplicative group of Fp is cyclic of order p-1, and (if p>2) -1 has order 2 in this group, so if g is a generator for the group, -1=g^[(p-1)/2]. -1 is a square if and only if the exponent (p-1)/2 is even, i.e. if p-1 is a multiple of 4.

So if p is a prime >2, it is a sum of squares if and only if p-1 is a multiple of 4, in other words if p has the form 4n+1.

The uniqueness of the representation as a sum of two squares follows from the fact that Z[x]/(x^2+1) is a unique factorization domain, which comes from the fact that it has a Euclidean division algorithm (big ups to the division algorithm again!). Thus p only factors one way: p=a^2+b^2=(a+bi)(a-bi). If p=c^2+d^2 too, then p would =(c+di)(c-di) too, but this contradicts unique factorization.
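A brute-force check (my own, no substitute for the proof) of both halves — existence for 4n+1 primes, uniqueness, and the total absence of representations for 4n+3 primes:

```python
from itertools import combinations_with_replacement

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def two_square_representations(p):
    """All ways to write p = a^2 + b^2 with 0 < a <= b."""
    bound = int(p**0.5) + 1
    return [(a, b)
            for a, b in combinations_with_replacement(range(1, bound), 2)
            if a * a + b * b == p]

for p in (q for q in range(3, 500) if is_prime(q)):
    reps = two_square_representations(p)
    if p % 4 == 1:
        assert len(reps) == 1, p   # exactly one representation
    else:
        assert len(reps) == 0, p   # 4n+3 primes: none at all
```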

Outline of proof that any element of SOn is a composition of rotations in orthogonal planes:
Take an element P of SOn and regard it as having complex entries, even though they’re really real. Then P is unitary, so by the spectral theorem for unitary operators, it has an orthonormal basis of eigenvectors (which may be complex). The eigenvalues all have absolute value 1, their product is 1, and the complex ones come in conjugate pairs because P’s entries, and hence the coefficients of its characteristic polynomial, are real. Consider the subspace A of C^n spanned by a pair of eigenvectors associated to a pair of conjugate eigenvalues. Then consider the subspace A’ of A consisting of strictly real vectors (i.e. A’ is A intersect R^n). A’ is a plane, and the restriction of the action of P to A’ is a rotation. (This step takes some doing.) Other similar subspaces constructed as A’ was are all orthogonal to it since the basis of eigenvectors is orthonormal. If there are real eigenvalues, they are all 1 or pairs of -1s since the product of all eigenvalues is det P=1. Every pair of -1s gives a rotation thru pi in yet a new orthogonal plane, and the 1’s represent fixed subspaces. Thus the net effect of P is a composition of orthogonal rotations.

One corollary is the amazing fact that the composition of two rotations in 3-space about completely different axes is a rotation! This is a corollary because in 3-space there isn’t room for 2 orthogonal planes – that would require 4 dimensions. So every element of SO3 is a rotation in a single plane. Since SO3 is closed under composition (it’s a group), composing 2 rotations gives a new element of SO3, which must also be a rotation.
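And a numerical sketch (my own) of the corollary itself: compose rotations about two completely different axes and check that the result is still orthogonal and still has a fixed axis of its own — the axis formula used below, v = (P32-P23, P13-P31, P21-P12), is the standard one read off from the antisymmetric part of a 3D rotation matrix:

```python
import math

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    """Rotation by angle t about the x-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Compose rotations about two completely different axes:
P = matmul(rot_z(0.7), rot_x(1.2))

# P is still orthogonal (P^T P = I)...
Pt = [[P[j][i] for j in range(3)] for i in range(3)]
PtP = matmul(Pt, P)
assert all(abs(PtP[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(3) for j in range(3))

# ...and it fixes an axis, so it is a single rotation about that axis:
v = [P[2][1] - P[1][2], P[0][2] - P[2][0], P[1][0] - P[0][1]]
Pv = [sum(P[i][k] * v[k] for k in range(3)) for i in range(3)]
assert all(abs(Pv[i] - v[i]) < 1e-12 for i in range(3))
```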
