## February 13, 2005

### "...you just get used to them"

Posted by shonk at 04:37 AM in Geek Talk | TrackBack

“Young man, in mathematics you don’t understand things, you just get used to them.” —John von Neumann^{1}

This, in a sense, is at the heart of why mathematics is so hard. Math is all about abstraction, about generalizing the stuff you can get a sense of to apply to crazy situations about which you otherwise have no insight whatsoever. Take, for example, one way of understanding the manifold structure on *SO(3)*, the special orthogonal group on 3-space. In order to explain what I’m talking about, I’ll have to give several definitions and explanations and each, to a greater or lesser extent, illustrates both my point about abstraction and von Neumann’s point about getting used to things.

First off, *SO(3)* has a purely algebraic definition as the set of all real (that is to say, the entries are real numbers) 3 × 3 matrices *A* with the property *A^{T}A = I* and determinant 1. That is, if you take *A* and flip rows and columns, you get the transpose of *A*, denoted *A^{T}*; if you then multiply this transpose by *A*, you get the identity matrix *I*. The determinant has its own complicated algebraic definition (the unique alternating, multilinear functional…), but it’s easy to compute for small matrices and can be intuitively understood as a measure of how much the matrix “stretches” vectors. Now, as with all algebraic definitions, this is a bit abstruse; also, as is unfortunately all too common in mathematics, I’ve presented all the material slightly backwards.
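To see the algebraic definition in action, here is a quick numerical check (a sketch in Python, my own illustration rather than anything from the post): a rotation about the z-axis satisfies both defining conditions.

```python
import math

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(A):
    # Cofactor expansion along the first row.
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

# Rotation by angle t about the z-axis: a typical element of SO(3).
t = math.pi / 3
A = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]

AtA = matmul(transpose(A), A)

# A^T A = I (up to floating-point error) and det A = 1.
assert all(abs(AtA[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
assert abs(det3(A) - 1.0) < 1e-12
```

Any product of such rotations about the coordinate axes passes the same two checks, which is the computational face of *SO(3)* being a group.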

This is natural, because it seems obvious that the first thing to do in any explication is to define what you’re talking about, but, in reality, the best thing to do in almost every case is to first explain what the things you’re talking about (in this case, special orthogonal matrices) *really are* and why we should care about them, and only then give the technical definition. In this case, special orthogonal matrices are “really” the set of all rotations of plain ol’ 3 dimensional space that leave the origin fixed (another way to think of this is as the set of linear transformations that preserve length and orientation; if I apply a special orthogonal transformation to you, you’ll still be the same height and width and you won’t have been flipped into a “mirror image”). Obviously, this is a handy thing to have a grasp on and this is why we care about special orthogonal matrices. In order to deal with such things rigorously it’s important to have the algebraic definition, but as far as *understanding* goes, you need to have the picture of rotations of 3 space in your head.

Okay, so I’ve explained part of the sentence in the first paragraph where I started throwing around arcane terminology, but there’s a bit more to clear up; specifically, what the hell is a “manifold”, anyway? Well, in this case I’m talking about differentiable (as opposed to topological) manifolds, but I don’t imagine that explanation helps. In order to understand what a manifold is, it’s very important to have the right picture in your head, because the technical definition is about ten times worse than the special orthogonal definition, but the basic idea is probably even simpler. The intuitive picture is that of a smooth surface. For example, the surface of a sphere is a nice 2-dimensional manifold. So is the surface of a donut, or a saddle, or an idealized version of the rolling hills of your favorite pastoral scene. Slightly more abstractly, think of a rubber sheet stretched and twisted into any configuration you like so long as there are no holes, tears, creases, black holes or sharp corners.

In order to rigorize this idea, the important thing to notice about all these surfaces is that, if you’re a small enough ant living on one of these surfaces, it looks indistinguishable from a flat plane. This is something we can all immediately understand, given that we live on an oblate spheroid that, because it’s so much bigger than we are, looks flat to us. In fact, this is very nearly the precise definition of a manifold, which basically says that a manifold is a topological space (read: set of points with some important, but largely technical, properties) where, at any point in the space, there is some neighborhood that looks identical to “flat” euclidean space; a 2-dimensional manifold is one that looks locally like a plane, a 3-dimensional manifold is one that looks locally like normal 3-dimensional space, a 4-dimensional manifold is one that looks locally like normal 4-dimensional space, and so on.
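For the record, the “looks locally flat” condition can be sketched symbolically (a standard formulation, not spelled out in the original):

```latex
% An n-dimensional manifold: every point has a neighborhood
% homeomorphic to an open piece of euclidean n-space.
\text{For every } p \in M \text{ there exist an open set } U \ni p
\text{ and a homeomorphism } \varphi : U \to V \subseteq \mathbb{R}^n.
% For a differentiable manifold, overlapping charts must be compatible:
% the transition maps \psi \circ \varphi^{-1} are required to be smooth.
```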

In fact, these spaces look so much like normal space that we can do calculus on them, which is why the subject concerned with manifolds is called “differential geometry”. Again, the reason why we would want to do calculus on spaces that look a lot like normal space but aren’t is obvious: if we live on a sphere (as we basically do), we’d like to be able to figure out how to, e.g., minimize our distance travelled (and, thereby, fuel consumed and time spent in transit) when flying from Denver to London, which is the sort of thing for which calculus is an excellent tool that gives good answers; unfortunately, since the Earth isn’t flat, we can’t use regular old freshman calculus.^{2} As it turns out, there are all kinds of applications of this stuff, from relatively simple engineering to theoretical physics.
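As an illustration of doing geometry on the sphere rather than the plane, here is a sketch of that distance computation (spherical law of cosines; the coordinates for Denver and London are approximate, and the code is my own illustration):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Shortest distance along the surface of a sphere of the given radius."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Spherical law of cosines gives the central angle between the points.
    angle = math.acos(math.sin(p1) * math.sin(p2)
                      + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return radius_km * angle

# Denver to London (approximate coordinates, degrees).
d = great_circle_km(39.74, -104.99, 51.51, -0.13)  # roughly 7,500 km
```

A flat-plane formula applied to latitude/longitude would get this badly wrong precisely because the chart only works locally; the great-circle answer is what calculus on the manifold delivers.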

So, anyway, the point is that manifolds look, at least locally, like plain vanilla euclidean space. Of course, even the notion of “plain vanilla euclidean space” is an abstraction beyond what we can really visualize for dimensions higher than three, but this is exactly the sort of thing von Neumann was talking about: you can’t really visualize 10 dimensional space, but you “know” that it looks pretty much like regular 3 dimensional space with 7 more axes thrown in at, to quote Douglas Adams, “right angles to reality”.

Okay, so the claim is that *SO(3)*, our set of special orthogonal matrices, is a 3-dimensional manifold. On the face of it, it might be surprising that the set of rotations of three space should itself look anything like three space. On the other hand, this sort of makes sense: consider a single vector (say of unit length, though it doesn’t really matter) based at the origin and then apply *every* possible rotation to it. This will give us a set of vectors based at the origin, all of length 1 and pointing any which way you please. In fact, if you look just at the heads of all the vectors, you’re just talking about a sphere of radius 1 centered at the origin. So, in a sense, the special orthogonal matrices look like a sphere. This is both right and wrong; the special orthogonal matrices *do* look a lot like a sphere, but like a 3-sphere (that is, a sphere living in four dimensions), not a 2-sphere (i.e., what we usually call a “sphere”).

In fact, locally *SO(3)* looks almost *exactly* like a 3-sphere; globally, however, it’s a different story: *SO(3)* looks globally like ℝP^{3}, which requires one more excursion into the realm of abstraction. ℝP^{3}, or real projective 3-space, is an abstract space where we’ve taken regular 3-space and added a “plane at infinity”. This sounds slightly wacky, but it’s a generalization of what’s called the projective plane, which is basically the same thing but in a lower dimension. To get the projective plane, we add a “line at infinity” rather than a plane, and the space has this funny property that if you walk through the line at infinity, you get flipped into your mirror image; if you were right-handed, you come out the other side left-handed (and on the “other end” of the plane). But not to worry: if you walk across the infinity line again, you get flipped back to normal.

Okay, sounds interesting, but how do we visualize such a thing? Well, the “line at infinity” thing is good, but infinity is pretty hard to visualize, too. Instead we think about twisting the sphere in a funny way:

You can construct the projective plane as follows: take a sphere. Imagine taking a point on the sphere, and its antipodal point, and pulling them together to meet somewhere inside the sphere. Now do it with another pair of points, but make sure they meet somewhere else. Do this with every single point on the sphere, each point and its antipodal point meeting each other but meeting no other points. It’s a weird, collapsed sphere that can’t properly live in three dimensions, but I imagine it as looking a bit like a seashell, all curled up on itself. And pink.

This gives you the real projective plane, ℝP^{2}. If you do the same thing, but with a 3-sphere (again, remember that this is the sphere living in four dimensions), you get ℝP^{3}. Of course, you can’t even really visualize ℝP^{2} or, for that matter, a 3-sphere, so *really* visualizing ℝP^{3} is going to be out of the question, but we have a pretty good idea, at least by analogy, of what it is. This is, as von Neumann indicates, one of those things you “just get used to”.
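One way to get used to the antipodal identification computationally (a small sketch of my own, not from the post): name each point of projective space by a canonical representative of the antipodal pair, so that a point on the sphere and its antipode collapse to the same name.

```python
def projective_rep(v):
    """Canonical representative of the antipodal pair {v, -v}: flip the
    sign, if necessary, so the first nonzero coordinate is positive."""
    for x in v:
        if x != 0:
            return v if x > 0 else tuple(-c for c in v)
    raise ValueError("the zero vector names no projective point")

v = (0.6, -0.8, 0.0)          # a point on the unit 2-sphere
antipode = tuple(-c for c in v)

# The point and its antipode name the same point of the projective plane.
assert projective_rep(v) == projective_rep(antipode)
```

The same trick works one dimension up: pairs of antipodal points on the 3-sphere, one name each, is exactly the “collapsed sphere” picture.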

Now, as it turns out, if you do the math, *SO(3)* and ℝP^{3} look the same in a very precise sense (specifically, they’re diffeomorphic). On the face of it, of course, this is patently absurd, but if you have the right picture in mind, this is the sort of thing you might have guessed. The basic idea behind the proof linked above is that we can visualize 3-space as living inside 4-space (where it makes sense to talk about multiplication, namely quaternion multiplication); here, a rotation (remember, that’s all the special orthogonal matrices/transformations really are) is just like conjugating by a point on the sphere. And certainly conjugating by a point is the same as conjugating by its antipodal point, since the minus signs will cancel each other in the latter case. But this is exactly how we visualized ℝP^{3}: as the points on the sphere with antipodal points identified!
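That cancellation of minus signs is easy to check numerically. Below is a sketch (hand-rolled quaternions in Python, my own illustration rather than the linked proof): a unit quaternion *q* and its antipode *-q* rotate a vector identically.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate the 3-vector v by the unit quaternion q via conjugation."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)                 # inverse of a unit quaternion
    _, rx, ry, rz = qmul(qmul(q, (0.0,) + v), conj)
    return (rx, ry, rz)

# Unit quaternion for a 90-degree rotation about the z-axis.
t = math.pi / 2
q = (math.cos(t / 2), 0.0, 0.0, math.sin(t / 2))
neg_q = tuple(-c for c in q)               # the antipodal point on the 3-sphere

v = (1.0, 0.0, 0.0)
r1, r2 = rotate(q, v), rotate(neg_q, v)

# q and -q are antipodal, yet give the same rotation: the double cover
# behind SO(3) being the 3-sphere with antipodal points identified.
assert all(abs(a - b) < 1e-12 for a, b in zip(r1, r2))
assert all(abs(a - b) < 1e-12 for a, b in zip(r1, (0.0, 1.0, 0.0)))
```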

I’m guessing that most of the above doesn’t make a whole lot of sense, but I would urge you to heed von Neumann’s advice: don’t necessarily try to “understand” it so much as just to “get used to it”; the understanding can only come after you’ve gotten used to the concepts and, most importantly, the pictures. Which was really, I suspect, von Neumann’s point, anyway: of course we can understand things in mathematics, but we can only understand them after we suspend our disbelief and allow ourselves to get used to them. And, of course, make good pictures.

^{1}This, by the way, is my second-favorite math quote of the year, behind my complex analysis professor’s exhortation, right before discussing poles vs. essential singularities, to “distinguish problems that are real but not serious from those that are really serious.”

^{2} As a side note, calculus itself is a prime example of mathematical abstraction. The problem with the world is that most of the stuff in it isn’t straight. If it were, we could have basically stopped after the Greeks figured out a fair amount of geometry. And, even worse, not only is non-straight stuff (like, for example, a graph of the position of a falling rock plotted against time) all over the place, but it’s hard to get a handle on. So, instead of just giving up and going home, we *approximate* the curvy stuff in the world with straight lines, which we have a good grasp of. As long as we’re dealing with stuff that’s curvy (rather than, say, broken into pieces) this actually works out pretty well and, once you get used to it all, it’s easy to forget what the whole point was, anyway (this, I suspect, is the main reason calculus instruction is so uniformly bad; approximating curvy stuff with straight lines works *so well* that those who are supposed to teach the process lose sight of what’s really going on).
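The footnote’s point can be made concrete (a standard finite-difference sketch, my own rather than the post’s): approximate the slope of the falling-rock curve by a straight line through two nearby points.

```python
# Position of a falling rock: s(t) = (1/2) g t^2, with g = 9.8 m/s^2.
g = 9.8

def s(t):
    return 0.5 * g * t * t

# Approximate the velocity at t = 2 s by the slope of a straight line
# (a secant) through two nearby points on the curve.
t, h = 2.0, 1e-6
approx_v = (s(t + h) - s(t)) / h

# The exact derivative is g * t = 19.6 m/s; the straight-line
# approximation lands within a hair of it.
assert abs(approx_v - g * t) < 1e-4
```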

You sound like an epistemological behaviorist.

Posted by: Curt at February 13, 2005 05:57 AM

Speaking of quotes, it also goes well with what Max Planck said about how new concepts are assimilated in science: "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

Posted by: Curt at February 13, 2005 06:01 AM

*You sound like an epistemological behaviorist.*

How so?

Posted by: shonk at February 13, 2005 12:52 PM

BOOOOOOOOORING.

Why can't you guys talk about something more interesting. Like what's happening at NT, or who has a new blog.

Posted by: Aaron at February 13, 2005 01:06 PM

The Hoppe Topic bores me.

Posted by: shonk at February 13, 2005 02:19 PM

*Why can't you guys talk about something more interesting. Like what's happening at NT, or who has a new blog.*

Live by the nerds, die by the nerds. Good thing the Internet has saved us from the issue of commercial viability.

Posted by: Curt at February 14, 2005 08:46 AM

*In other words there doesn't seem to be any intermediate stage of understanding a concept between learning it and reproducing it (or applying it in the more PC language).*

I guess that depends on what your definition of "learning it" is. Certainly one can know the statement of a theorem, or even its proof, without really being able to apply it, and obviously knowing what the theorem says is a first step to acquiring a full understanding of it, but yeah, I guess I would mostly agree with the notion that you don't "really" know the theorem until you know how to use it. For example, I can tell you what Freyd's Embedding Theorem says, but I can't really tell you why it's true or how to use it (okay, I have some vague idea of how it's used, but still).

But I'm not sure I would generalize this; there are some results that are almost ends in themselves, like the Incompleteness Theorem or Fermat's Last Theorem, stuff that you wouldn't necessarily apply, but that there is value in knowing how those results were achieved. Of course, in some instances this is because the proof is actually more profound than the result (e.g. Wiles' proof of FLT; his true achievement was in proving Taniyama-Shimura, which is massively useful).

And certainly outside of mathematics I don't think the notion universalizes at all; it doesn't even really make sense to talk about "applying" knowledge of, say, *Don Quijote*, but that doesn't mean that one can't understand it (at least to some relatively high degree), nor that understanding it isn't important or desirable.

I'm not talking about *Don Quijote* or Fermat's Last Theorem; I was just interpreting what you said about the geometric concepts that you use and apply on a regular basis but can't really conceptualize.

Ah, right. Well, okay, although with the caveat that I can conceptualize most of it abstractly, I just can't necessarily visualize it. And, to be technical, visualizing isn't *mandatory*; as you can tell if you read the linked proof, the proofs are, ultimately, algebraic. But visualizing helps tremendously in terms of guiding one's intuition, telling you what *should* be true; then you use the algebra to justify the intuition.

Although I've never done either, I imagine it's a lot like painting a picture or writing a symphony; you can see/hear the scene/song in your head, but there's still a lot of technical hurdles to be overcome in terms of transferring what's in your head to canvas/paper...and, sometimes, it just can't be done, or it turns out that what looked/sounded good in one's head is just crap once it's been tangibly reproduced.

Posted by: shonk at February 14, 2005 09:44 PM

Just came across a quote that encapsulates much of what I was saying very nicely:

"The art of doing mathematics consists in finding that special case which contains all the germs of generality."

—David Hilbert

Posted by: shonk at February 18, 2005 10:42 PM
