February 26, 2005

Bentham's mummified corpse, like Lenin's, remains fresh in appearance

Posted by Curt at 08:27 AM | permalink | 4 comments

It’s almost comforting that such invidious fluffy-minded sludge as this is floating around, since, like religion, it seems to keep the middle-brows hypnotized by “beautiful sentiments” which are so vague as to keep them from actually getting together and doing anything. It’s sort of weird to hear this weakly Marxist social-democratic pap, which used to be shouted from the rooftops, now being whispered in a low monotonous whine. The author avows his fealty to Jeremy Bentham, not Marx, and calls it utilitarianism, not Marxism, but there are many illegitimate fathers along this line of thought.

The root of the idea is that, now that neuroscience has supposedly made it possible to actually identify what makes us happy, the idea of happiness has become quantifiable, and hence a program of providing the greatest happiness to the greatest number of people has become objectively possible. However, the author does not make the slightest effort to apply these wonders of modern science to actually determining what the alleged sources of human happiness are. The neuroscience tack is really just a defensive ploy to ward off the eternal charges that utilitarianism is simply a euphemism for an authoritarian imposition of values. As for his positive program for what constitutes human happiness, it consists of the usual liberal middle-class canards, with, not surprisingly, a socialist edge: more time to spend with family, a decent wage for everyone, blah blah blah. But he seems to make two criminally unsubstantiated assumptions: one is that these sources are essentially the same for everyone, or at least could be under certain conditions, and the other is that they do not inherently conflict with anyone else’s.

I say “could be under certain conditions” because, in evaluating our current society, he seems to privilege envy of others’ material well-being as the principal determinant of happiness. His theory is that above a certain level of material subsistence people are motivated primarily by status-seeking and the desire for a high rank within their social group. Therefore, the increasing wealth of the society will not increase happiness, because people measure their well-being relative to the group, not by their absolute prosperity. This has always been a flaw in the concept of the “war against poverty”; I’m not sure it’s much of an argument for socialist economic redistribution. But actually, if you read his section on the value of income taxes carefully, he doesn’t even seem to be arguing that they are useful insofar as they can be redirected to the less prosperous, although he does evidently believe that a certain amount of money contributes more to the happiness of a poor person than to that of a rich one. Rather, he seems to think that taking money away from the prosperous is valuable in and of itself, because it will supposedly make them less focused on the “rat race,” more family-oriented, etc., etc. In short, he seems to be advocating a net impoverishment of society.

All of which may be consistent with the program of a good little socialist, but does not necessarily accord marvelously with his own evidence about the supposedly quantified happiness of humanity. The research that he cites (without ever getting specific) supposedly indicates that people’s feeling of happiness has not risen in the last half-century, but he does not cite anything which indicates that it has necessarily declined. He cites rising rates of depression and crime as presumably implicit indicators of greater unhappiness, but he does not seem to acknowledge the possibility that in our hyper-medicated and surveillance-based society perhaps people simply report depression and crime more. In any event, if roughly similar numbers of people today as in the ’50s report themselves happy (and we believe them), despite the increase in prosperity, that might perhaps indicate that happiness is not fixed to material well-being. Which may be consistent with his general point, but not with his idea of increasing happiness by manipulating income levels.

And even if it did, it seems rather difficult to countenance any social program predicated upon appealing to one of humanity’s most depraved instincts, namely envy. The author acknowledges that his ideal of taxation is mainly motivated by the desire to pander to people’s envy, but he seems to think that their envy will be sated by the loss of prosperity of those around them and that after that point there will be no more of it. So the envy of the less prosperous will be satisfied by the losses accrued by the more prosperous, which will somehow not be counter-balanced by the chagrin of the more prosperous at the prospect of seeing their status diminished. Very logical.

One of the more egregious presumptions of utilitarians is that non-utilitarian social systems somehow aren’t concerned with seeking the greatest good for the greatest number of people. On the contrary, that’s the defining problem of practically every social and political theory I can think of, and they all either seek or claim to have found the answer—whether such a solution exists, I have my doubts, but that’s why I’m a skeptic about politics. This is a handy trick by utilitarians: they say “I believe in the greatest good for the greatest number of people.” Which is practically begging the question: “As opposed to whom?” It’s useful because it tends to conceal the fact that their real agenda is generally somewhat more specific, and tends to consist in the autocratic notion that one or two measures of social living can be authoritatively determined to be the sources of happiness, and then divided up in a centralized fashion. Those that are the most insistent on the idea of liberty are generally those that are the most skeptical about the possibility of the notion of happiness being either quantitatively defined or generalizable. In other words, only individuals can determine their own sources of happiness.

For the author, on the other hand, the fact that certain stimuli trigger certain areas of the brain at the times when test subjects profess pleasure has solved the problem of determining happiness. Of course, as mentioned, he never really bothers with the results that those studies have yielded. Somehow the fact that he considers envy to be a principal element of human happiness does not, in his view, place very severe limits on the harmoniousness of individual happiness. Nor does it constitute a tyranny of the majority, because he claims that in an ideal utilitarian society the happiness of the most unhappy would be considered of pre-eminent importance. Of course, at the beginning of the article he cited the equal importance of each individual’s happiness as the founding tenet of his theory, but I’m sure it all sorts itself out in the end.

Among social factors responsible for unhappiness, he cites divorce and unemployment as of pre-eminent importance. Of course, rates of both divorce and unemployment in the crassly materialistic and religious United States are much lower than in the much more overtly utilitarian-embracing Europe, but it would be a bit embarrassing for him to admit this after avowing that all traditional value-systems outside of utilitarianism and “individualism” are dead.

Personally the question of the greatest happiness for the greatest number of people doesn’t exactly compel me constantly, although the issue of personal happiness tends to impose itself intransigently. I would have thought that evolutionary biology would have provided an adequate explanation of this, as well as the recurrence of what we call altruism. But such an idea of course suggests that happiness, whatever that is, is not really the point of our little existences, and that the more imperious competitiveness of life will ultimately subvert all of these little trifles of pleasure and pain. But in the meantime, we have these debased statistical notions of happiness to amuse us in an idle hour.

It seems to me that if one’s “objective” measure of happiness is electrical stimulation in the cerebral cortex, the most efficient utilitarian solution to the problem of human happiness would be to strap everyone onto hospital gurneys and stimulate the “happiness” part of their brains all day long. If one does not wish to be this deterministic about it, perhaps one should allow more latitude to individuals to discover their own conception of happiness. Personally, I have found happiness generally to be an idea for the unhappy and something rarely spoken of by the happy; its mention practically guarantees that it is not present in the environment where it is uttered. I don’t deny that what you might call love is the real bridge between personal happiness and moral obligations, and the only true means by which the desires of oneself and of others are united, but such a sentiment can never be mandated; it is entirely resistant to intellectual compulsion. Utilitarianism, which sometimes does a decent job of faking morality, is nevertheless ultimately predicated on the pleasure principle, and hence is wholly inadequate to uniting the moral and the pleasurable except when love truly pertains. In that case, of course, political theory is entirely superfluous, which is why this is all a waste of time.

p.s. I don’t claim that people’s behavior necessarily reflects what really would make them happy, but presumably it does at least reflect what they consciously value. Hence, if I were the author I would have been a bit skeptical of using the results of “surveys” of what people claim to value when the results don’t correlate with their behavior, i.e. they claim that spending time with family is most important, but they spend a disproportionate amount of time working (at least according to him). So either people are not really being forthright (consciously or unconsciously) in responding to surveys, or there is not actually a problem of priorities. In either case, he’s way over-valuing surveys as a guide to what will make people happy.

February 13, 2005

"...you just get used to them"

Posted by shonk at 04:37 AM | permalink | 10 comments

“Young man, in mathematics you don’t understand things, you just get used to them.” —John von Neumann1

This, in a sense, is at the heart of why mathematics is so hard. Math is all about abstraction, about generalizing the stuff you can get a sense of to apply to crazy situations about which you otherwise have no insight whatsoever. Take, for example, one way of understanding the manifold structure on SO(3), the special orthogonal group on 3-space. In order to explain what I’m talking about, I’ll have to give several definitions and explanations and each, to a greater or lesser extent, illustrates both my point about abstraction and von Neumann’s point about getting used to things.

First off, SO(3) has a purely algebraic definition as the set of all real (that is to say, the entries are real numbers) 3 × 3 matrices A with the property AᵀA = I and the determinant of A is 1. That is, if you take A and flip rows and columns, you get the transpose of A, denoted Aᵀ; if you then multiply this transpose by A, you get the identity matrix I. The determinant has its own complicated algebraic definition (the unique alternating, multilinear functional…), but it’s easy to compute for small matrices and can be intuitively understood as a measure of how much the matrix “stretches” vectors. Now, as with all algebraic definitions, this is a bit abstruse; also, as is unfortunately all too common in mathematics, I’ve presented all the material slightly backwards.
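To make the algebraic definition concrete, here’s a quick numerical check (a sketch of mine, not anything from the original discussion) that a familiar rotation matrix satisfies both defining properties:

```python
import numpy as np

# A rotation by 60 degrees about the z-axis: a typical element of SO(3)
theta = np.pi / 3
A = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# Property 1: A^T A = I (flip rows and columns, multiply, get the identity)
print(np.allclose(A.T @ A, np.eye(3)))
# Property 2: det(A) = 1 (the matrix doesn't "stretch" or mirror anything)
print(np.isclose(np.linalg.det(A), 1.0))
```

Both checks print `True`; any rotation matrix you write down will pass them.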

This is natural, because it seems obvious that the first thing to do in any explication is to define what you’re talking about, but, in reality, the best thing to do in almost every case is to first explain what the things you’re talking about (in this case, special orthogonal matrices) really are and why we should care about them, and only then give the technical definition. In this case, special orthogonal matrices are “really” the set of all rotations of plain ol’ 3 dimensional space that leave the origin fixed (another way to think of this is as the set of linear transformations that preserve length and orientation; if I apply a special orthogonal transformation to you, you’ll still be the same height and width and you won’t have been flipped into a “mirror image”). Obviously, this is a handy thing to have a grasp on and this is why we care about special orthogonal matrices. In order to deal with such things rigorously it’s important to have the algebraic definition, but as far as understanding goes, you need to have the picture of rotations of 3 space in your head.
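The “you’ll still be the same height and width” claim is also easy to sanity-check numerically (again a sketch of my own, using a z-axis rotation as the example): special orthogonal transformations leave lengths unchanged.

```python
import numpy as np

theta = 0.7  # any angle will do
A = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])  # rotation about the z-axis, an element of SO(3)

v = np.array([1.8, 0.4, 0.3])  # "you": an arbitrary vector in 3-space
# The rotated vector has exactly the same length as the original
print(np.linalg.norm(v), np.linalg.norm(A @ v))
```

This is a one-line consequence of AᵀA = I: the squared length of Av is (Av)ᵀ(Av) = vᵀAᵀAv = vᵀv.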

Okay, so I’ve explained part of the sentence in the first paragraph where I started throwing around arcane terminology, but there’s a bit more to clear up; specifically, what the hell is a “manifold”, anyway? Well, in this case I’m talking about differentiable (as opposed to topological) manifolds, but I don’t imagine that explanation helps. In order to understand what a manifold is, it’s very important to have the right picture in your head, because the technical definition is about ten times worse than the special orthogonal definition, but the basic idea is probably even simpler. The intuitive picture is that of a smooth surface. For example, the surface of a sphere is a nice 2-dimensional manifold. So is the surface of a donut, or a saddle, or an idealized version of the rolling hills of your favorite pastoral scene. Slightly more abstractly, think of a rubber sheet stretched and twisted into any configuration you like so long as there are no holes, tears, creases, black holes or sharp corners.

In order to rigorize this idea, the important thing to notice about all these surfaces is that, if you’re a small enough ant living on one of these surfaces, it looks indistinguishable from a flat plane. This is something we can all immediately understand, given that we live on an oblate spheroid that, because it’s so much bigger than we are, looks flat to us. In fact, this is very nearly the precise definition of a manifold, which basically says that a manifold is a topological space (read: set of points with some important, but largely technical, properties) where, at any point in the space, there is some neighborhood that looks identical to “flat” euclidean space; a 2-dimensional manifold is one that looks locally like a plane, a 3-dimensional manifold is one that looks locally like normal 3-dimensional space, a 4-dimensional manifold is one that looks locally like normal 4-dimensional space, and so on.

In fact, these spaces look so much like normal space that we can do calculus on them, which is why the subject concerned with manifolds is called “differential geometry”. Again, the reason why we would want to do calculus on spaces that look a lot like normal space but aren’t is obvious: if we live on a sphere (as we basically do), we’d like to be able to figure out how to, e.g., minimize our distance travelled (and, thereby, fuel consumed and time spent in transit) when flying from Denver to London, which is the sort of thing for which calculus is an excellent tool that gives good answers; unfortunately, since the Earth isn’t flat, we can’t use regular old freshman calculus.2 As it turns out, there are all kinds of applications of this stuff, from relatively simple engineering to theoretical physics.
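As a down-to-earth instance of doing geometry on the sphere rather than the plane (the coordinates below are rough figures I’m assuming, not anything from the post), the shortest flying distance between Denver and London is a great-circle arc, computable with the haversine formula:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Approximate coordinates: Denver (39.74 N, 104.99 W), London (51.51 N, 0.13 W)
d = great_circle_km(39.74, -104.99, 51.51, -0.13)
print(round(d), "km")  # roughly 7,500 km
```

The flat-map intuition (a straight line on a Mercator projection) gives a noticeably longer route; the great-circle path is why transatlantic flights arc up over Greenland.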

So, anyway, the point is that manifolds look, at least locally, like plain vanilla euclidean space. Of course, even the notion of “plain vanilla euclidean space” is an abstraction beyond what we can really visualize for dimensions higher than three, but this is exactly the sort of thing von Neumann was talking about: you can’t really visualize 10 dimensional space, but you “know” that it looks pretty much like regular 3 dimensional space with 7 more axes thrown in at, to quote Douglas Adams, “right angles to reality”.

Okay, so the claim is that SO(3), our set of special orthogonal matrices, is a 3-dimensional manifold. On the face of it, it might be surprising that the set of rotations of three space should itself look anything like three space. On the other hand, this sort of makes sense: consider a single vector (say of unit length, though it doesn’t really matter) based at the origin and then apply every possible rotation to it. This will give us a set of vectors based at the origin, all of length 1 and pointing any which way you please. In fact, if you look just at the heads of all the vectors, you’re just talking about a sphere of radius 1 centered at the origin. So, in a sense, the special orthognal matrices look like a sphere. This is both right and wrong; the special orthogonal matrices do look a lot like a sphere, but like a 3-sphere (that is, a sphere living in four dimensions), not a 2-sphere (i.e., what we usually call a “sphere”).
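The observation above can be checked numerically (a sketch using my own quick-and-dirty construction of “random” rotations, which needn’t be uniformly distributed for this purpose): hit a fixed unit vector with lots of rotations and every head lands on the unit sphere.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    """A random element of SO(3) via QR decomposition of a Gaussian matrix
    (not exactly uniform over rotations, but any rotation works here)."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))   # standardize signs so q is orthogonal with clean columns
    if np.linalg.det(q) < 0:      # flip a column if we accidentally got a reflection
        q[:, 0] = -q[:, 0]
    return q

v = np.array([0.0, 0.0, 1.0])     # a single unit vector based at the origin
heads = np.array([random_rotation(rng) @ v for _ in range(1000)])

# Every rotated vector still has length 1: the heads trace out the unit sphere
print(np.allclose(np.linalg.norm(heads, axis=1), 1.0))
```

Scatter-plot `heads` and you get a cloud of points on the 2-sphere, which is exactly the (right-and-wrong) picture described above.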

In fact, locally SO(3) looks almost exactly like a 3-sphere; globally, however, it’s a different story. In fact, SO(3) looks globally like RP3, which requires one more excursion into the realm of abstraction. RP3, or real projective 3-space, is an abstract space where we’ve taken regular 3-space and added a “plane at infinity”. This sounds slightly wacky, but it’s a generalization of what’s called the projective plane, which is basically the same thing but in a lower dimension. To get the projective plane, we add a “line at infinity” rather than a plane, and the space has this funny property that if you walk through the line at infinity, you get flipped into your mirror image; if you were right-handed, you come out the other side left-handed (and on the “other end” of the plane). But not to worry, if you walk across the infinity line again, you get flipped back to normal.

Okay, sounds interesting, but how do we visualize such a thing? Well, the “line at infinity” thing is good, but infinity is pretty hard to visualize, too. Instead we think about twisting the sphere in a funny way:

You can construct the projective plane as follows: take a sphere. Imagine taking a point on the sphere, and its antipodal point, and pulling them together to meet somewhere inside the sphere. Now do it with another pair of points, but make sure they meet somewhere else. Do this with every single point on the sphere, each point and its antipodal point meeting each other but meeting no other points. It’s a weird, collapsed sphere that can’t properly live in three dimensions, but I imagine it as looking a bit like a seashell, all curled up on itself. And pink.

This gives you the real projective plane, RP2. If you do the same thing, but with a 3-sphere (again, remember that this is the sphere living in four dimensions), you get RP3. Of course, you can’t even really visualize RP2 or, for that matter, a 3-sphere, so really visualizing RP3 is going to be out of the question, but we have a pretty good idea, at least by analogy, of what it is. This is, as von Neumann indicates, one of those things you “just get used to”.

Now, as it turns out, if you do the math, SO(3) and RP3 look the same in a very precise sense (specifically, they’re diffeomorphic). On the face of it, of course, this is patently absurd, but if you have the right picture in mind, this is the sort of thing you might have guessed. The basic idea behind the proof linked above is that we can visualize 3-space as living inside 4-space (where it makes sense to talk about multiplication); here, a rotation (remember, that’s all the special orthogonal matrices/transformations really are) is just like conjugating by a point on the sphere. And certainly conjugating by a point is the same as conjugating by its antipodal point, since the minus signs will cancel each other in the latter case. But this is exactly how we visualized RP3, as the points on the sphere with antipodal points identified!
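For the curious, the conjugation picture can be played with directly (a sketch of mine using quaternions, the usual way of realizing multiplication in 4-space; none of this code comes from the linked proof): a unit quaternion q rotates a vector by conjugation, and q and its antipode −q produce the identical rotation.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Conjugate the 'pure' quaternion (0, v) by the unit quaternion q."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

q = np.array([np.cos(0.5), np.sin(0.5), 0.0, 0.0])  # unit quaternion: rotation by 1 radian about x
v = np.array([0.0, 1.0, 0.0])

# q and -q give the same rotation: the minus signs cancel under conjugation
print(np.allclose(rotate(q, v), rotate(-q, v)))
```

This prints `True`, and it is precisely the antipodal identification: the unit quaternions form a 3-sphere, and collapsing q with −q gives RP3, alias SO(3).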

I’m guessing that most of the above doesn’t make a whole lot of sense, but I would urge you to heed von Neumann’s advice: don’t necessarily try to “understand” it so much as just to “get used to it”; the understanding can only come after you’ve gotten used to the concepts and, most importantly, the pictures. Which was really, I suspect, von Neumann’s point, anyway: of course we can understand things in mathematics, but we can only understand them after we suspend our disbelief and allow ourselves to get used to them. And, of course, make good pictures.

1 This, by the way, is my second-favorite math quote of the year, behind my complex analysis professor’s imprecation, right before discussing poles vs. essential singularities, to “distinguish problems that are real but not serious from those that are really serious.”

2 As a side note, calculus itself is a prime example of mathematical abstraction. The problem with the world is that most of the stuff in it isn’t straight. If it were, we could have basically stopped after the Greeks figured out a fair amount of geometry. And, even worse, not only is non-straight stuff (like, for example, a graph of the position of a falling rock plotted against time) all over the place, but it’s hard to get a handle on. So, instead of just giving up and going home, we approximate the curvy stuff in the world with straight lines, which we have a good grasp of. As long as we’re dealing with stuff that’s curvy (rather than, say, broken into pieces) this actually works out pretty well and, once you get used to it all, it’s easy to forget what the whole point was, anyway (this, I suspect, is the main reason calculus instruction is so uniformly bad; approximating curvy stuff with straight lines works so well that those who are supposed to teach the process lose sight of what’s really going on).
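The “approximate the curvy stuff with straight lines” idea is easy to watch in action (toy numbers of my own): the slope of a straight secant line through two nearby points on a curvy function closes in on the true instantaneous slope.

```python
def f(t):
    """Height of the falling rock from the footnote (toy numbers)."""
    return -4.9 * t**2 + 100.0

def secant_slope(f, t, h):
    """Slope of the straight line through (t, f(t)) and (t+h, f(t+h))."""
    return (f(t + h) - f(t)) / h

for h in (1.0, 0.1, 0.001):
    print(h, secant_slope(f, 2.0, h))
# As h shrinks, the slopes approach the exact velocity at t = 2, namely -19.6
```

Here the approximation error is exactly −4.9h, so halving the gap between the two points halves the error: the straight lines really do converge on the curvy answer.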

January 28, 2005

The anthropomorphism of religion

Posted by Curt at 04:07 PM | permalink | 5 comments

I might deduce one final consequence of skepticism in regard to temporality and causality. If our only experience of the world is of an existent reality, such that something uncreated or destroyed is literally unimaginable, the superfluity of religion becomes very evident. Modern religions ultimately justify themselves on the basis of a parallel between finite objects, which are presumed to be necessarily created, and the universe in its totality, which in turn therefore needs its Creator; if creation, rather than the lack of it, is taken to be the phenomenon unjustified by experience, then the concept of God is unwarranted.

November 07, 2004

A beginner's guide to producing new results in mathematics

Posted by shonk at 06:45 PM | permalink | 1 comment
  • First, choose a problem that nobody’s ever solved before. Be careful, though: the most common beginner’s mistake is to choose a problem that’s much too difficult. Fermat’s Last Theorem was an old favorite, but since Andrew Wiles proved that a while back, the new favorites are the Riemann Hypothesis and the Poincaré Conjecture. As a beginner, though, you want to pick something that’s within your grasp. I suggest picking two random 200-digit numbers (no need to make them up on your own; get your computer to do that) and adding them together. With probability 1, nobody in the history of the world has ever added these two numbers together before.

  • Next, plug it into Mathematica or Maple. No red-blooded, God-fearing mathematician would ever dream of trying to solve a problem without plugging it into a computer and seeing what the answer is first. Sure, the numerical solution will probably be so vague as to be worthless, but this is how things are done.

  • Now it’s time to get down to business: write the problem up on the chalkboard in your office. Of course, you could just write it down on a sheet of paper, but there are several objections to that. First, paper is much too permanent; if you end up getting stuck and you’re working on a chalkboard, you can always just claim the janitor erased it at a crucial stage. Second, it’s extremely important to always have cryptic and incomprehensible scribbles filling your chalkboard to intimidate students and colleagues that might drop by. It’s even better if you have three or four calculations overlapping each other on the board.

  • You’ve got the problem written down on your chalkboard, so now there’s nothing left to do but to solve the damn thing. Remember to take your time. Cross pieces out, even though you could just erase them. Once you’ve made some progress (and built up a goodly amount of chalk on your fingers), step back to ponder the next step. Rub your chin. Run your hands through your hair. Smooth down your shirtfront. Take a bathroom break. Do whatever it takes to transfer the chalk on your hands to various other parts of your anatomy and attire. Under no circumstances should you try to remove any of this chalk before going to teach your next class. Remember, having chalk smeared all over your face, hair, shirt and crotch is all part of the cachet.

  • At some point throughout the course of the above, we’ll assume you’ve actually solved the problem. Contemplate the beauty of your solution. Write it down on paper in an indecipherable hand. If you feel like it, consider typing it up, but be sure to include a few errors. This is necessary so as to confuse the readers of your solution and throw them off your track. Whenever there’s a difficult calculation, simply gloss over it and state the end result; odds are none of your readers will actually expend the effort to do the calculation themselves, so you’ve covered your ass in case you flipped a sign somewhere.

Congratulations! You’ve just proved a new result in mathematics. Of course, if you really just added two random gigantic numbers, the probability that anybody cares is 0. Better luck next time.
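The first step of the recipe can be carried out in a couple of lines (a throwaway sketch; any language with arbitrary-precision integers will do):

```python
import random

# Two random 200-digit numbers: the lower bound guarantees exactly 200 digits
a = random.randrange(10**199, 10**200)
b = random.randrange(10**199, 10**200)
result = a + b

print(len(str(a)), len(str(b)))  # 200 200
print(result)                    # your brand-new result in mathematics
```

Python’s integers are unbounded, so no numerical vagueness intrudes; the probability that anybody cares remains 0.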

November 01, 2004

Take that, logical positivists!

Posted by Curt at 11:38 AM | permalink | 3 comments

A propos of my brother’s last post and the comments regarding it, here’s another example of a flaw in the application of logical principles to reality. Context: in my philosophy of science class I’m reading “The Philosophy of Natural Science” by Carl Hempel. Hempel was a member of the Vienna Circle, much devoted to Carnap, etc. Today he is probably most famous for formulating the so-called “paradox of confirmation” for logical statements, which as I understand it goes as follows (I’m using his example): given two logically equivalent statements, such as for example (1) “all crows are black” and (2) “all non-black objects are not crows,” any evidence p which supports one of the statements supports the other as well. Hence, for example, the statement p “object x is not black and is not a crow” supports statement (1) as well as statement (2); therefore any non-black non-crow, a fish, a book, a blueberry pie, all provide evidence that crows are black. The obvious response is that this is ludicrous, since p has nothing to do with crows and is therefore irrelevant to the question of whether crows are all black. But let us re-imagine the question. While it may seem that taking non-black non-crows at random would provide no evidence regarding the color of crows, if all of the non-black objects in the world were gathered and recorded and none of them were crows, would one not have to concede that the complementary point, “all crows are black,” would have been proved? Or take another example: say we had a box with 10 objects in it, of which an uncertain number were crows and 4 of the objects were black, and say one decided to test the two statements “all crows in the box are black” and “all non-black objects in the box are not crows.” If an object were pulled out of the box at random and proved to be not black and not a crow, then we would know that more than 15% of the non-black objects were not crows, which definitely provides indicative evidence for both of the statements.
And by the time 5 non-black non-crows had been pulled out, we would be all but certain that all the crows in the box were black. Therefore, I think that the seeming paradox is simply an illusion of scale. Of course, on a practical level, the paradox holds true, at least in this case: the category of non-black non-crows is so huge that finding examples of them probably won’t provide much evidence of anything. Therefore, the principle of logical equivalences does virtually nothing to advance the investigation. And in fact, I think it is likely that the problem would exist not only in empirical investigations but also in the investigation of certain mathematical properties, for example. Something to keep in mind for those who are convinced of the infinite power of logic to solve both abstract and practical problems.
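The box example can even be made quantitative with a toy Bayesian calculation (the uniform prior over the number of non-black crows is my own assumption, not anything from Hempel): each non-black non-crow drawn without replacement raises the probability that “all crows in the box are black.”

```python
from fractions import Fraction
from math import comb

# 10 objects, 4 black, so 6 non-black; k of the non-black ones (unknown,
# 0 through 6) are crows. "All crows in the box are black" just means k = 0.

def p_all_crows_black(draws):
    """P(k = 0), given that the first `draws` non-black objects pulled out
    (uniformly, without replacement) all turned out to be non-crows."""
    def likelihood(k):
        # Chance that `draws` draws from the 6 non-black objects avoid all k crows
        if 6 - k < draws:
            return Fraction(0)
        return Fraction(comb(6 - k, draws), comb(6, draws))
    prior = Fraction(1, 7)  # uniform over k = 0, 1, ..., 6
    return prior * likelihood(0) / sum(prior * likelihood(k) for k in range(7))

for d in range(6):
    print(d, float(p_all_crows_black(d)))
# The probability climbs from 1/7 with no draws to 6/7 after five
# non-black non-crows -- "all but certain," near enough.
```

Under this prior the posterior works out to exactly (d + 1)/7 after d such draws, so the evidence accumulates linearly; at the scale of all non-black objects in the world, though, each draw moves the needle imperceptibly, which is the “illusion of scale” point.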

October 31, 2004

From politics to mathematics and back

Posted by shonk at 12:54 AM | permalink | 5 comments

Last night I found myself with an unusually large chunk of time on my hands and, after doing some maintenance work around here that I’d been putting off, decided to catch up on some blog-reading. I read Colby Cosh’s excellent analysis of the ALCS from a week or so ago, enjoyed Billy Beck’s musings on book addiction and rantings on the justice system, caught up on the No Treason/Karen DeCoster/Thomas DiLorenzo shitstorm, uncovered the latest links that appear below in the “External Links” section, and enjoyed Scribbling’s pomegranate pictures. Somewhere along the way, I came across Cosmic Vortex’s “First political diatribe,” which suggests the notion of “political shock levels” as a complement to the future shock levels which extropians go on about. The author lays out a sort of political spectrum, ranging from communist to fascist, and then suggests the following:

Now, its very easy for a socialist and a progressive to discuss issues and come to an understanding, but try to get a socialist and a right wing republican together, and nothing will get accomplished except frustration and anger. Where does this leave us? Not in a good situation really - as theirs no real way to drag anyone more then 1 level away. Even if they did want to try to understand your position, they just couldnt map the concepts over if you jump too far. The cognitive differences would be un-breachable and it would require starting at the begining of their conceptual “tree”, validate every concept along the way, and maybe then something could be worked out.

Interesting idea and stated in a somewhat unique way. What really caught my attention in reading, though, was the sentence I’ve taken the liberty of italicizing. I have to admit, the very first thought that popped into my head upon reading that sentence was: “Sounds like a chain complex!” For those too lazy to click the link, a chain complex is basically a sequence of maps between objects such that moving two steps along always takes you to zero. They arise a lot in topology and homological algebra (for example, I first ran across them while learning about simplicial homology in an algebraic topology class). The connection with shock levels being that if you try to map more than one level down the line you can’t go anywhere but zero, just as the conversation between socialist and republican goes nowhere.
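For the curious, here is about the smallest chain complex one can write down (a toy of my own, essentially the boundary map of a single edge followed by the map that sums vertex coefficients): two maps whose composite is zero, so that “moving two steps along always takes you to zero.”

```python
import numpy as np

# d2 sends the generator "upstairs" to a difference of two generators;
# d1 collapses both of those to the same generator downstairs.
d2 = np.array([[ 1],
               [-1]])
d1 = np.array([[1, 1]])

# The defining property of a chain complex: the composite map is zero
print(d1 @ d2)  # [[0]]
```

The socialist-to-republican conversation is the composite map: each individual step lands somewhere, but the two-step composition is identically zero.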

“A nice little analogy,” I thought to myself, not quite realizing, for the moment, how loony it would have sounded had I tried to explain it (at this point tenses completely break down, given that I just have tried to explain it). Consider, in addition, how one of my office-mates and I had laughed earlier in the day when she described having just caught herself before asking two of her students (who are identical twins) if they were “isomorphic”. I know I’ve talked about this before (that time when a friend referred to this Strong Bad song as a “canonical techno song”), but I still find the way in which the accumulation of a new vocabulary shapes my outlook either amusing or frightening, depending on the time of day.

Of course, in a sense, the vocabulary is the least important part of what I’ve (hopefully) learned in the last year or so of grad school, but applying the vocabulary outside of its mathematical context is perhaps the most obvious outward sign. Well, one of the most obvious, anyway. Perhaps the other obvious sign of what might be called my increasing mathematical sophistication (or confusion, depending on your perspective) manifests itself in how I answer the questions of my students.

I’m currently teaching four recitation sections of a class innocuously called “Ideas in Mathematics” in the course catalogue, but whose unofficial title, bestowed by the professor, is “Mathematics and Politics”.1 A friend rather uncharitably characterizes it as “math for morons”, in that it’s the only freshman-level non-calculus course that fulfills the college’s math requirement. Anyway, the point is that I spend most of my time answering questions about the homework or the lectures, and, in answering, I often find myself engaging in impromptu monologues about how intuitionists would object to proofs by contradiction or how mathematics only describes the world insofar as it simplifies away the hard bits. And, most importantly, I have a very difficult time answering conceptual questions definitively.

Needless to say, I imagine my students find it frustrating when, for example, they ask “Why is a conditional true when its antecedent is false?” and I have to say something along the lines of the following:

Well, the short answer is because it’s defined that way, and the long answer is still because it’s defined that way, but it’s defined that way because that’s really the most reasonable way to define it. You see, when we’re thinking about whether a logical statement is “true” or “false”, it’s probably best not so much to think actually in terms of “true” and “false”, but rather in terms of compatibility with the world. In other words, can you believe the statement while also believing in reality without contradicting yourself? We only say the statement is “false” if not; otherwise we call it “true” even though it may be counterfactual, absurd, or completely irrelevant to reality.

At this point, I’m usually lucky if the looks I’m getting are merely quizzical. So I try again:

Well, let’s think in terms of an example. Suppose, back in 2002, a friend told you “If the U.N. approves a war in Iraq, France will go to war.” Now, we know that, in reality, the U.N. didn’t approve the war and that France didn’t go to war. So this is a situation where the antecedent and consequent are both false, and, if we’re thinking in terms of logic, we would say the conditional is true. Why? Well, because you can believe what your friend said and also believe in reality. That is to say, you can believe the statement without contradiction. So the assignment of “true” or “false” is more or less like how you treat a friend: because he’s your friend and you trust him not to mislead you, you assume he was telling the truth unless you can definitely prove that he was lying. In this case, the only way you could know he was lying would be if the U.N. had approved the war and France had stayed home (i.e. the antecedent is true and the consequent is false); since that’s not what actually happened, we would say that the statement is “true”.
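The whole convention fits in a few lines of code; this is just my own illustration of the truth table, not anything from the course:

```python
def implies(p, q):
    """Material conditional: false only when the antecedent is true
    and the consequent is false."""
    return (not p) or q

# The U.N./France example: antecedent false, consequent false,
# and the conditional counts as true.
print(implies(False, False))

# The full truth table:
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:5} -> {q!s:5} : {implies(p, q)}")
```

The one `False` row of the table is exactly the one way the friend could be caught lying.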

Having given this explanation more than a half dozen times, it’s been mildly surprising that nobody has actually called me (and, by extension, math) out on it all being a bunch of convoluted bullshit, but I have to imagine some were thinking it. Usually, at this point, seeing the pained expressions on some of the faces staring out at me, I say something along the lines of “Of course, you could just think of this as the definition,” which seems to be a great relief to some. Which is ironic, given that, without the explanation, the notion that things could be this way just because that’s how they’re defined seems entirely unsatisfactory (which, by the way, I completely agree with. Definitions suck without context).

Having spent ten minutes writing about the conditional, I’m not sure it really illustrates the point I’m trying to make. Perhaps more appropriate would be the times that I’ve had to catch myself before I start ranting about epistemology, theories of logic, reductionism and how mathematics education is, essentially, a system of useful lies. Just as a calculus teacher extols the virtues of the definite integral, talking about how useful it is and how many amazing physical properties it explains without mentioning that, in any actual application, integration is not only difficult but usually impossibly difficult, I find myself teaching material which is useful in certain cases but usually too simplistic to be applicable to the real world. I try to point this out as much as possible, but I think it’s still probably misleading.

That having been said, the underlying concepts are, in fact, incredibly deep. It’s difficult, though, to emphasize that what’s important are the concepts, the fundamental ideas which lead us to particular formulas or computations, especially when midterms are looming and homework is due on Friday. I remember one student asking, the night before the midterm, if she ought to memorize a particular counter-example listed on the review sheet. My honest answer was “No, I don’t think you should memorize it; I think you should understand it,” which I don’t think she liked very much.

That question, though, lies at the heart of the topic that I’m apparently (finally) coming around to, which is that there seems to be a fundamental dichotomy in most people’s minds between, say, the humanities and mathematics. I doubt anybody would ask an English professor, the night before a midterm, whether they ought to memorize Joyce’s “The Dead” for the test, but in a math class it seems like a perfectly legitimate question (incidentally, I’m not trying to say that memorizing is completely worthless; in learning a foreign language, for example, unless one is lucky enough to be living in the country where the language being learned is spoken, there’s really no way to make progress without memorizing verb conjugations, vocabulary, etc.). The fact that, for whatever reason, mathematics seems to be equated with rote memorization and plugging values into a formula seems to me to be one of the primary reasons that so many people have such a strong aversion to math.

Which is completely understandable, in a way. Memorizing is boring and almost completely lacking in cognitive content, which most people instinctively recognize, and the fact that math is equated with this boring activity is, I think, one of the primary reasons why an aversion to mathematics is considered acceptable even among people who would strongly decry stunted development in other intellectual pursuits. As John Allen Paulos puts it in Innumeracy: “In fact, unlike other failings which are hidden, mathematical illiteracy is often flaunted: ‘I can’t even balance my checkbook.’ ‘I’m a people person, not a numbers person.’ Or ‘I always hated math.’”

As I look back on the above, I hope I’m not giving the wrong impression about my students. They’ve been wonderful, certainly much more perceptive and good-natured than I had any reason to expect, and I hope they’re learning as much as I am. What it comes down to, I think, is that it’s virtually impossible to interact on a daily basis with people whose level of expertise in a given field is significantly less than one’s own without having to think quite a bit both about the nature of that expertise (imperfect though mine still is) and the misconceptions about the field that will inevitably come to light.

Anyway, I’ve now strayed quite far afield of what I originally intended to write, which was a self-deprecatory post about how I’ve become almost stereotypically geeky in grad school, but I guess the above sort of illustrates that point.

1 Actually, a very interesting class. Aside from learning some basic logic and doing some simple proofs, we’ve talked a lot about different voting systems, leading up to the proof of a simplified version of Arrow’s Impossibility Theorem, the full version of which says that, with three or more candidates, there is no ranked voting system (other than a dictatorship, which everybody pretty much agrees isn’t much of a voting system) which satisfies both the Pareto condition (which says that if everybody prefers candidate X to candidate Y, then Y will not win the election) and independence of irrelevant alternatives (i.e. there is no “spoiler effect”). Also, we’ve learned a bit about power indices, namely the Shapley-Shubik and the Banzhaf indices, and are now starting on some basic probability.
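The Banzhaf index, at least, is simple enough to brute-force. Here's a sketch of my own (not the course's treatment; the example weights are mine) that counts, for each voter, the winning coalitions that would lose without them:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index: for each voter, count the coalitions in
    which that voter is critical (the coalition meets the quota with them
    but falls below it without them), then normalize."""
    n = len(weights)
    swings = [0] * n
    for size in range(1, n + 1):
        for coalition in combinations(range(n), size):
            total = sum(weights[i] for i in coalition)
            if total >= quota:                      # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:  # i is critical
                        swings[i] += 1
    s = sum(swings)
    return [sw / s for sw in swings]

# Quota 51 with weights 50, 49, 1: despite the lopsided weights, the
# 49-vote and 1-vote players turn out to have identical power.
print(banzhaf([50, 49, 1], 51))
```

The output, [0.6, 0.2, 0.2], is a nice illustration of why raw vote weight is a bad measure of power.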

August 07, 2004

Geek ink

Posted by shonk at 02:48 AM | permalink | 3 comments

So the other day I’m walking down the street behind a girl in a sundress and gigantic sunglasses (ironically, it wasn’t a particularly sunny day), when I happen to look down at her feet and notice she has an unusual tattoo across the back of her left ankle. It’s very small, but as I’m about to pass her I finally realize that it’s the quadratic formula, tattooed right across her Achilles tendon. Which was a bit of a shock. I mean, lame Celtic scrollwork, flames, barbed wire strands, Chinese characters and various other designs are pretty standard for people to permanently etch on their skin these days, but the quadratic equation?

Being the geek that I am, I admit it posed a bit of a quandary; I mean, that’s a dedication to geekiness that I can admire, but, at the same time, the quadratic equation is sort of, well, middle-schoolish, don’t you think? Let’s just say I would have been more impressed if it had been the Gauss-Bonnet Formula or the Riemann Zeta Function or something.

June 17, 2004

Why social scientists need to throw away their classical paradigms

Posted by shonk at 09:55 PM | permalink | 7 comments

Brian Doss has responded to my response to my initial response to his critique of Steven Strogatz’ book Sync (whew! Have I broken the record for hyperlinks in a single sentence yet?). There’s a lot to cover here, but I’ll try to do what I can with it.

First, he rightly points out that much of his original point is uncontroversial:

My small beef with the concept that there was any sort of ‘the emerging’ science of spontaneous order was in the (I thought) uncontroversial point that the fields of Biology (macro, micro, and molecular) and Economics both concern themselves with spontaneous order and have done so for centuries (more or less) prior to the publication of ‘Sync’. As that was the case, I further noted that since we have 2 sciences studying specific kinds of spontaneous order and that neither science requires mathematics to either understand the subject matter or to gain knowledge in the first place, that perhaps the author of ‘Sync’ should take some hints and possible insights.

This is all true, but at the same time misleading. While biology has generally done a good job describing the spontaneous order processes that come into play in evolution, the “pure” biological approach is not well-suited to, for example, explaining the tertiary structure of proteins. The tertiary structure of a protein is its overall three-dimensional shape, which is determined by the sequence of amino acids of which the protein is made but which can be surprisingly complex. One of the big revelations since proteins were first understood as sequences of amino acids is that knowing the higher-order structure of a protein is crucial to understanding what it does and how. And scores of mathematicians are intimately involved in trying to understand exactly how these higher-order structures arise.

In fact, mathematical biology is one of the hottest fields in mathematics today, and much of the research in that area stems from attempts to understand the structures of both proteins and of nucleic acids (i.e. DNA and RNA). And, perhaps surprisingly, advances in that field have had bidirectional influences on supposedly “abstract” areas like knot and braid theory.

Also, to borrow the definitive example from Strogatz’ book, biology did a very poor job of explaining the spontaneous order evident in the simultaneous flashing of Thai fireflies. It took some very hard-core mathematics (and some extreme simplifying assumptions) to even begin to explain how millions of fireflies could all flash in unison without having some “master” firefly. Even verifying those explanations (or discrediting them, for that matter) is something that should be experimentally possible, but such an experiment would be very difficult and has not, as yet, been carried out. This fact, along with the fact that the explanation required significant assumptions, is what led me to say in my previous post that “the mathematics of spontaneous order is both several steps ahead of and well behind the real world.”

The point of this digression is simply to suggest that the classical approaches are reaching their limits even in biology and that the days when one didn’t need to know mathematics to do biology are swiftly fading. Which isn’t to say that mathematics might not benefit from incorporating the techniques of biology and economics as they relate to spontaneous order, but, based on my admittedly very limited understanding of those two subjects, I have my doubts as to how much fruit such an attempt would bear.

Why do I have doubts? Quite simply, because biology and economics have generally done a good job noting that spontaneous order does arise in the relevant areas, but have not done a particularly good job by themselves of explaining why. Which is not to say that the why is not understood, but the best explanations I’ve seen (here I would cite, for example, Dawkins’ The Selfish Gene or pretty much any microeconomics course) derive, ultimately, from game theory, which is itself a distinguished mathematical discipline, dating back at least to Fermat.
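To make the game-theory connection concrete, here's a toy sketch (standard textbook prisoner's-dilemma payoffs, my own choice of numbers, nothing from Dawkins specifically) of the sort of mechanism-level "why" that both fields borrow: individually rational play locks the game into mutual defection even though mutual cooperation pays both players more.

```python
# Payoff to the row player in a prisoner's dilemma
# (C = cooperate, D = defect).
payoff = {('C', 'C'): 3, ('C', 'D'): 0,
          ('D', 'C'): 5, ('D', 'D'): 1}

def best_response(opponent_move):
    """The move that maximizes the row player's payoff against a
    fixed opponent move."""
    return max(['C', 'D'], key=lambda m: payoff[(m, opponent_move)])

# Defection is the best response to either move, so mutual defection
# is the unique Nash equilibrium, despite (C, C) paying both more.
print(best_response('C'), best_response('D'))
```

No one designs the outcome; it falls out of the incentive structure, which is exactly the flavor of explanation spontaneous-order arguments need.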

Back to Doss’ post: he rightfully points out that I unfairly posed the following parenthetical question:

(as a side note, both Doss and Swanson, in the original Catallarchy post linked above, seem to reject mathematics because it conflicts with the principles of Austrian economics and the Austrians’ rejection of empirical economics is well-known; so my question is this: if Austrians reject empiricism as well as mathematics (i.e. deduction), how, exactly, do they advocate gaining knowledge? (Of course, I know the answer, but the Austrian-sympathizers would do well, in my opinion, to keep this question in mind)).

Of course, mathematics does not conflict with the principles of Austrian economics/praxeology (although some of the more zealous and narrow-minded Austrian sycophants seem to think it does); rather, it is the application of mathematical methods to economics in parallel to the classical application of mathematics to physics that conflicts with Austrianism. Or, as Doss puts it, the classical “methods appropriate for studying the physical sciences are inappropriate for studying thinking, acting, subjective humans.”

However, I do think that many Austrians have a fairly poor grasp of exactly what mathematics is, which is why I added the disclaimer that they would do well to keep the above question in mind. I grant that this smacks of pedantry, but I think it’s an important point, and Doss seems to be one of those Austrians who doesn’t understand mathematics very well:

Mathematical methods work in the physical sciences (and to a lesser extent in life sciences) because (a) there is an objective, unchanging reality to the physical laws of the universe and therefore it is (b) possible to design experiments where aspects of reality can be held constant, and thus strict, formal, mathematical relationships can be inferred from the data.

The key misunderstanding, I think, derives from a conflation of the terms “mathematical” and “computational”. Not that this is an uncommon confusion: my non-math friends occasionally ask me what it is, exactly, that I do, sometimes jesting that I must be adding some really big numbers. In fact, mathematics is, ultimately, the discipline devoted to determining the abstract structure that logically follows from a particular axiom set. Mathematicians aren’t, generally speaking, taking a particular equation and plugging a bunch of different values into it to see what results.

In this sense, in fact, mathematics is remarkably similar to Austrian economics itself. Doss links to a Mises article which explicitly compares economists to mathematicians, a comparison I’ve made myself many times before. In fact, in my view, the Austrian school is the most mathematical of all schools of economics by a wide margin. As Doss points out, though, “[t]he difference, of course, is that Austrian scholars have followed a verbal logical formalism instead of a mathematical one.” Which is something I have never understood well. I simply cannot understand why the Austrians consistently reject symbolic logic (which I would call “formal logic”, though obviously an Austrian would contend that my definition is incomplete). Which isn’t to say I haven’t seen the arguments, it’s just that I don’t understand them. For example, here’s what Rothbard has to say on the matter in “Toward a Reconstruction of Utility and Welfare Economics” (which parallels his argument in Man, Economy and State):

The suggestion has been made that praxeology is not really scientific, because its logical procedures are verbal (literary) rather than mathematical and symbolic. But mathematical logic is uniquely appropriate to physics, where the various logical steps along the way are not in themselves meaningful; for the axioms and therefore the deductions of physics are in themselves meaningless, and only take on meaning operationally, insofar as they can explain and predict given facts. In praxeology, on the contrary, the axioms themselves are known as true and are therefore meaningful. As a result, each step-by-step deduction is meaningful and true. Meanings are far better expressed verbally than in meaningless formal symbols. Moreover, simply to translate economic analysis from words into symbols, and then to retranslate them so as to explain the conclusions, makes little sense, and violates the great scientific principle of Occam’s Razor that there should be no unnecessary multiplication of entities.

In a sense, Rothbard is correct to point out that the Austrians’ deduction is slightly different from standard formal logic, because, in formal logic, you are free to use valid propositions in the deduction even though those propositions may not make any sense when translated into natural language, whereas the Austrians want to use propositions that are both valid and meaningful at every step along the way. However, I disagree with his suggestion that, as such, translating into a formal language is a pointless and superfluous step. If Austrian deductions are, in fact, valid, then they should be translatable into formal language and still hold as valid deductions; the fact that many other deductions might be possible in that formal language that would be invalid to the Austrian would be irrelevant. Rothbard’s objection serves, it seems to me, as a valid rationale as to why an Austrian wouldn’t want to deduce in the formal language itself, but I don’t think it at all justifies the apparent disdain the Austrians have for confirming the deductions they’ve already made in this rigorous way (such confirmation being something I’ve asked for before).

Now, Rothbard is quite right that meaning is better expressed in words than in symbols, which is why it would, presumably, be difficult to translate Austrian axioms/deductions into formal language, but difficulty, in and of itself, shouldn’t serve as justification for not attempting the task. I also find it curious that Rothbard employs Occam’s Razor as a scientific principle, given how disdainful the Austrians seem to be of scientific methodology in economics. Besides, Occam’s Razor is totally inapplicable in this realm anyway: the notion that “the simplest explanation is usually the best” says nothing about how to go about confirming conclusions that one has reached.

Having digressed again, I want to make the point that Doss (and many other Austrians, for that matter) may be closing his mind to mathematical insights that actually buttress his position because he views mathematics through a classical lens. As a matter of fact, modern mathematics, with its investigations into chaos and complexity, is actually making the case that predictive determinism is essentially impossible. As commenter buck40 points out:

One of the main insights [of modern mathematics] is that prediction and control are in most cases false hopes. Those who apply the insights of complex adaptive systems to social sciences do not seek control, do not counsel control. Quite the opposite, they help policy makers understand why efforts to control will surely fail. You might find that they are your allies in a way, that they are all Hayekians in a manner of speaking.

In that light, consider, for instance, the recent No Treason post “Butterflies and Sweatshops,” where John T. Kennedy suggests that not only is the effect of an individual purchase on third world working conditions vanishingly small, but that it simply cannot be predicted with any certainty. I think we would all agree that the world economy is more complicated than the three-body problem, yet even the three-body problem cannot, in general, be solved explicitly. Hayek and Mises argued that central planners suffered from a knowledge problem, that someone trying to direct the economy could not, practically speaking, acquire enough knowledge to accurately predict how their interventions would affect the economy. Chaos theorists have extended that further, essentially demonstrating that this is not merely a practical problem, but that, in fact, such a prediction is manifestly impossible, even in the abstract. So, chaos/complexity theorists are “all Hayekians in a manner of speaking”.
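The sensitivity that chaos theorists have in mind is easy to demonstrate numerically. Here's a rough sketch of my own (crude forward-Euler integration, so the numbers are not quantitatively trustworthy, but the qualitative point survives) using the Lorenz system: two trajectories that start a billionth apart end up nowhere near each other.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """One crude forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # differs in the ninth decimal place
max_sep = 0.0
for _ in range(3000):        # 30 time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, max(abs(p - q) for p, q in zip(a, b)))
print(max_sep)  # the 1e-9 difference grows by many orders of magnitude
```

Since no measurement of the initial state is accurate to nine decimal places, long-run prediction is off the table in principle, not just in practice.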

Similarly, insights in network theory are helping to explain, for example, both the scale-free aspects of internet hyperlinks and the resiliency of the power grid. One can only imagine that a solid grounding in network theory coupled with an understanding of economics might well yield new insights into economic phenomena.
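The scale-free observation can be reproduced with a toy model. Here's a minimal preferential-attachment sketch in the spirit of Barabási–Albert (my own illustration; the paragraph above doesn't commit to any particular model): each newcomer links to an existing node with probability proportional to that node's degree, and a few hubs end up with far more links than the typical node.

```python
import random
random.seed(0)

# The `targets` list holds each node once per unit of degree, so a
# uniform choice from it is exactly a degree-proportional choice.
targets = [0, 1]            # start with a single edge between nodes 0 and 1
degree = {0: 1, 1: 1}
for new in range(2, 5000):
    old = random.choice(targets)   # preferential attachment
    degree[new] = 1
    degree[old] += 1
    targets += [new, old]

avg = sum(degree.values()) / len(degree)
print(avg, max(degree.values()))  # average degree ~2, but the hubs are far larger
```

That heavy-tailed degree distribution, with no central designer, is exactly the structure observed in hyperlink graphs.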

And, finally, it should be pointed out that the work being done by Strogatz and others is demystifying spontaneous order, demonstrating that there’s nothing supernatural about markets or evolution, but rather that the fruits of spontaneous order are all around us and that the mechanisms that underlie this order are often very simple. To return to the fireflies, the simultaneous flashing that is almost certainly a result of the interaction of coupled oscillators is more extensive than could ever be coordinated by some master firefly keeping time.
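The coupled-oscillator mechanism behind the fireflies is usually studied via the Kuramoto model, and even a crude simulation shows the order emerging. This is my own mean-field sketch with arbitrary parameters, not anything from Sync: start with random phases, and watch the coherence r climb.

```python
import math, random
random.seed(1)

N, K, dt = 100, 2.0, 0.05
omega = [random.gauss(0.0, 0.2) for _ in range(N)]          # natural frequencies
theta = [random.uniform(0.0, 2 * math.pi) for _ in range(N)]  # random phases

def order_parameter(phases):
    """Kuramoto coherence r: 1 means perfect sync, near 0 means incoherence."""
    c = sum(math.cos(t) for t in phases) / len(phases)
    s = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(c, s)

r0 = order_parameter(theta)
for _ in range(400):  # 20 time units of forward-Euler
    c = sum(math.cos(t) for t in theta) / N
    s = sum(math.sin(t) for t in theta) / N
    r, psi = math.hypot(c, s), math.atan2(s, c)
    # Mean-field update: each oscillator is nudged toward the mean phase,
    # with a pull proportional to the coupling K and the coherence r.
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]
r1 = order_parameter(theta)
print(r0, r1)  # coherence climbs sharply once K exceeds its critical value
```

No master firefly anywhere: every oscillator follows the same simple local rule, and the synchrony is a property of the ensemble.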

June 08, 2004

Body-snatching Lorenz equations

Posted by shonk at 04:47 AM | permalink | 13 comments

First off, a bookkeeping note: Curt will be traveling around Europe for the next month, so he likely won’t be posting much, if at all.

Now, I promised last week, in my review of Strogatz’ Sync, that I would devote an entire post to an extended quotation from the book that I found very interesting. Reading the comment thread associated with John Sabotta’s post denouncing evolutionary psychology at No Treason, I was reminded of that promise, so this is that post. In the pertinent passage, Strogatz is discussing a chaos-based encryption system first envisioned by Lou Pecora. Pecora was trying to figure out a way to devise an encryption system based on Lorenz equations, wherein three variables are related to each other in a particular way (specifically, by way of a system of differential equations).

(As a side note, I would point out, apropos my earlier comments on Strogatz’ book, that this passage serves as an excellent example of both the strengths and weaknesses of the book. The primary strength, aside from Strogatz’ obvious depth of knowledge of the material, is his ability to describe complicated technical mathematics by way of metaphors that make it highly accessible to a lay reader. The primary weakness, at least in my view, is that he doesn’t ever give any of the technical details. Admittedly, systems of differential equations are a bit intimidating, but the fact that the entire book, basically, is written in metaphor is a bit grating.)

Anyway, on to the quotation, followed by one or two of my own thoughts:

In technical terms his scheme can be described as follows: Take two copies of a chaotic system. Treat one as the driver; in applications to communications, it will function as the transmitter. The other system receives signals from the driver, but does not send any back. The communication is one-way. (Think of a military command center sending encrypted orders to its soldiers in the field or to sailors at sea.) To synchronize the systems, send the ever-changing numerical value of one of the driver variables to the receiver, and use it to replace the corresponding receiver variable, moment by moment. Under certain circumstances, Pecora found that all the other variables—the ones not replaced—would automatically snap into sync with their counterparts in the driver. Having done so, all the variables are now matched. The two systems are completely synchronized.

This description, though technically correct mathematically, does not begin to convey the marvel of synchronized chaos. To appreciate how strange this phenomenon is, picture the variables of a chaotic system as modern dancers. By analogy with the Lorenz equations, their names are x, y, and z. Every night they perform onstage, playing off one another, each responding to the slightest cues of the other two. Though their turns and gestures seem choreographed, they are not. On the other hand, they are certainly not improvising, at least not in the usual sense of the word. Given where the others are at any moment, the third reacts according to strict rules. The genius is in the artfulness of the rules themselves. They ensure that the resulting performance is always elegant but never monotonous, with motifs that remind but never repeat. The performance is different from minute to minute (because of aperiodicity) and from night to night (because of the butterfly effect), yet it is always essentially the same, because it always follows the same strange attractor.

So far, this is a metaphor for a single Lorenz system, playing the role of the receiver in Pecora’s communication scheme. Now suppose that time stands still for a moment. The laws of the universe are suspended. In that terrifying instant, x vanishes without a trace. In its place stands a new variable, called x’. It looks like x but is programmed to be oblivious to the local y and z. Instead, its behavior is determined remotely by its interplay with y’ and z’, variables in a transmitter far away in another Lorenz system, all part of an unseen driver.

It’s almost like the classic horror movie Invasion of the Body Snatchers. From the point of view of the receiver system, this new x would seem inscrutable. “We’re trying to dance with x but suddenly it’s ignoring all of our signals,” think y and z. “I’ve never seen x behave like that before,” says one of them. “Hey, x,” the other whispers, “is it really you?” But x wears a glazed expression on its face. Just as in the movie, x has been taken over by a pod. It’s no longer dancing with the y and z in front of it—its partners are y’ and z’, unseen doppelgängers of y and z, remote ones in the parallel universe of the driver. In that faraway setting, everything about x’ looks normal. But when teleported to the receiver, it seems oddly unresponsive. And that’s because the receiver’s x has been hijacked, impersonated by this strange x’ coming from out of nowhere. Sensitive souls that they are, y and z make adjustments and modify their footwork. Soon, all becomes right again. The x, y, z trio glides in an utterly natural way, flowing through state space on the Lorenz attractor, the picture of chaotic grace.

But what is so sinister here, and so eerie, is that y and z have now been turned into pods themselves. Unwittingly, they are now dancing in perfect sync with their own doppelgängers, y’ and z’, variables they have never encountered. Somehow, through the sole influence of the teleported x’, subtle information has been conveyed about the remote y’ and z’ as well, enough to lock the receiver to the driver. Now all three variables x, y, and z have been commandeered. The unseen driver is calling the tune.

— pp. 196-7


So far, chaos-based methods have proved disappointingly weak. Kevin Short, a mathematician at the University of New Hampshire, has shown how to break nearly every chaotic code proposed to date. When he unmasked the Lorenzian chaos of Cuomo and Oppenheim, his results set off a mini-arms race among nonlinear scientists, as researchers tried to develop ever more sophisticated schemes. But so far the codebreakers are winning.

— pg. 204

The obvious conclusion is that each variable in these specialized systems in fact encodes the entirety of the system. From a mathematical perspective, I admire the ingenuity required to come up with this sort of scheme. From a metaphysical perspective, though, it’s hard not to find this whole thing vaguely unsettling. After all, people usually respond in fairly predictable ways to outside stimuli. The point I guess I’m stumbling towards is this: the idea that a chaotic system in which the actors respond in predictable ways to the other actors could lead to the sort of lockstep mirroring described above seems, at least to me, to say something very pertinent to the whole determinism/free will debate. Admittedly, the eeriness is largely due to Strogatz’ metaphor, but the fact that the y and z variables, thinking themselves to be operating independently of the y’ and z’ variables, would ultimately end up mirroring those same variables simply because they followed the same rules in reacting to the x/x’ variable is pretty fascinating.
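Strogatz's dancers can be replayed numerically. Below is a rough sketch of the x-drive scheme (crude forward-Euler, my own initial conditions and step size; the standard Pecora-Carroll setup, not code from the book): the receiver's y and z hear only the driver's x, yet snap into sync with the unseen y and z. For this particular drive the sync is provable, since the error energy e_y² + e_z² strictly decays.

```python
SIGMA, RHO, BETA = 10.0, 28.0, 8 / 3
dt = 0.001

x, y, z = 1.0, 2.0, 3.0      # driver (transmitter)
yr, zr = -5.0, 10.0          # receiver, started far from the driver
for _ in range(50000):       # 50 time units
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    # Receiver: same equations, but its own x has been "body-snatched" --
    # replaced, moment by moment, by the driver's x.
    dyr = x * (RHO - zr) - yr
    dzr = x * yr - BETA * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr

print(abs(y - yr), abs(z - zr))  # both differences shrink toward zero
```

The receiver never sees y or z directly; the driver's x alone carries enough information to commandeer both.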

What I’m not trying to do is say that this sort of thing settles the determinism/free will debate. Rather, I’m just pointing out that this new mathematics gives new insight into ways in which seemingly independent activity can yield identical results. And, although these are admittedly specialized cases, it’s somewhat surprising (at least on an intuitive level) that there are any circumstances in which variables seemingly reacting to another variable’s independent activities would, in fact, end up exactly duplicating the variables to which that wild variable was itself reacting. The result, of course, is a system in which all three seem to be mutually reacting, while in fact one is paying no attention at all to the other two. In other words, one has to wonder, at least a little bit, in what direction causality points, exactly.

May 27, 2004

Spontaneous order revisited

Posted by shonk at 02:00 AM | permalink | 5 comments

Back in September, I wrote a post critiquing the responses of Tim Swanson and Brian Doss (of Catallarchy fame) to Steven Strogatz’ book Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life. Those responses can be found in Brian’s initial post and the ensuing comments thread and suggest that those studying spontaneous order would be best served by following Alfred Marshall’s advice to “burn the mathematics”. My post (which apparently lost the first paragraph or two in the switch to a new domain back in December) is called “Spontaneous Order” and also spawned a response from Neil.

Okay, with the citations out of the way, the issue of the day is whether I still agree with what I said back in September, now that I’ve just finished reading the book; and the answer is largely “yes”. In fact, I don’t think I went nearly far enough in my criticism of the notion that those studying spontaneous order should avoid mathematical formalism. For example, the following was my concluding summary of the post:

My point is not to demonstrate that the study of spontaneous order is a mathematical discipline, nor that it should be. Rather, I just want to make the point that it has certain similarities to mathematics and, of course, will necessarily need to use mathematical tools in many instances. In fact, though I admit to not knowing nearly enough to be able to have any insights, it seems like mathematics, especially areas of study like graph theory and networks, might be able to shed some light on some of the applications of spontaneous order mentioned by Strogatz

Not very bold, right? Well, I hadn’t actually read the book yet. Now that I have, it’s abundantly clear that anybody who has actually read Strogatz’ book already knows that it is, at heart, a math book; the fact that it’s not publicized as such has more to do with the irrational fear most people have of mathematics. Virtually every result Strogatz cites is a result in pure or applied mathematics, with all the usual deduction and separation from empirical strategies that that entails. Many of these are proofs about idealized models of coupled oscillators, results which probably help explain how, for example, Thai fireflies flash in sync, but building an experiment to actually test this is so difficult that it apparently hasn’t been done to any degree of satisfaction yet (or, if it has, Strogatz doesn’t mention it). The same holds for, say, three-dimensional synchronicity, which has applications to cardiac arrhythmia, but which is discussed in the book purely in terms of mathematics and chemical reactions in a very special kind of fluid.
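The idealized coupled-oscillator models in question are typified by the Kuramoto model, which figures prominently in the book. Here’s a minimal sketch of the mean-field version (the parameters are illustrative guesses of mine, not numbers from Sync): give each oscillator its own natural frequency, nudge each one toward the population’s mean phase, and watch the order parameter r climb from incoherence toward 1.

```python
import cmath, math, random

def kuramoto(n=50, coupling=3.0, dt=0.05, steps=1000, seed=0):
    """Mean-field Kuramoto model: d(theta_i)/dt = omega_i + K*r*sin(psi - theta_i),
    where r*exp(1j*psi) is the population's complex order parameter."""
    rng = random.Random(seed)
    # Natural frequencies spread evenly; initial phases scattered at random,
    # so the population starts out incoherent (r near 0).
    omega = [-0.5 + i / (n - 1) for i in range(n)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        mean = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(mean), cmath.phase(mean)  # r ~ 0: incoherent, r ~ 1: synced
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

print(kuramoto())  # coupling well above threshold: r ends up close to 1
```

With the coupling set well above the critical value, every oscillator phase-locks despite having a different natural frequency — which is exactly the firefly story in miniature, and exactly the sort of result that is proved about the model long before anyone measures it in a Thai mangrove.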

The point is this: right now, the mathematics of spontaneous order is both several steps ahead of and well behind the real world. It’s several steps ahead in the sense that mathematical explanations of synchronous processes seem, in large measure, to be ahead of the capability of experimental science to confirm (or deny, of course). On the other hand, mathematics is obviously very far behind the real world, as we can’t yet accurately model the spontaneous order that occurs between nerve cells to make our hearts beat, let alone the presumably much more complicated processes occurring within our brains. Whatever the case, reading Strogatz’ book confirmed my suspicion that, in fact, mathematics is essential to the emerging field of spontaneous order (as a side note, both Doss and Swanson, in the original Catallarchy post linked above, seem to reject mathematics because it conflicts with the principles of Austrian economics and the Austrians’ rejection of empirical economics is well-known; so my question is this: if Austrians reject empiricism as well as mathematics (i.e. deduction), how, exactly, do they advocate gaining knowledge? (Of course, I know the answer, but the Austrian-sympathizers would do well, in my opinion, to keep this question in mind)).

This all having been said, Sync was a bit too devoid of mathematical content for my taste, in the sense that, although almost everything in the book boiled down to mathematics, Strogatz explained most of the actual mathematical machinery in terms of analogies to runners on a track or audiences clapping or whatever, whereas I would have liked to see greater mathematical rigor (not necessarily the equations themselves, which are almost certainly too complicated to mean very much to the uninitiated, but rather a more rigorous argument, with references to actual mathematical principles, theorems, etc.). For example, when Strogatz says “[u]sing a theorem from topology, Winfree proved that a twisted scroll ring was impossible, at least as a solitary entity”, it would have been nice if he had explained what theorem, exactly, even if only in the endnotes. Or, as Neil says,

In fact, as [Strogatz] chronicled the mathematical history of sync as an abstract study, I found myself wanting various symbols and equations

As an ego-boost, I’ll point out that in the above quotation from my September post, my suggestion that graph theory and networks “might be able to shed some light on some of the applications of spontaneous order mentioned by Strogatz” was right on, as Chapter Nine of Sync is titled and devoted to “Small-World Networks”.

Now, a couple of quotations to think about (I’ve omitted one interesting and very extended passage, because I want to dedicate an entire, separate post to it, hopefully some time this weekend):

In other words, a dumb rule (majority rule) running on a smart architecture (a small world) achieved performance that broke the world record.

— pg. 251 (Here, Strogatz is talking about the density classification problem for one-dimensional binary automata, where he and one of his students decided to re-wire the binary automata as a small-world network — where most of the connections between automata (think lightbulbs) are locally clustered, but a few are long-distance — and almost immediately, using the simplest algorithm imaginable, were able to solve the problem more consistently than the best algorithm using “dumb architecture”)
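For concreteness, here’s a toy version of that experiment — my own sketch, with sizes, parameters, and the tie-breaking rule all guessed rather than taken from the book: a ring of binary cells, some fraction of whose local links get rewired to random long-distance targets, with each cell repeatedly adopting the majority value in its own neighborhood.

```python
import random

def small_world_ring(n, k, p, rng):
    """Ring of n cells, each linked to its k nearest neighbors per side;
    every link is rewired to a random target with probability p."""
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < p:            # rewire this link long-distance
                j = rng.randrange(n)
                while j == i:
                    j = rng.randrange(n)
            nbrs[i].add(j)
            nbrs[j].add(i)
    return [sorted(s) for s in nbrs]

def classify(state, nbrs, steps):
    """Synchronous majority rule; ties (possible, since degrees vary) go to 0."""
    for _ in range(steps):
        nxt = [1 if 2 * (state[i] + sum(state[j] for j in nbrs[i]))
                    > 1 + len(nbrs[i]) else 0
               for i in range(len(state))]
        if nxt == state:
            break
        state = nxt
    return state

rng = random.Random(1)
net = small_world_ring(99, 3, 0.2, rng)
init = [1 if rng.random() < 0.75 else 0 for _ in range(99)]
print(sum(init), sum(classify(init, net, 50)))  # the initial majority should win out
```

On a plain ring (p = 0), stubborn minority blocks can survive forever, which is why the “dumb architecture” versions of this problem are so hard; the long-range shortcuts are what let the local majority votes percolate into a global consensus.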

Barabási and his team pointed out that scale-free networks [like the Internet or protein interactions in yeast] also embody a compromise bearing the stamp of natural selection: They are inherently resistant to random failures, yet vulnerable to deliberate attack against their hubs. Given that mutations occur at random, natural selection favors designs that can tolerate haphazard insults. By their very geometry, scale-free networks are robust with respect to random failures, because the vast majority of nodes have few links and are therefore expendable. Unfortunately, this evolutionary design has a downside. When hubs are selectively targeted (something that random mutation could never do), the integrity of the network degrades rapidly—the size of the giant component collapses and the average path length swells, as nodes become isolated, cast adrift on their own little islands.

— pg. 257
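That asymmetry is easy to reproduce in a toy model — this is my own sketch, not Barabási’s actual analysis, and the sizes and fractions are arbitrary: grow a scale-free network by preferential attachment, then compare the giant component after deleting a slice of nodes at random versus deleting the same number of highest-degree hubs.

```python
import random

def barabasi_albert(n, m, rng):
    """Grow a scale-free network: each new node attaches m links to
    existing nodes chosen in proportion to their current degree."""
    adj = {i: set() for i in range(n)}
    endpoints = []                 # each node listed once per link endpoint
    targets = list(range(m))       # the first arrival links to nodes 0..m-1
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        endpoints.extend(targets)
        endpoints.extend([new] * m)
        chosen = set()
        while len(chosen) < m:     # degree-biased sampling via the endpoint list
            chosen.add(rng.choice(endpoints))
        targets = list(chosen)
    return adj

def giant(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

rng = random.Random(7)
net = barabasi_albert(300, 2, rng)
hubs = set(sorted(net, key=lambda u: len(net[u]), reverse=True)[:30])
rand = set(rng.sample(sorted(net), 30))
print(giant(net, set()), giant(net, rand), giant(net, hubs))
```

Knocking out 10% of the nodes at random barely dents the giant component, because most nodes are low-degree and expendable; knocking out the same number of hubs shatters it — exactly the robust-yet-fragile trade-off the passage describes.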

Helbing and Huberman computed the long-term traffic patterns under a variety of different conditions. When there were only a few vehicles on the road, all the cars sailed past the slower-moving trucks without ever decelerating, while the trucks lumbered along at their maximum safe speed of 55 miles an hour. At higher but still moderate densities of traffic, some unlucky cars found themselves trapped behind trucks for a long time, with no room to pass or switch lanes.

At a critical density of traffic—about 35 vehicles in each lane per mile of road—all the cars and trucks spontaneously synchronized, traveling down the highway like a solid block. Remarkably, out of pure competition, with no coordinator or central authority, a large group of selfish individuals ended up in a cooperative state that was optimal for all of them. (Adam Smith would approve.) This state was optimal in the sense that the flux of traffic was as high as it could be: The number of cars and trucks passing through a given stretch of highway per hour was maximized. It was also the safest way for traffic to flow, because the drivers had no opportunities to change lanes or pass (the maneuvers associated with most accidents). Helbing and Huberman tested their model against data taken from a two-lane Dutch highway and found evidence of the predicted state. At the critical density, the car speeds were at their most stable, as measured by their velocity fluctuations, and lane changing and passing were minimized. Unfortunately—and as the model also predicted—the crystalline state proved to be delicate. At densities just above critical, it melted into a disorganized liquid state, which created opportunities for passing again, leading to unsteady, stop-and-go traffic.

— pgs. 269-70

“In individuals, insanity is rare, but in groups, parties, nations, and epochs it is the rule.”

— Nietzsche, cited on pg. 273

April 16, 2004

A blast from the past

Posted by shonk at 08:38 PM | permalink | comment

Today, I stumbled across this Wired article on gopher, the internet protocol developed at the University of Minnesota way back in 1992. The article brought back memories, because I remember gopher from my middle school days when we spent all our time using Lynx to access gopher and read blonde jokes instead of improving our typing, learning HyperCard, or whatever other useless pursuits the teachers had in mind. Given that the paradigm of the day was the BBS, gopher was quite a revelation. Now, of course, everyone is used to everything on the internet being a mere mouse-click away, but that was all-new 12 years ago.

I was somewhat surprised to learn that not only is gopher still kicking, but a few people are actually trying to bring it into the 21st century. For example, John Goerzen, in addition to maintaining what is supposedly the largest active gopher server in the world at Quux.org, thinks gopher could be used as a dynamic data exchange protocol like XML-RPC and SOAP. He also sees it as a good alternative to current PDA and phone browsers:

“Consider this example: Port-a-Goph, a gopher client in development for Palm OS. Cameron Kaiser wrote this in his spare time and got it working quickly on his own Palm,” he said. “Contrast that with the state of Web browsing on handheld devices: Despite many years to improve them, I still regularly run across websites that simply do not render at all, or render so poorly that they are unusable.”

He’s probably right, but, for whatever reason, people seem to like to re-invent the wheel instead of just re-using proven wheels, so gopher probably will never be more than a tiny geek niche. That all having been said, there’s a lot of good stuff available on gopher servers like gopher://quux.org/, which you can access directly through nice browsers like Firefox. If you’re on IE, your best bet is probably Floodgap’s public gopher proxy, which translates gopher pages to HTML. And, if you’re anything like me, you’ll probably go immediately to the jokes pages, which are filled with a vast assortment of predominantly nerdy material.
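Part of gopher’s charm is how primitive the protocol is: per RFC 1436, a client just sends a selector string and reads until the server closes the connection, and menus are plain tab-separated lines. A minimal client sketch (the quux.org example is from above; error handling omitted):

```python
import socket

def fetch_gopher(host, selector="", port=70, timeout=10):
    """Fetch one gopher item: connect, send the selector line, read to EOF."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(selector.encode("latin-1") + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1")

def parse_menu(text):
    """Parse a gopher menu: each line is a one-character type code, the
    display string, then TAB-separated selector, host, and port."""
    items = []
    for line in text.splitlines():
        if line == "." or not line:
            continue  # a lone "." terminates the menu
        display, _, rest = line[1:].partition("\t")
        selector, _, rest = rest.partition("\t")
        host, _, port = rest.partition("\t")
        items.append((line[0], display, selector, host, port.strip()))
    return items

# e.g.: for item in parse_menu(fetch_gopher("quux.org")): print(item)
```

Compare that with the machinery needed for even a minimal HTTP client and it’s easy to see why Goerzen thinks the protocol still has a place on tiny devices.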

On the subject of internet protocols and the like, I should mention that last night I finished Neal Stephenson’s latest, The Confusion, which just came out in bookstores this week (for those that don’t get the connection between internet protocols and a historical novel set in the 17th century, I’d suggest a thorough perusal of Stephenson’s other work, including Snow Crash or Cryptonomicon or, if you’re too cheap to spend money on books, the essay “In the Beginning Was the Command Line”). The Confusion is quite good, interleaving or “con-fusing” the two main stories much more seamlessly than did its predecessor, Quicksilver, which I’ve already reviewed. Though I’m not particularly in the book review mood right now, I will say that this trilogy is really growing on me and I’m definitely looking forward to September, when The System of the World comes out. One thing that really stands out about The Confusion is that among the diverse topics with which it deals, one of the primary issues is that of money and markets, especially how they arise and how they work. For more on that, check out the Wired interview with Stephenson (via Catallarchy).

April 04, 2004


Posted by shonk at 01:41 AM | permalink | 8 comments

Some weird coincidences today: I finished reading The Illuminatus! Trilogy, by Robert Shea and Robert Anton Wilson, and kept stumbling across various web pages related to the book in some way during the course of my daily web browsing. Which itself ties into one of the book’s main themes, that of synchronicity.

Anyway, over at Wikipedia, the featured article of the day is that on Emperor Norton I, a thoroughly interesting, though most likely quite insane, San Franciscan from the 19th century who declared himself Emperor of the United States and even issued his own currency. Norton gets a bit of play in Illuminatus! as a sort of Discordian hero and he seems to keep popping up in my reading. Of course, his most famous connection to literature is that he was supposedly the model for the King in Twain’s Huck Finn.

Speaking of Twain, while fooling around with MathWorld, I came across an interesting entry on the beast number, 666, which couldn’t help but remind me of The Number of the Beast, an excellent book by Heinlein, an inveterate admirer of Twain’s. In The Number of the Beast, Heinlein posits a “multiverse” with (6^6)^6 different universes contained within it, many of them (perhaps all of them) created by novelists and storytellers. Which is a conceit mentioned briefly in Illuminatus! and central to another Wilson trilogy, the Schrödinger’s Cat Trilogy.

In fact, I’m a bit surprised, given the numerological bent of much of Illuminatus!, that Shea and Wilson don’t devote any attention to some of the interesting properties of the beast number. For example, 666 is equal to the sum of the squares of the first seven primes, the sum of the numbers from 1 to 6 * 6 (i.e. the sum of the numbers from 1 to 36) and

φ(666) = 6 · 6 · 6

where phi denotes the Euler phi, or totient, function
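All three identities are easy to verify mechanically; here’s a quick check, with a hand-rolled totient (the standard trial-division product formula) so it stays self-contained:

```python
def totient(n):
    """Euler's phi: count of 1..n coprime to n, via the product formula
    phi(n) = n * prod(1 - 1/p) over the distinct prime factors p of n."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:                      # leftover prime factor
        result -= result // m
    return result

primes = [2, 3, 5, 7, 11, 13, 17]
print(sum(p * p for p in primes))  # 666: sum of squares of the first 7 primes
print(sum(range(1, 37)))           # 666: 1 + 2 + ... + 36
print(totient(666), 6 * 6 * 6)     # 216 both ways
```

(Since 666 = 2 · 3² · 37, the product formula gives 666 · ½ · ⅔ · 36⁄37 = 216 by hand as well.)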

Of course, I think my favorite beast number property is that, writing the parameters of Coxeter’s notation side-by-side, the bimonster can be denoted by 666. Which is interesting because the bimonster is the wreath product of the monster group by Z2. For those that have no idea what I’m talking about, just take it on faith that the monster group, as one might guess from its name, has a sort of mythical cachet among (certain types of) mathematicians.

Anyway, back to something resembling the English language. Another big theme in Illuminatus! is that of immanentizing the Eschaton, a concept somewhat badly explained in the book as “to cause the end of days”. Now, those hip to the blogosphere scene may recognize “Eschaton” as the name of the blog run by Democratic cheerleader and fellow Philadelphia-dweller atrios. However, I was somewhat surprised to note that the name of the blog is a David Foster Wallace reference; though I can’t stand the blog, I have to give atrios serious props for naming it after the tennis-academy bombardment game from Wallace’s brilliant Infinite Jest (to tie this in further with the math-speak above, I should also mention that Wallace has a pretty good math book called Everything and More: A Compact History of Infinity, which would be, I think, challenging but comprehensible for an interested layman).

And now, in a desperate attempt to salvage some semblance of thematic integrity from this post, here are some interesting quotes from Illuminatus!

“We’re anarchists and outlaws, goddam it. Didn’t you understand that much? We’ve got nothing to do with right-wing, left-wing or any other half-assed political category. If you work within the system, you come to one of the either/or choices that were implicit in the system from the beginning. You’re talking like a medieval serf, asking the first agnostic whether he worships God or the Devil. We’re outside the system’s categories. You’ll never get the hang of our game if you keep thinking in flat-earth imagery of right and left, good and evil, up and down. If you need a group label for us, we’re political non-Euclideans. But even that’s not true. Sink me, nobody on this tub agrees with anybody else about anything, except maybe what the fellow with the horns told the old man in the clouds: Non serviam.”

— Hagbard Celine, pg. 86

“Just remember: it’s not true unless it makes you laugh. This is the one and sole and infallible test of all ideas that will ever be presented to you.”

— Hagbard Celine, pg. 250

(And Semper Cuni Linctus, the very night that he reamed his subaltern for taking native superstitions seriously, passed an olive garden and saw the Seventeen…and with them was the Eighteenth, the one they had crucified the Friday before. Magna Mater, he swore, creeping closer, am I losing my mind? The Eighteenth, whatshisname, the preacher, had set up a wheel and was distributing cards to them. Now, he turned the wheel and called out the number at which it stopped. The centurion watched, in growing amazement, as the process was repeated several times, and the cards were marked each time the wheel stopped. Finally, the big one, Simon, shouted “Bingo!” The scion of the noble Linctus family turned and fled…Behind him, the luminous figure said, “Do this in commemoration of me.”

“I thought we were supposed to do the bread and wine bit in commemoration of you?” Simon objected.

“Do both,” the ghostly one said. “The bread and wine is too symbolic and arcane for some folks. This one is what will bring in the mob. You see, fellows, if you want to bring the Movement to the people, you have to start from where the people are at. You, Luke, don’t write that down. This is part of the secret teachings.”)

— pg. 324

The most thoroughly and relentlessly Damned, banned, excluded, condemned, forbidden, ostracized, ignored, suppressed, repressed, robbed, brutalized and defamed of all Damned Things is the individual human being. The social engineers, statisticians, psychologists, sociologists, market researchers, landlords, bureaucrats, captains of industry, bankers, governors, commissars, kings and presidents are perpetually forcing this Damned Thing into carefully prepared blueprints and perpetually irritated that the Damned Thing will not fit into the slot assigned to it. The theologians call it a sinner and try to reform it. The governor calls it a criminal and tries to punish it. The psychotherapist calls it neurotic and tries to cure it. Still, the Damned Thing will not fit into their slots.

— Hagbard Celine, from Never Whistle While You’re Pissing, pg. 385

It was the chains of communication, not the means of production, that determined a social process; Marx had been wrong, lacking cybernetics to enlighten him.

— pg. 388

“Everybody was lying to the FBI and CIA, sir. They were all afraid of punishment for various activities forbidden by our laws. No variation or permutation on their stories will hang together reasonably. Each witness lied about something, and usually about several things. The truth is other than it appeared. In short, the government, being an agency of punishment, acted as a distorting factor from the beginning, and I had to use information-theory equations to determine the degree of distortion present. I would say that what I finally discovered may have universal application: no governing body can ever obtain an accurate account of reality from those over whom it holds power. From the perspective of communication analysis, government is not an instrument of law and order, but of law and disorder. I’m sorry to have to say this so bluntly, but it needs to be kept in mind when similar situations arise in the future.”

— Fred Filiarisus, pgs. 423-4


FREE MARKET: That condition of society in which all economic transactions result from voluntary choice without coercion.

THE STATE: That institution which interferes with the Free Market through the direct exercise of coercion or the granting of privileges (backed by coercion).

TAX: That form of coercion or interference with the Free Market in which the State collects tribute (the tax), allowing it to hire armed forces to practice coercion in defense of privilege, and also to engage in such wars, adventures, experiments, “reforms,” etc., as it pleases, not at its own cost, but at the cost of “its” subjects.

PRIVILEGE: From the Latin privi, private, and lege, law. An advantage granted by the State and protected by its powers of coercion. A law for private benefit.

USURY: That form of privilege or interference with the Free Market in which one State-supported group monopolizes the coinage and thereby takes tribute (interest), direct or indirect, on all or most economic transactions.

LANDLORDISM: That form of privilege or interference in the Free Market in which one State-supported group “owns” the land and thereby takes tribute (rent) from those who live, work, or produce on the land.

TARIFF: That form of privilege or interference in the Free Market in which commodities produced outside the State are not allowed to compete equally with those produced inside the State.

CAPITALISM: That organization of society, incorporating elements of tax, usury, landlordism, and tariff, which thus denies the Free Market while pretending to exemplify it.

CONSERVATISM: That school of capitalist philosophy which claims allegiance to the Free Market while actually supporting usury, landlordism, tariff, and sometimes taxation.

LIBERALISM: That school of capitalist philosophy which attempts to correct the injustices of capitalism by adding new laws to the existing laws. Each time conservatives pass a law creating privilege, liberals pass another law modifying privilege, leading conservatives to pass a more subtle law recreating privilege, etc., until “everything not forbidden is compulsory” and “everything not compulsory is forbidden.”

SOCIALISM: The attempted abolition of all privilege by restoring power entirely to the coercive agent behind privilege, the State, thereby converting capitalist oligarchy into Statist monopoly. Whitewashing a wall by painting it black.

ANARCHISM: That organization of society in which the Free Market operates freely, without taxes, usury, landlordism, tariffs, or other forms of coercion or privilege. RIGHT ANARCHISTS predict that in the Free Market people would voluntarily choose to compete more often than to cooperate. LEFT ANARCHISTS predict that in the Free Market people would voluntarily choose to cooperate more often than to compete.

— Hagbard Celine, from Never Whistle While You’re Pissing, pgs. 622-4

March 29, 2004

A tale of a theism scorned--continued!

Posted by Curt at 07:15 PM | permalink | 25 comments

This continuation of the earlier discussion of the merits of atheism will no doubt attract the interest of only the most devoted pedants, but the epistemological issues run so deep here that they have already exhausted many minds and millions of words through the course of history, so another couple may not be overly egregious here. First, I should like to thank The Serpent for pointing out an error in my definition of solipsism in the comment box to my previous entry on this subject. I defined the solipsist as a sort of empirical agnostic, unsure of the existence of the outside world and believing that such knowledge is probably unattainable.

In fact, as was pointed out to me, this is not the commonly accepted definition of solipsism, which is, according to the OED, the “view that self is the only object of real knowledge or the only thing really existent.” So one can see that the first definition, that “self is the only object of real knowledge,” is essentially comparable to my definition, but the second definition, that oneself is “the only thing really existent,” goes rather beyond it.

However, I have always felt that the movement from acceptance of the first proposition, that “self is the only object of real knowledge,” or in other words that the existence of the outside world cannot be verified, to belief in the second, that self is “the only thing really existent,” is just as logically unjustified as the atheistic movement from acceptance of the idea that the existence of God cannot be ascertained to the belief that God does not exist. Consequently, I have tended to take the first definition as my personal definition of solipsism, which fairly well describes my own personal feelings, and reject the second. However, I am aware that this is somewhat idiosyncratic and confusing for those who associate solipsism with the second definition; my apologies for any confusion created.

However, even this “agnostic” form of solipsism is subject to the criticism that I insinuated before, that it would seem to be debilitating and anti-useful if on the basis of it, qua the most rigorous standards of reason, one affected to refuse to accept the existence of anything in the external world. Now of course one might ask why even after reaching this point one would be prevented from accepting the existence of the external world.

The reason is that the very essence of reason, in my opinion, is the principle of refusing to accept that which cannot be established absolutely as true. Now, I accept Descartes’ argument that only the existence of our own thoughts can be absolutely established to be true (well, actually only the existence of my thoughts). The existence of the external world, on the other hand, cannot be. As I have said before, I do not necessarily subscribe to the view that one should only accept that which has been absolutely established as true, but if one wishes to do so, then one, to the best of my knowledge, cannot accept the existence of the external world either on the merits of our perceptions or based on the existence of our thoughts.

One could perhaps accept the existence of the world provisionally if some other self-evident certain belief justified it; Descartes, for example, offers the existence of God, who would not deceive us as to the existence of the outside world, as this justifying belief, but his argument for the existence of God is much less convincing than his argument for the existence of mind. In any case, the immediate point is that the existence of the external world cannot be justified on the basis of our perceptions, and the ultimate point is that if atheists wish to apply this rigorous standard of verification to justify their denial of the existence of God, for the sake of consistency they should also apply that standard to the external world and deny its existence.

Since I do not know anyone who has managed to exist for 10 minutes without accepting at some level the existence of the outside world, the verification principle, reason, does not seem to be a sufficient criterion for accepting or rejecting the most fundamental concepts of our existence. Now it could be objected that while it is obviously necessary to accept the existence of the world in order to continue living, belief in the existence of God is not critical to our lives, and so we can uphold our intellectual integrity in denying its existence in a way that we cannot with the outside world.

But this type of argument actually just proves my point, because here this distinction is not a logical one, but simply a matter of priorities. We find it necessary to accept the existence of food, for example, so we do; God’s existence is not necessary, so we do not accept it. Therefore, we can see that the real basis for belief is not rational, impersonal, but rather subjective and personal, which I do not necessarily consider an intellectual tragedy, but simply the way things in reality are and always have been. If we accept that the ultimate criterion of our beliefs is our subjective, personal needs, then we can perhaps more directly address those needs and even maybe attain a measure of happiness, at any rate more so than if we persist in adhering to abstract, illusory paradigms.

EDIT Bad link to fake e-mail address removed by shonk

March 19, 2004

Some thoughts on logic

Posted by shonk at 11:57 AM | permalink | 3 comments

Over at mock savvy, Neil begins his investigation of logic with an attempt to categorize exactly what logic is:

By nature, logic tends to escape exact definition; it is, vaguely put, the study of thought; and while one of the most intrinsic qualities of humanity, it does not lend itself to an intuitive characterization. This definition of logic, “the study of thought,” is equivalent to “thought about thought,” and although circular and nondescript, it is not wildly insufficient, as the reader by this point in his or her life has undoubtedly experienced the phenomenon of thought. For the purpose at hand, this simple definition provides an initial locus for the investigation of logic: the qualification of human thought.

I have to admit, I don’t particularly like this definition of logic. Neil is saying that “logic” is functionally equivalent to “metathought”. Maybe I should hold back from jumping into the fray until he fleshes the idea out a little further, but I’m not at all convinced, at this point, that metathought is necessarily logical. When I think about my thoughts, I can just as easily find myself being reflective, nostalgic, arational and even irrational as logical. For example, if I think about my thoughts and beliefs from my high school years, I often do so with a mixture of fascination and contempt. I don’t necessarily subject my thoughts from those days to a rigorous analysis, though often, of course, I do.

My point is this: as I see it, logic is a mode of thought that can be applied to virtually any subject matter, rather than a particular category of thought that can only be applied to particular subjects (other thoughts, in Neil’s construction). In fact, I would go further and say that logic, in the rigorous sense, is an idealized mode of thought, a structural goal which we often strive for, but, as an ideal, one which we cannot consistently achieve. In this sense, logic is strikingly similar to art; with art, too, we aspire to an artistic way of thinking, but, as even the greatest artists could probably tell you, we fall short of that ideal more often than we succeed.

In fact, this similarity between logic and art is one that has been extensively commented on by a lot of brilliant people. The most famous example I can think of is G.H. Hardy’s A Mathematician’s Apology. As W.W. Sawyer says:

Hardy is very anxious to show that the value of mathematics lies in its beauty, not in its practical consequences. Real mathematics is that “which has permanent aesthetic value”.

In other words, Hardy is saying that, although advances in mathematics have helped spawn orbital mechanics, computers and countless other advances, those applications do not justify the study of mathematics in and of itself. Rather, mathematics is justified by its beauty, by its aesthetic appeal. This may seem a bit strange to those who cannot or will not appreciate the full beauty of mathematics, but I would just note that Hardy’s justification is one that we take for granted in the case of art. That is, although art may be used to advance political causes (propaganda), to introduce people to new products and experiences (advertising) or to make a gathering more comfortable (background music), we don’t justify art on these practical terms. We view art as valuable and worthy of our attention and our effort because it is beautiful.

With those similarities in mind, I think my definition of logic, as an idealized mode of thought, makes some sense. This, perhaps, roughens some of the lustre of logic, as it does not grant logic the more generalized position of being “thought about thought” or metathought, but I think it’s clear that thoughts about thought can be artistic as easily as they can be logical.

One other important consideration is the following: if logic is, as Neil suggests, the “qualification of human thought”, then what is the “qualification of logic”? That is to ask, is logic capable of qualifying itself? Certainly not completely so, but perhaps incompletely? I guess I would have to argue that it is not. I admit this is a bit more epistemological than I like to get, but I’m not at all convinced that we can achieve a rich understanding of logic and logical thought entirely through logic. I think a broader, perhaps intuitive, context is necessary in order to understand both the power of logic and its limitations. I touched on this issue, albeit obliquely, in my critique of Austrian economics, but I have to admit that I’m not really ready yet to fully flesh the topic out. Nonetheless, I think it’s something to think about.

That all having been said, I’m really glad Neil is doing this and am looking forward to his development.

February 19, 2004

Anti-clericalism for the new millennium!

Posted by Curt at 09:14 PM | permalink | 1 comment

The framework of ethics must be rid of this virulent latent sense of Marxist/Calvinist guilt which holds happiness and well-being to be somehow finite, of discrete quantities, and hence that the happiness of others and the happiness of oneself must be mutually exclusive. The upshot of this is that the concepts of duty and obligation become forms of self-negation, of self-immolation. In other words, one can only behave morally at the expense of oneself. Need I point out that this is all a hollow illusion? By this standard, the highest form of generosity is giving without deriving any sense of joy or satisfaction from it, i.e. how I feel when I pay my taxes, which can hardly be considered an act of altruism. No, the ones who are actually admired, perhaps because intuition is wiser than ethics, are those who find their own happiness concurrent with the happiness of another or others, and who thus do themselves good even as they do good to others. This is neither a condemnation nor an exaltation of misanthropes, but they at least do not commit the folly of trying to win approbation through a generosity which brings them no joy, and which consequently could not win them any but the most superficial praise. Perhaps it suffices merely to echo a certain Chinese philosopher much wiser than any of us who once said that duty and personal happiness can be joined only through love.

February 16, 2004

Evolutionary Psychology

Posted by shonk at 10:25 PM | permalink | 2 comments

Having a long-standing interest in evolutionary psychology, I was pleased to come across a link to this interview with Dr. David Buss over at Improved Clinch. Therein, Buss addresses the opposition to evolutionary psychology’s findings. On the topic of political dogmatism:

A second [reaction] comes from political ideologies—people have agendas for making the world a better place, and evolutionary psychology is erroneously believed to be at odds with social change.

People think “if things like violence or infidelity are rooted in evolved adaptations, then we are doomed to have violence and infidelity because they are an unalterable part of human nature. On the other hand, if violence and infidelity are caused by the ills of society, by media, by bad parenting, then we can fix these things and make a better world.”

It’s what I call the “romantic fallacy”: I don’t want people to be like that, therefore they are not like that [interviewer’s emphasis]. The thinking is wrong-headed, of course. Knowledge of our evolved psychological mechanisms gives us more power to change, if change is desired, not less power.

On the “social constructivist” opposition (the first paragraph is attributed to Buss, but I’m assuming that’s a mistake, so I’ve changed the initials to those of the interviewer, Bernard Chapin):

BC: Under the “stranger than fiction” category, in a class I taught last semester to graduate level teachers concerning human development, all but one of them answered negatively to the statement that there is a biological basis behind many of our mating behaviors. They honestly believe that “male” and “female” are socially constructed roles. How does one combat such dogmatic views? What suggestions do you have for refuting the “social constructionist” only bias among many students?

DDB: Unfortunately, students are still being taught long-outmoded ideas that have no empirical or theoretical warrant. Evolutionary psychology has revolutionized our understanding of human mating, and many other domains as well. If you ask “what new insights and empirical discoveries have been produced by those operating in a social constructivist theoretical framework?”, you come up empty-handed. If you ask the same question of those working in evolutionary psychology, you come up with literally hundreds of fascinating empirical discoveries, generated by powerful evolution-based theories.

Eventually, the outmoded social constructivist theories will fade away, since they do not generate novel insights or important empirical discoveries. Evolutionary psychology, in contrast, is here to stay.

So stick that in your pipes and smoke it, Sigmund and Carl!

February 15, 2004


Posted by shonk at 04:33 PM | permalink | 10 comments

Serre's Linear Representations of Finite Groups

If I were a Springer-Verlag Graduate Text in Mathematics, I would be J.-P. Serre’s Linear Representations of Finite Groups.

My creator is a Professor at the Collège de France. He has previously published a number of books, including Groupes Algébriques et Corps de Classes, Corps Locaux, and Cours d’Arithmétique (A Course in Arithmetic, published by Springer-Verlag as Vol. 7 in the Graduate Texts in Mathematics).

Which Springer GTM would you be? The Springer GTM Test

February 14, 2004


Posted by shonk at 06:50 PM | permalink | comment

Three different sorts of maps, all fun to play with:

the Degree Confluence Project — A project attempting to get pictures of each of the spots on Earth where a latitude line crosses a longitude line.

Gnod — A “search-engine to find things you don’t know about.” I had the most fun with the map of literature.

KartOO — A visual metasearch engine.

January 31, 2004

More on Paleo-Marxism

Posted by shonk at 11:37 PM | permalink | 3 comments

I think Curt’s analysis of Slavoj Žižek’s “What Is To Be Done (With Lenin)?” is spectacularly dead-on. This, to me, is especially prescient:

While no one could agree more as to the vacuity and superfluousness of the meaningless choices with which we are confronted every day, especially that between the twin puppets of electoral politics, insisting that we reject them wholesale and embrace on a societal level the “real,” “dangerous” choices which lie beneath them, the sort of massive overturnings embodied by Lenin, seems to me to be an attempt to apply a parablist’s psychology to politics, a hideous monster in my opinion.

As I see it, the “parablist’s psychology” of the piece stems from the fact that Žižek’s (and Lenin’s) distinction between “formal” and “actual” freedoms has a lot to do with their frustration that people simply cannot choose the impossible. For example:

Can you no longer rely on the standard health insurance and retirement plan, so that you have to opt for additional coverage for which you have to pay? Why not perceive it as an additional opportunity to choose: either better life now or long-term security? And if this predicament causes you anxiety, the postmodern or “second modernity” ideologist will immediately accuse you of being unable to assume full freedom, of indulging in the “escape from freedom,” of the immature sticking to old stable forms. Even better, when this situation is inscribed into the ideology of the subject as the psychological individual pregnant with natural abilities and tendencies, one automatically interprets all these changes as the results of their personality, not as the result of being thrown around by market forces.

When Žižek says this, he clearly thinks that there is some third alternative other than “better life now or long-term security”. However, the reality is that no system can give people long-term security without degrading their present condition to some extent. This is no less true under socialism, though the fact that the citizenry is given no choice but to forego present wealth for future “security” makes it seem otherwise.

On a somewhat related note, when he discusses the death (if only) of State Socialism and Western Social Democracy, Žižek has this to say:

What these two defeated ideologies shared is the notion that humanity as a collective subject has the capacity to somehow limit impersonal and anonymous socio-historic development, to steer it in a desired direction.

It is in this context, I think, that it is more appropriate to ask Lenin’s fabled question: “yes, but for whom? To do _what_?” Because it is not just these two ideologies which share “the notion that humanity as a collective subject has the capacity to somehow limit impersonal and anonymous socio-historic development”; this is simply fact. For those that disagree, consider the social norms that prevent you from detailing to their face the character flaws of each person that annoys you, or the aggregate of human actions which makes the suburbs an attractive place to live. No, what distinguishes these two ideologies are two premises that underlie the final clause, “to steer it in a desired direction”.

To even talk about a “desired direction” is to imply that collective desires exist, that there is some collective consciousness existing semi-independently from individual consciousness which has its own desires, perhaps antithetical to the individual desires of its components. When I say a “collective consciousness”, I mean this in a very real sense, as something more than a mere statistical aggregate of individuals. A cynical observer (such as myself) might find a bit of irony in this mystical belief on the part of confirmed materialists, but the presence of the belief is quite real, most prosaically evident in the virtually uniform appellation of a definite article when ideologues of this stripe talk about “the people”.

The second implication underlying this notion of steering “in a desired direction” that distinguishes the ideologies of state socialism and social democracy is that the state reflects the desires of humanity, of the collective consciousness. Note that the emphasis of the above-quoted sentence undergoes a subtle shift: starting with “humanity as a collective subject”, it ends with a call to state action. One would think that a person like Žižek, so critical of current states acting against the needs and desires of people, would recognize that any state is necessarily exclusive and, as such, cannot capture the totality of human desire, that any “desired direction” embraced by a state can, at best, be a poor approximation of what “the people” (to use the usual nomenclature) really want. And that, even were this not the case, the only distinction, fundamentally, between state action and any other action lies in the legal use of force, which means that even if “the people” really do desire what the state supposes they do, they may very well not value that desire above the effects of the application of force necessary to achieve it.

Which brings us back to where we started: one cannot do what is impossible. Much as I am sure we all desire it, there is never a choice that does not have its associated costs, no matter how much thinkers like Žižek talk about undermining “the coordinates of the existing power relations”. That is not to say that many of these “coordinates” should not be undermined, but rather that altering the existing power structure will not make the impossible suddenly possible.

Paleo-Marxists revived!

Posted by Curt at 03:27 PM | permalink | 2 comments

I find much fodder for further intellectual rancor in what I suppose purports to be a “re-valuation” of freedom along Leninist lines, with a title that plays off the title of a pamphlet by Lenin. The author and I start off, at any rate, on similar ground. His implicit question, which will find much sympathy at least among the angst-ridden, is how it is that, in a society in which no appreciable political or social limitations constrain us from achieving material and social prosperity, happiness is not more widespread. Rather than cavilling as to whether limitations do in fact constrain those whose desires lie outside certain societal norms, kleptomaniacs or serial killers for example, suffice it to re-open the question of whether free choice can really be equated with happiness.

Lenin, for one, formulated a theory which separated “formal” from “actual” freedom, i.e. tried to make people aware that the simple ability to choose between several alternatives did not necessarily constitute true freedom, because the finite number of options presented to them itself represents a limitation on their freedom. To cut through all the gibberish in the article, the author’s point, quite simply, is that while this distinction in Lenin’s particular case may have been entirely self-serving, a ploy by which one could strip a people of all of their personal liberties in the name of freedom, his wider point has valid application in our society as well as his. And indeed it is not a false distinction to contend that the ability to choose between certain alternatives may not constitute freedom in the wider sense. However, it does not follow that such incomplete freedom necessarily equates to unhappiness, nor that the converse, total lack of constraint, would produce happiness.

In fact, I think the author has a good deal of sympathy for Lenin’s ultimate goals, and recognizes that to achieve them would require a good deal of destruction. But this idea should sound the alarm for the rest of us. If the “actual” freedom propounded by Lenin requires the death of millions, then not only the means but also the goal should be criticized. If the obstacles to existential freedom are the lives of so many, what kind of an ideal is this? Of course this is the root of my detestation for idealists of any stripe: for them, like mystics and Platonists, this existence we inhabit means nothing, is only a shadow obscuring the ideal, and hence the separate, actual existences of all the many peoples of the world can ultimately not be of the slightest concern or relevance to them, for they are simply the disappointing precursors to the ideal.

Hence, the very notion of “actual” freedom has a whiff of madness lurking upon it, particularly for those who remember Herder’s observation that a man holding a gun near a tower packed with explosives on a dark and stormy night has strange thoughts. Most of us are not disappointed that we ultimately do not realize these wild fantasies and desires, but rather are in the end relieved that something held us back. Nothing is easier to enjoy than the fate to which a man has resigned himself. I do not mean to devalue the concept of freedom entirely, but ultimately it is simply an abstraction with no corollary in the real world, as anyone who has chosen not to die will surely have come to understand (while technically true, I do not accept for a second the rationale behind the sophism that one cannot really choose that which is not possible). While the article may end on a resigned note, speaking of the inevitable limitations on human freedom, that seems to me no more than an emotional intermezzo until the next sensational ideology plucks up his dreams of immortality again. While no one could agree more as to the vacuity and superfluousness of the meaningless choices with which we are confronted every day, especially that between the twin puppets of electoral politics, insisting that we reject them wholesale and embrace on a societal level the “real,” “dangerous” choices which lie beneath them, the sort of massive overturnings embodied by Lenin, seems to me to be an attempt to apply a parablist’s psychology to politics, a hideous monster in my opinion. We should realize in the end the abstractness of freedom; it is what Hegel called a regulative rather than a nominative end, i.e. a standard to hold one’s own conduct to, but not a real possible mode of human existence.

January 16, 2004


Posted by shonk at 04:12 PM | permalink | comment

For those who feel left out of the address bar icon scene, there’s now a handy online favicon.ico generator. Just thought you might want to know.

December 15, 2003

Information! Cheap!

Posted by shonk at 08:36 PM | permalink | comment

In his Washington Post article, Everett Ehrlich synthesizes a simple observation made by Ronald Coase to explain both shrinking firms and the ascendancy of Howard Dean: “The cost of gathering information determines the size of organizations”.

Just as the abundance of cheap information in modern times is cutting the old monoliths like GM down to size, cheap information in the form of blogging and internet campaigns is allowing a relative outsider like Dean to make inroads into (or even “take over”) the monolithic Democratic party.

Though I doubt he intended to make it generalizable, this statement may be more prescient than Ehrlich realizes:

But the Internet doesn’t reinforce the parties — instead, it questions their very rationale. You don’t need a political party to keep the ball rolling — you can have a virtual party do it just as easily.

As I thank Samizdata for the link, I’d like to commend you to a comment made there by “mad dog” (that you, Miles?):

An interesting insight on a developing theme. Reading it on Samizdata saved me all the effort of searching that out myself. Just like the article says…

Indeed, the low transaction costs available online allow small organizations such as myself (heh) to consume and produce information that is more widely available than ever before.

November 26, 2003

Spam Me? Oh, Spam YOU!

Posted by shonk at 02:09 AM | permalink | comment

By now, I'm sure most of you have heard of the "spam rage" incident in which a guy threatened to kill and/or torture people working at a company that was sending him spam. Naturally, a lot of people have mixed feelings about this; although the ice pick and anthrax threats were a bit over the top, many identify with the guy's anger (this Samizdata post and associated comments provide a good example).

Those who think the solution to every problem is another law will be gratified to know that Congress just approved an anti-spam bill. One has to imagine, given that such proposals had been languishing for six years, that the "spam rage" incident may have had some impact on the bill's passage.

Of course, I'm highly cynical of anti-spam legislation for several reasons. First, because any time any form of speech is regulated, it makes me nervous. Though I find panhandlers, street-corner preachers and billboards annoying, I don't think they're doing anything fundamentally wrong and I'm not convinced spam is fundamentally different from a billboard or a request for spare change. The only major difference, as I see it, is that spam manifests itself onto my property (my computer), whereas panhandlers and billboards don't come inside my apartment. This is certainly an important difference, but at the same time I don't think inboxes can or should be guarded from intruders in the same way that one's residence is.

That, of course, is a debatable point, but what's not debatable is that this new law, once it's signed into law by GW in December, will not end or even seriously curtail spam. I mean, the DMCA's been around for a while and, last I checked, Kazaa wasn't going anywhere (or, if it is, it's because of competition from iTunes and Napster, not due to the DMCA). Instead, the solution to spam can only come from people changing the way they read e-mail in a way that makes spamming more costly than it is remunerative.

This is a point made recently over at Catallarchy, wherein one proposed method of making spam more expensive (literally) is that of David Friedman: each person would set a (probably small) price that people not on their "white list" would have to pay to send them e-mail. In other words, digital postage. The virtue of this approach is that, if accepted broadly, even very tiny per e-mail "postage" charges would make spamming unprofitable, while not greatly affecting the average e-mailer.
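Friedman's proposal is easy to sketch in a few lines. To be clear, the function name, addresses, white list and five-cent price below are all illustrative assumptions of mine, not any real protocol; the point is just the decision rule a recipient's mail system might apply:

```python
# Hypothetical sketch of "digital postage": senders on the recipient's
# white list get through for free; strangers must attach enough postage.

def accept_email(sender, postage_attached, white_list, price=0.05):
    """Return True if the message should be delivered to the inbox."""
    if sender in white_list:
        return True  # known correspondents pay nothing
    # Strangers must cover the recipient's asking price per message.
    return postage_attached >= price

white_list = {"mom@example.com", "editor@example.com"}

assert accept_email("mom@example.com", 0.0, white_list)          # friend: free
assert not accept_email("spammer@example.net", 0.0, white_list)  # spam: refused
assert accept_email("stranger@example.org", 0.05, white_list)    # paid: accepted
```

Even a nickel per message is negligible for someone writing a handful of e-mails a day, but ruinous at the millions-of-messages scale spammers operate on, which is the whole trick.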

Another solution is that of Bayesian filtering which, despite the awkward name, is basically an algorithm which scans each incoming message and, based on a statistical analysis or its contents, decides whether it is spam or legitimate e-mail. The nice thing about Bayesian filtering is that it's completely personalized; its analysis is entirely based upon each person's identification of what is spam and what isn't. For example, I've never once received a legitimate e-mail containing the word "mortgage", so my Bayesian filter would key on that word in any incoming e-mail and (almost assuredly correctly) identify the e-mail as spam, whereas a realtor's filter would probably be inclined to accept e-mails containing the word "mortgage" (though it would likely still be able to differentiate the wheat from the chaff based on other clues). The disadvantage to this approach is that it requires a bit of time on the front end, but, as pointed out in the above-linked article, it is remarkably effective and can make producing spam that actually gets seen quite difficult.
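For the curious, here's a minimal sketch of the statistical idea, assuming nothing fancier than a naive-Bayes model over word counts (real filters are considerably more elaborate, but this is the skeleton):

```python
# Toy Bayesian spam filter: learn word frequencies from the user's own
# labeled mail, then score new messages by a log-probability ratio.
from collections import Counter
import math

class BayesianFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_total = 0
        self.ham_total = 0

    def train(self, text, is_spam):
        words = text.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def spam_score(self, text):
        # Sum of log(P(word|spam) / P(word|ham)) with add-one smoothing,
        # so unseen words don't zero out the whole product.
        vocab = len(set(self.spam_words) | set(self.ham_words)) or 1
        score = 0.0
        for w in text.lower().split():
            p_spam = (self.spam_words[w] + 1) / (self.spam_total + vocab)
            p_ham = (self.ham_words[w] + 1) / (self.ham_total + vocab)
            score += math.log(p_spam / p_ham)
        return score  # positive leans spam, negative leans legitimate

f = BayesianFilter()
f.train("low rate mortgage refinance now", is_spam=True)
f.train("cheap mortgage approved today", is_spam=True)
f.train("lunch tomorrow to discuss the proof", is_spam=False)
f.train("draft of the analysis homework attached", is_spam=False)

assert f.spam_score("mortgage rate offer") > 0      # flagged as spam
assert f.spam_score("homework proof tomorrow") < 0  # passes as legitimate
```

Note how the "mortgage" example from the paragraph above plays out: because this user's spam pile contains the word and the legitimate pile doesn't, the word alone pushes the score firmly positive, while a realtor training on different mail would learn the opposite weighting.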

There are, of course, other spam-killing schemes out there, but these two are the ones most appealing to my sensibilities. The first requires more of a global approach, as it would only be effective if broadly used, whereas the second can be effective for you even if nobody else adopts it. Either, though, has much higher potential, in my view, than legislation for making spam, if not a thing of the past, then at least no more annoying than third-class mail.

Of course, the death of spam would dry up the source material of the surprisingly entertaining impromptu art of spam-baiting. Art, as they say, is the sister of misery.

November 19, 2003

More Geek Talk

Posted by shonk at 02:19 AM | permalink | comment

Petya says "geeks rock! more geeks!", so I'm obliging (and using the opportunity to dredge my Geek Talk post up from the archives).

First off, I'm somewhat amused by the etymology of the term:

Geek is actually a very old word. It is a variant of geck, a term of Low German/Dutch origin that dates in English to 1511. It means a fool, simpleton, or dupe.
Later, in 19th century American usage, the connotation of offensive and undesirable is added and then, in 1928, it starts being used to describe "a sideshow performer who bites the heads off chickens or snakes."

How "geek" came to mean what it does today, I have no idea (man I miss having free access to the OED).

If you're looking to broadcast your geekdom to the world, you definitely need to start adding Geek Code to your e-mail signature. Of course, if you ever e-mail non-geeks, you might want to only use that sig selectively. The first time I saw a geek code block on someone's message board signature, I was mightily confused. And then immediately started trying to decrypt it instead of just googling. Which probably says something about me.

If your decryption skills aren't up to snuff, you can, of course, cheat.

As far as I'm concerned, it doesn't get much geekier than this: Today, a friend of mine referred to this Strong Bad song as a "canonical techno song". Without planning to.

Right now, I just know you're saying "huh?" You have to understand, "canonical" is a word very near and dear to the mathematician's heart. It is simultaneously very specific and very general and, as one professor said in lecture earlier this year: "There's no good definition for 'canonical'. You can't be taught how to use the term. But it does have a very concrete meaning and once you've been around mathematics enough, it's easy to tell what is canonical and what isn't." Somehow, I don't think he was talking about techno songs.

Of course, it gets really scary when you start using terms like "isomorphic" in everyday speech. And yes, I've heard it done.

November 03, 2003

I Proved all Odds are Prime - with Inductive Reasoning

Posted by shonk at 04:33 AM | permalink | 1 comment

Just so nobody forgets what a geek I really am, let's talk about induction. Now, what I want to do is to try to explain the differences between "mathematical induction" and "philosophical induction", otherwise known as inductive reasoning. Just allow me to apologize in advance for the irregularities in my explanation.

Just to start things out concretely, here are the relevant dictionary definitions of the two terms:

3. Logic.

a. The process of deriving general principles from particular facts or instances.

b. A conclusion reached by this process.

4. Mathematics. A two-part method of proving a theorem involving an integral parameter. First the theorem is verified for the smallest admissible value of the integer. Then it is proven that if the theorem is true for any value of the integer, it is true for the next greater value. The final proof contains the two parts.

Now, the concept of inductive reasoning is pretty straightforward, as it is basically a natural extension of how we view the world. Basically, when you're reasoning inductively, you're observing that a lot of things in a certain class of things have a particular characteristic and then concluding that all things in that class must have that characteristic. For example, if I notice that every swan I've ever seen is white and, based on that information, conclude that all swans are white, I'm reasoning inductively. This is, in large measure, the way in which the natural sciences work. The natural scientist observes that everything he drops falls at the same rate and then concludes that all objects fall at that rate, even the ones he's never dropped himself. This sort of reasoning seems to derive from Aristotle, who arrived at First Principles by generalizing from experience.

Now, this sort of reasoning has its merits, but it is also limited. The problem is that the inductively reasoned generalization may be true, but it is not necessarily true. There's always the possibility that there is some example that would falsify it, if only one had observed it. For example, if I am the first one at a meeting and notice that each of the next eight people to come into the room are men, it would be tempting to conclude, by inductive reasoning, that the next person to enter the room will also be male. However, this does not mean that the ninth person will necessarily be male; it may be that the chair of the meeting is a woman. Similarly, although a child in Utah may only have Mormon friends, it would be folly for him to conclude that all people are Mormons. It is because most of its conclusions are based on this sort of inductive reasoning that physics is constantly being revised; as better methods of observation are made possible by technological advances, physicists are able to observe phenomena that were inaccessible to their predecessors and therefore are not covered in the old theories.

This uncertainty would prove fatal to mathematics, were it to attempt to use an un-modified form of inductive reasoning. In fact, mathematicians are known to rather derisively refer to this sort of argument as "proof by example", which is really no proof at all. However, by its nature, the principle of induction can make problems easier to solve, so mathematics has developed a modified form of inductive reasoning called mathematical induction. There is some dispute over who first invented the technique: some argue that the first proof to use it was Francesco Maurolico's 1575 proof that the sum of the first n odd integers is n²; others that DeMorgan invented the term and made it precise in 1838; still others that Dedekind was the first to formalize the principle in the modern sense. Probably all of these are right in large measure, since use differs from definition differs from formalization; one could argue that the idea stretches all the way back to Euclid.

So, aside from all this history, what the hell is mathematical induction anyway? Basically, the notion is this: suppose you're dealing with the natural numbers (0, 1, 2 and so on; the non-negative integers) and you have some proposition about them, where P(n) denotes the proposition for the integer n. If you can show that P(0) holds, and that for every non-negative integer n, assuming P(n) holds allows you to prove P(n+1), then the proposition holds for every non-negative integer.

That's a bit technical, so let's see if I can make it a little clearer. Basically, the idea is this: I want to show that some property holds for all non-negative integers (in practice, the argument could be for all positive integers, or all integers greater than 50, or whatever, but let's stick to the simplest case). First, I demonstrate that it holds for the smallest non-negative integer, 0. The natural inclination might be to then show it holds for the next integer (1), and then for the next (2), and so on. But it's clear that if I attack the problem in this way, I'll spend a very long time not getting very far, because no matter how big I get, there will always be bigger integers where my property might fail. Even if I am really dedicated and spend many years carefully showing, one-by-one, that my property holds for each integer up to 5,000,000, there is no guarantee that it will hold for 5,000,001, just as there was no guarantee that the next person to walk in the door of my meeting was a man. So, instead of going one-by-one, I say to myself "What if my property holds for some integer? Does that imply that it holds for the next biggest integer?" If I think of the integers as lying on the number line in the usual way, this question basically asks "What if I randomly put my finger on some number on the line and assume the property holds for that number? Is there some way to show that this assumption implies that the property must necessarily hold for the number directly to the right of my finger?" If I could show this, then I will have a proof for the following reason: I already know the property holds for 0, so (since I could have randomly picked 0 when I stuck my finger on the line) I know the property must hold for 1. Then, since I know the property holds for 1, it must hold for 2, since I proved that if it holds for any integer then it holds for the next integer up.
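To see the two-part method in action, here is Maurolico's example written out as an induction (in LaTeX): the claim is that the sum of the first n odd integers is n².

```latex
% Induction on $n$ for the claim $1 + 3 + \cdots + (2n-1) = n^2$.

\textbf{Base case} ($n = 1$): the sum is just $1 = 1^2$. \checkmark

\textbf{Inductive step}: assume $\sum_{k=1}^{n} (2k-1) = n^2$ for some
$n \geq 1$. Adding the next odd integer, $2(n+1) - 1 = 2n + 1$, gives
\[
  \sum_{k=1}^{n+1} (2k-1) \;=\; n^2 + (2n + 1) \;=\; (n+1)^2,
\]
which is exactly the claim for $n+1$. By induction, the formula holds
for every positive integer $n$.
```

Notice that the inductive step never cares which particular n we assumed the formula for; that indifference is precisely what lets the argument cover all the integers at once.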

It is precisely in this way that we can think of our "induction engine" running along all the numbers and proving our proposition. The Wikipedia makes a nice analogy with dominoes:

if you have a long row of dominos standing on end and you can be sure that

1. The first domino will fall.

2. Whenever a domino falls, its next neighbor will also fall.

then you can conclude that all dominos will fall.

The reason is this: you know that if a domino falls, it will knock over its next door neighbor, and you know the first domino will fall, so you know the first domino will knock over the second domino, which will knock over the third, which will knock over the fourth, and so on, until all the dominoes have fallen down. It doesn't matter how many dominos we have, they're all going to fall down. We could even have infinitely many dominoes and still be sure that they all will fall.

This isn't the most obvious concept in the world, but it is an extremely elegant one. The beauty of mathematical induction is that it allows you to prove that a number has some property even if you really don't know anything about the number itself. No matter how nasty the number, if you know that its predecessor will "knock it over" if struck in just the right way, then you can be sure that the number will be "knocked over" simply by pushing over some really small number, like 0 or 1, and then letting the chain reaction take place.

In point of fact, though this is beyond what I want to try to explain, the principle of mathematical induction works not just on non-negative integers, but on any set that more-or-less looks like the non-negative integers (what mathematicians call a well-ordered set). So it really is a powerful tool.

Unfortunately, philosophic induction (inductive reasoning) and mathematical induction use the same word to describe similar, but by no means identical, ideas. For this reason, I usually call philosophic induction "extrapolation", since it extrapolates known results into unknown territory. But I'm well aware that philosophers don't like people like me using different terminology, so I imagine we'll be stuck with this discrepancy until we stop listening to philosophers. Not that most of us do, anyway (yeah, taking cheapshots both at Aaron and my brother with that one).

October 31, 2003


Posted by shonk at 02:38 AM | permalink | comment

In my analysis class today, the professor was lecturing on the Hahn-Banach Theorem, which states that if you have a bounded linear functional defined on a subspace of a normed linear space, there is an extension of the functional to the whole space that preserves the norm. Don't worry, I'm not going to go into the theory or its applications, but I did want to comment on something the professor said in class, namely: "The idea of linearity is deeply ingrained in us". Now, he was specifically commenting on our natural inclination to try to deal with linear functions, since they're so much nicer to work with. For example, the first time we encounter something like (x + y)² in algebra, we want it to be equal to x² + y² instead of the clumsier x² + 2xy + y². This reasoning, aside from conforming to our aesthetic sensibilities, is also founded on sound thinking, since mathematics has far more tools to deal with linear functions than other varieties.
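As a toy illustration of my own (nothing from the lecture), additivity, the heart of linearity, is easy to spot-check numerically, and squaring flunks the test immediately:

```python
def is_additive_on(f, samples):
    """Finite spot-check of additivity: f(x + y) == f(x) + f(y).

    Passing is necessary but not sufficient for linearity; failing
    on any pair is conclusive proof the map is not linear.
    """
    return all(abs(f(x + y) - (f(x) + f(y))) < 1e-9 for x, y in samples)


samples = [(1.0, 2.0), (-3.5, 0.25), (4.0, -4.0)]

# A scalar multiple is additive; squaring is not: (1+2)^2 = 9 != 1 + 4.
print(is_additive_on(lambda x: 3 * x, samples))   # True
print(is_additive_on(lambda x: x * x, samples))   # False
```

The professor's point, restated: we have a whole toolbox for the first kind of map, and mostly improvisation for the second.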

However, I got to thinking about his statement and I think it may be more broadly applicable than he intended. The idea of linearity is, indeed, deeply ingrained in us. The most challenging books or movies are often those that reject the traditional linear forms of narrative; they are challenging in large measure precisely because they stray from that ingrained idea of linearity. The linearity that characterizes our perception of time makes clichés like "time flies when you're having fun" both memorable and slightly ridiculous. We like to think that time ticks inexorably by, each second lasting precisely as long as the last. This linearity applies to our perception of space as well: we see the earth as flat, we like our roads straight and our buildings upright, we figure distances as the crow flies.

In a way, we might see the 20th century as the century of non-linearity: the Theory of Relativity debunked the notion that time and space are linear or flat, quantum mechanics and chaos theory question the very possibility of linear determinism, writers like Joyce, Nabokov, Amis and Márquez explored non-linear narratives, as did movies like "Un chien andalou" and "Memento", networks and hyperlinks changed the ways we learned, read and interacted. Certainly, non-linearity wasn't unique to the 20th century - strong strains of it are present in Newton's (and Leibniz') calculus, impressionism, the work of Diderot and Sterne, etc. - but I don't think it's a stretch to say that it has been most prominent in the last century or so.

I admit that this isn't an important observation, but I guess I just find it fascinating that art, culture and science are all, seemingly in tandem, redefining themselves in non-linear terms. And I think that part of the reason the adaptation to these new terms is so difficult for so many is precisely because linearity is so deeply ingrained in us. Heady thoughts indeed for a Thursday afternoon analysis course.

Bookkeeping Note: I've added a new section to my links, "Literature Online". I've collected links to some of my favorite books and poems under that heading. Book titles are linked to full-text versions, novelist names to their biographies and poet names to a collection of their poetry. Enjoy.

September 26, 2003

Strange Beliefs

Posted by Curt at 02:02 PM | permalink | comment

I don't know why (certainly not procrastination--I'm done with class for the week), but I am going to wade a little ways into this matter despite knowing far less about the issue than my brother. However, two things seem apparent to me. One is the ongoing superstition among scientists that they are somehow immune to superstitions, which seems more and more ludicrous to me every time I hear of some new fad theory like superstring theory or, apparently, "sync." These generally fall either under the category of "so obscure that they cannot be understood, let alone proved," as in the case of superstrings, or a series of blindingly obvious observations all swept up in some grandiose-sounding concept, as seems to me to be the case with this new theory of "sync." The most cuttingly insightful example of this grand theory which is supposed to wrap together biology, chemistry, physics, et al. is that fireflies can coordinate their flashing by observing each other? Imagine! At root, are any of these sorts of theories really more substantially grounded than Leibniz's theory of monads? Anyway, I think the more important point is that these kinds of wildly presumptuous theories are more or less inevitable when the natural world is essentially perceived through a mathematical lens, because the discipline is predicated upon discrete, quantitative relationships between all the elements. And empirically also, talking about "spontaneous order" and logic in nature is to some extent simply tautological, as all of our empirical concepts of order and reason are based upon the structure we perceive in nature and human society. Or as Hegel said back in 1821: "What is rational is actual and what is actual is rational...since rationality...enters upon external existence simultaneously with its actualization."
Far from rendering math obsolete, this "sync" theory seems to demonstrate to me the current dominance of logical systemizing over the particular subject to which it is applied. I think in general we pay far too little attention to the ways the contents of our thoughts are shaped by the means of communication we use. In my opinion, the content of our conscious thoughts is simply the articulation of our inchoate feelings via language, and if anything divides our conscious selves from other animals, it must essentially be language itself. Anyway, the point I am making is that when a physicist uses numbers as a language to describe natural phenomena, that description will be limited and defined by the structure of the language, of math, just as an English-speaker or a Russian-speaker is constrained in the contents of their thoughts by the language of those thoughts. Therefore, I do not honestly understand how anyone could hold that math is inapplicable or useless in studying something like this theory of "sync", whatever it is, which is essentially based on the observation of numerical relationships in nature, i.e. is mathematical in nature. Anyway, I am sorry for ranting for so long, but I find myself constantly confounded by the dogmas in our society of those who argue about math as if it were some sort of philosophical object or concept rather than what it is: a sort of symbolic language which imposes a filter through which everything it is used to describe must be perceived. I believe this is true even in the theoretical realm, as number-systems could not simply have been dreamed up a priori as a speculative-imaginative exercise, but rather serve to represent and correspond to quantities of objects observed in the world.

September 25, 2003

Spontaneous Order

Posted by shonk at 11:08 PM | permalink | comment

That being said, I take some objection to Doss's comments and some of those made by Tim in response to the post. Quoth Doss:

So if those researching Sync want to have success when it comes to thinking, subjective creatures, they'd be best advised to, as Alfred Marshall suggested, "burn the mathematics."

And Tim rather sarcastically adds:

Also, I'd like to see the equation that represents humans evaluating a widget based on their subjective preferences. Please email me that when you get a chance.

In both cases, I find myself more in agreement with another commenter on Doss's post, Paul Philip, who contends:

I disagree with your automatic dismal of mathematics. Mathematics is just a tool, the problem is with the application. Alfred Marshall once said that biology was a better method than physics for the study of economics, the problem was that biological toolkit was too incomplete. Economists imposes the metaphor of a machine on economic activity because the toolkit was more complete at the time. The real problem is that the machine metaphor is very limited. There are problems in the science of self-organizing systems which require some complex mathematics. The results will be useful to the degree that the model encoded in the math fits with reality.


However, there are problems where math is the right tool. (Again, the problem in neoclassic economics ISNT the use of math, it is the limited metaphor imposed by the math - it is the inappropriate use of tools).

In fact, I might go even further than he does. First of all, I'd like to point out that, according to Strogatz, the study of spontaneous order is a subset of complexity theory, an offshoot of chaos theory. Now, I'm by no means an expert on chaos theory, but it is, without doubt, a field with heavy-duty mathematical content. Absent the tools of statistics and, oddly enough, topology, chaos theory could hardly have gotten off the ground (an excellent introduction to the ideas of chaos is Ian Stewart's Does God Play Dice?; note that even though it is introductory and intended for the non-specialist, Stewart's book has heavy mathematical content and an even stronger, invisible mathematical foundation -- and, of course, Stewart himself is a mathematician). So, to me, the notion that a study of spontaneous order can divorce itself from mathematics is absurd. Furthermore, I just want to point out that what is being called "mathematics" in these objections to mathematics is, primarily, statistics and calculus. Mathematics is a much broader field than these two particular areas and many would argue that statistics, while it uses mathematical tools, is actually a separate field. The fact that Statistics and Mathematics are different departments in most major universities is an exemplum of this idea.
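For a taste of what that "heavy-duty mathematical content" looks like in practice, the logistic map is the standard opening example in chaos texts like Stewart's. This little sketch is mine, not anything from his book; it shows the hallmark of chaos, sensitive dependence on initial conditions, using nothing but arithmetic:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), the textbook
    example of a simple deterministic rule with chaotic behavior."""
    return r * x * (1 - x)


def orbit(x0, steps, r=4.0):
    """Iterate the map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs


# Two starting points a hair (1e-10) apart; iterating the same
# deterministic rule drives the trajectories apart anyway.
a = orbit(0.2, 30)
b = orbit(0.2 + 1e-10, 30)
print(abs(a[-1] - b[-1]))  # far larger than the initial 1e-10 gap
```

Deterministic, bounded, and yet practically unpredictable: exactly the sort of behavior that statistics and topology were drafted in to tame.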

Incidentally, I'd also like to point out that the Austrians' claim to be divorced from mathematics is totally absurd. Now, I am by no means an expert on Austrian economics, but, although the Austrians may dispense with the rather tedious and twisted equilibrium calculations that are the trademark of neoclassical economics and econometrics, I would contend that the Austrian approach is actually very mathematical. In fact, as noted by Philip above, neoclassical economics is actually more similar to physics, in my view, than it is to mathematics. After all, mathematics is decidedly not empirical. Mathematicians and Austrian economists, as I understand the field, argue a priori, starting with certain axioms and hoping to deduce certain theorems from those axioms. In fact, this deduction takes place under the auspices of logic, which, though not always recognized as a part of mathematics, Russell and Whitehead famously labored to establish as the very foundation of mathematics. In any case, I think both Austrian economics (despite its flaws) and mathematics can be seen as a kind of meta-system, a way of thinking rather than a particular approach.

And, as I read it, evolutionary psychologists like Dawkins (ev. psych. is closely related to spontaneous order) do something similar. For example, in The Selfish Gene, Dawkins is largely examining certain phenomena (like charity) and then trying to postulate simple principles which, if adhered to, would eventually evolve into the complex observed phenomena. These principles, though not axioms in the mathematical sense, have certain similarities.

My point is not to demonstrate that the study of spontaneous order is a mathematical discipline, nor that it should be. Rather, I just want to make the point that it has certain similarities to mathematics and, of course, will necessarily need to use mathematical tools in many instances. In fact, though I admit to not knowing nearly enough to be able to have any insights, it seems like mathematics, especially areas of study like graph theory and networks, might be able to shed some light on some of the applications of spontaneous order mentioned by Strogatz:

"In addition to the shear wonder of knowing why crickets chirp in sync or how the cells in your heart keep in step for three billion beats in a lifetime, there are applications in medicine and communications. For example, maybe you want to understand cardiac arrhythmias or how the brain works. There are also applications in super conducting and wireless communications,"

Is it clear yet that I'm procrastinating?

September 19, 2003

Moving On

Posted by shonk at 12:07 AM | permalink | comment

Let me tell you, grad school is a humbling experience. Even when you're pretty on top of things, it's a lot of work. Which, in math at least, means a lot of staring at a sheet of paper covered in undecipherable symbols, wondering what the hell you were thinking. Or staring at the chalkboard in your office, subconsciously hoping that if you stare intently enough, the symbols will rearrange themselves into the proper order.

But, on the other hand, it's also exhilarating. After all, someone came up with these things, figured them out, passed the knowledge along. Which is really pretty extraordinary when you consider just how abstract it all is.

September 12, 2003

Geek Talk

Posted by shonk at 06:49 PM | permalink | comment

Let's face it, when you're studying mathematics in graduate school, you tend to acquire something of a geek vocabulary. You start getting excited about things like diffeomorphisms, quasi-manifolds and R-modules. Actually, though some of the new terms you pick up are pretty unusual, what will really throw you for a loop is the specialized meaning attributed to otherwise commonplace terms. For example, words like continuous, ideal, normal, integral, field, group, extension, map, graph, module, knot, domain, range, compact, open, closed, picture, braid, and smooth (among many others) all have specialized mathematical definitions that may or may not relate to their usual definition in some way.

And even when they do denote something similar to what we would expect, such is hardly obvious from the definition without specialized knowledge. The knots that knot theorists study, as an example, are recognizably similar to the normal conception of a knot, but they are defined to be "continuous embeddings of the circle in the 3-sphere".

That's really the uninteresting case, though. After all, some of these definitions seem totally arbitrary, leading to this inevitable comparison in junior-level algebra: "Ideals are basically just like normal subgroups, but for rings". I wish some confused English major had wandered into class just at the moment I heard that phrase sophomore year, because, as the professor said it, all of us in the class were nodding our heads, saying, "Oh, okay, that makes sense". The look of horror on the poor lad's face would have been priceless.

All of this leads me back to a conversation I was having the other day about language. I won't get into the details, but at one point I asserted that what we call things affects how we think of them, asking the hypothetical question "Would Social Security get the same kind of funding if it were called 'State-mandated Ponzi Scheme?' " But I don't necessarily think that the same reasoning applies to totally abstract concepts like those studied in mathematics. Would it make any difference for comprehension if we called it a "norring" rather than an "ideal"?

I, for one, am glad that mathematics has co-opted so much everyday vocabulary. If it hadn't, my Russian topology professor could never have said this yesterday: "If it's infinitely smooth, we just say it's simply smooth". Henceforth, then, I demand that we call Barry White simply smooth, both because that's the preferred nomenclature and because it sounds less intimidating (though my girlfriend asserts that Mekhi Phifer is much smoother than Barry White).

August 07, 2003

A Geek Tragedy

Posted by shonk at 11:11 PM | permalink | comment

It has all the elements of a classic tragedy: a well-meaning hero who tries to do good but is ultimately destroyed by his own fatal weakness. In this case, we're talking about the finger calluses sustained by my girlfriend from spending too much time typing on her laptop. Which is melodramatic in and of itself, because fingertip calluses are like carpal tunnel lite. They're the Michelob Ultra of repetitive stress disorders. And yes, I'm jealous because the only calluses I have are from spending 23 hours a day horizontal (but usually clothed, sadly).

That last, by the way, is called self-deprecation, and some people are against getting cheap laughs from it. Since pretty much my entire comedic repertoire consists of self-deprecating jokes, you can understand why. Of course, I prefer to think of myself as fulfilling this bit:

The poetics of personal failure have failed. We msut [sic] make them work again.

Of course, I'd imagine it would be pretty hard for "we" to make personal failure work. I mean, wouldn't personal failure be an individual, not a collective, thing? That, and personal failure already reached its apex with Fitzgerald. And pedophiles.

But then there's the other side of the story. Like, for example, "In Praise of Self-Deprecation":

The buzzard has nothing to fault himself with.

Scruples are alien to the black panther.

Piranhas do not doubt the rightness of their actions.

The rattlesnake approves of himself without reservations.

The self-critical jackal does not exist.

The locust, alligator, trichina, horsefly

live as they live and are glad of it.

The killer-whale's heart weighs one hundred kilos

but in other respects is light.

There is nothing more animal-like

than a clear conscience

on the third planet of the Sun.

And that's a good point, too. If you can't make fun of them, have you really come to understand your flaws? I say no. And if you don't understand your flaws, then you're cruising for a geek tragedy of your own, buster.

Plus, self-deprecation can be pretty damn funny. Like Clarence Thomas, a Yale Law School grad, having a sign on his bookshelf saying: "Save America: Bomb Yale Law School" (from here). Of course, the tragedy of taking yourself way too goddamn seriously can be funny, too. Just see the Michael Teachings.

Why, you may be asking, did I google "self deprecation" and then write a post stringing together the funniest sites that came up? Because I'm pathetic and have nothing better to do than google for random phrases. Well, that's not really true (stop that snickering, you). Mainly it's because I'm dreading going to bed. You see, I'm getting up at 5:30 tomorrow morning to start the 1700 mile drive from Boulder to Philadelphia and, since I went to bed around 4:00 last night, I know I won't be able to sleep until well after midnight. So why lie in bed tossing and turning and unable to sleep when I could be filling my brain with useless information and then passing it on to unsuspecting (and largely fictional) readers? And I'm just tired enough to not recognize my incipient incoherence. Which is pretty much mandatory for blogs, isn't it?

And yes, the above means this may be the last update for a few days, or at least until I find an internet hookup somewhere on the road in middle America. Which is likely; I'm sort of like a heroin junkie who always manages to get another fix. And the detox is killer.