Archive for the 'Geek Talk' Category

Triple linking numbers, ambiguous Hopf invariants and integral formulas for three-component links

Inverse image of a regular value

Dennis DeTurck, Herman Gluck, Rafal Komendarczyk, Paul Melvin, David Shea Vela-Vick and I have just put our paper “Triple linking numbers, ambiguous Hopf invariants and integral formulas for three-component links” on the arXiv.

The basic background for the paper is as follows. In 1958 the astrophysicist Lodewijk Woltjer was studying the Crab Nebula when he discovered that there is a certain quantity which is constant for a force-free magnetic field in a closed system (he had argued in an earlier paper that the magnetic fields in the Crab Nebula are force-free). This quantity turns out to be important in, e.g., plasma physics. H. K. Moffatt coined the term “helicity” for this quantity and suggested that it measures the extent to which the field lines wrap and coil around each other.

Vladimir Arnol’d made this more rigorous in his 1973 paper “The asymptotic Hopf invariant and its applications” (which doesn’t appear to be available online) in which he demonstrated that helicity can be thought of as an “average asymptotic linking number”. To explain what that means, I need to digress for a moment into knot theory.

For a mathematician, a knot is just a piece of string where the two ends have been glued together. This can be done in the obvious way or in not so obvious ways. Here’s Wikipedia’s table of all the topologically distinct knots with up to 7 crossings (topologically distinct means you can’t get from one to the other without cutting the string or passing it through itself):

Knot table

Knot theory is an incredibly rich field in mathematics which is fundamental to the study of three- and four-dimensional spaces and which has applications to everything from shoe-tying to quantum computers.

A collection of two or more knots is called a link. The simplest non-trivial link is the Hopf link:

Hopf link

Here’s a slightly more complicated link (which I drew for this paper):

Trefoil linked with unknot

The simplest numerical description of a link is its linking number. I don’t really want to get into the precise definition of the linking number, but it’s easily illustrated by the following three examples. First, a link with linking number zero:

Linking number 0

Linking number 1:

Linking number 1

Linking number 2:

Linking number 2

Now, getting back to helicity, Arnol’d said that the helicity is an “average asymptotic linking number”. What does the “average asymptotic” business mean? Well, helicity is a property of magnetic fields (or, more generally, of vector fields). Given a magnetic field, you could put two charged particles in the field at different points. Since they’re charged, the two particles will start moving, tracing out two paths (called orbits of the field). If you keep track of the paths for a long time T, you’ll get two long curves in space; close them up and you’ll have two loops in space. Now, a loop in space is just a knot, and two knots form a link, so there’s a linking number between these two loops. If you let T go to infinity and take an appropriate average, you get the average asymptotic linking number of the two orbits, which Arnol’d tells us is equal to the helicity of the field.

This is all very nice, but there’s a slight problem. Helicity is a useful quantity in plasma physics in part because it provides a lower bound for the field energy (this was also proved by Arnol’d). Any magnetic field has an intrinsic energy which has a tendency to decrease as the field evolves toward equilibrium. If the field has non-zero helicity, then you know that the intrinsic energy of the field can’t decrease below a certain amount related to the helicity. This means that, if you know the helicity of a field, then you know a floor below which that field’s energy can’t relax, no matter how the field evolves.

The problem is that this doesn’t go both ways: having zero helicity doesn’t necessarily mean that the field can relax to a zero-energy state. So the question, posed by Arnol’d and Boris Khesin in their book Topological Methods in Hydrodynamics, is this: are there “higher-order” helicities which would kick in when the ordinary helicity is zero and provide lower bounds for the field energy?

The problem, then, is to come up with a sensible definition for a higher-order helicity. Recall that Arnol’d showed that the ordinary helicity is an average asymptotic linking number. There is a beautiful integral formula, discovered by Gauss back in 1833 and known as the Gauss linking integral, which computes the linking number of two closed curves (brief aside: see the papers by DeTurck and Gluck and by Vela-Vick and me for generalizations of the Gauss linking integral). Using Arnol’d’s approach, it’s fairly straightforward to derive the formula for helicity from the Gauss integral.
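For the curious, here is the standard form of the Gauss linking integral for two closed curves x(s) and y(t) (this is the textbook form, not lifted verbatim from our paper):

```latex
\mathrm{Lk}(x,y) \;=\; \frac{1}{4\pi}\oint\!\!\oint
  \frac{\big(\dot{x}(s)\times\dot{y}(t)\big)\cdot\big(x(s)-y(t)\big)}
       {\lvert x(s)-y(t)\rvert^{3}}\; ds\, dt
```

Replacing the two tangent vectors by the values of a vector field V and integrating over the domain Ω twice gives the usual formula for helicity:

```latex
H(V) \;=\; \frac{1}{4\pi}\int_{\Omega}\!\int_{\Omega}
  \frac{\big(V(x)\times V(y)\big)\cdot(x-y)}
       {\lvert x-y\rvert^{3}}\; dx\, dy
```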

That works great for the ordinary helicity, so one might hope something similar will work to get to higher-order helicities. Remember that I said that the linking number is the simplest numerical description (more precisely: topological invariant) of a link, but it’s certainly not the only one. In fact, the linking numbers are useless for one of the simplest three-component links, the Borromean rings:

Borromean rings

The salient feature of the Borromean rings is that they form a non-trivial link (meaning the rings can’t be pulled apart without breaking one of the components), but deleting any one component causes the whole thing to fall apart. The linking numbers between components are all zero, so you need a more sophisticated measure than the linking number to describe what’s going on.
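You can even check this numerically. Here’s a quick sketch (my own illustration, not code from the paper): a standard parametrization of the Borromean rings by three mutually perpendicular ellipses has all pairwise Gauss linking integrals equal to zero, while a Hopf-style pair of circles gives ±1.

```python
import numpy as np

def linking_number(c1, c2):
    """Approximate the Gauss linking integral for two closed polygonal
    curves, each given as an (n, 3) array of points (last point joins first)."""
    d1 = np.roll(c1, -1, axis=0) - c1   # edge vectors of curve 1
    d2 = np.roll(c2, -1, axis=0) - c2   # edge vectors of curve 2
    total = 0.0
    for p, dp in zip(c1, d1):
        r = p - c2                       # vectors from curve-2 points to p
        dist3 = np.linalg.norm(r, axis=1) ** 3
        total += np.sum(np.einsum("ij,ij->i", np.cross(dp, d2), r) / dist3)
    return total / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
a, b = 2.0, 1.0   # ellipse semi-axes; a > b makes the three ellipses Borromean

# Three mutually perpendicular ellipses forming the Borromean rings
E1 = np.stack([a * np.cos(t), b * np.sin(t), 0 * t], axis=1)
E2 = np.stack([0 * t, a * np.cos(t), b * np.sin(t)], axis=1)
E3 = np.stack([b * np.sin(t), 0 * t, a * np.cos(t)], axis=1)

# A Hopf link (two round circles linked once) for comparison
A = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
B = np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)

for name, (u, v) in [("E1-E2", (E1, E2)), ("E2-E3", (E2, E3)),
                     ("E3-E1", (E3, E1)), ("Hopf", (A, B))]:
    print(name, round(linking_number(u, v), 3))
```

The pairwise values for the ellipses come out (numerically) zero even though the three rings can’t be pulled apart, which is exactly why a subtler invariant is needed.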

This measure was provided by John Milnor in his senior thesis(!) in which he classified three-component links up to “link-homotopy”. Milnor came up with a new invariant which I’ll just call μ (though the μ usually comes with various decorations). The definition of μ is actually quite unpleasant, but it is equal to 1 for the Borromean rings and 0 for the trivial three-component link.

Now, by analogy with the Arnol’d approach for ordinary helicity, you might hope that a suitable “average asymptotic μ invariant” would give a higher-order helicity. However, in order to get a useful formula for such a higher-order helicity, you need some sort of integral formula for the μ invariant which is analogous to the Gauss integral. That’s one of the results in our paper (which I’ve finally got back around to mentioning): we give an integral formula for the μ invariant in the cases where that makes sense.

To get there, we related Milnor’s μ invariant to another topological invariant, the Hopf invariant. The basic idea is this: any three-component link is fairly naturally associated to a map from a space called the 3-torus (which naturally lives in 4-dimensional space) to the 2-sphere (think of the surface of the Earth). Such maps were classified (up to homotopy) by Lev Pontryagin in a 1941 paper; the key invariant defined by Pontryagin, denoted by ν, is a generalized (and somewhat ambiguous) version of the usual Hopf invariant and is typically called either the Hopf invariant or the Pontryagin-Hopf invariant.

Our main result is that the μ invariant of a three-component link is equal to half the Pontryagin-Hopf invariant of the associated map. We have two different proofs of this, one purely topological (including the picture at the top of this post) and one more algebraic (following a key insight of Nathan Habegger and Xiao-Song Lin).
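In symbols (this is just a restatement of the sentence above, with f_L denoting the map associated to the three-component link L and ν Pontryagin’s invariant):

```latex
\mu(L) \;=\; \tfrac{1}{2}\,\nu(f_L)
```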

Okay, that’s probably not enough detail for the mathematicians and probably way too much for everybody else, so I think I’ll stop here.

Higher-dimensional linking integrals

My first research paper, “Higher-dimensional linking integrals” (co-authored with my office mate, David Shea Vela-Vick), just went up on the arXiv, which is the most popular preprint server for mathematics, physics, computer science, and a few other related fields. We haven’t decided where to submit it yet, but hopefully we’ll get that figured out in the next few days. If I have some time this week, I’ll try to write up a short, non-technical summary to post here.

Does this make me a polymath?

In the spirit of Curt’s post from April and my own post from last November, here’s a rundown of a few of the things I’ve been reading the last few weeks:

  • God’s Debris, by Scott Adams. Billed as a thought experiment masquerading as fiction, the Dilbert creator’s first foray into “serious” writing is kind of silly. The entire book consists, basically, of a near-omniscient old man questioning his naïve interlocutor’s assumptions about the universe. It raises some legitimate questions, but provides no really satisfying answers, sort of like a late-night discussion between stoned philosophy majors. In fact, that may well have been Adams’ inspiration. On the plus side, it’s free and short.

  • A Mathematician’s Apology, by G.H. Hardy. Considered by many mathematicians to be the definitive justification for doing pure mathematics, Hardy’s book stands out as much for his bitterness at the age-related decline in his mathematical faculties as for its defense of mathematics. That’s not to say that the book is without merit; Hardy’s justification of mathematics on purely aesthetic grounds is about as well-stated as I’ve ever read, and certainly all subsequent such arguments owe a heavy debt to this book. Unfortunately, Hardy’s aforementioned bitterness, coupled with his rather heavy-handed elitism, occasionally makes reading the Apology feel like listening to your grandfather talk about the merits of rap music. On the other hand, the Apology also gains a certain anachronistic appeal due to developments since its publication in 1940: one of Hardy’s primary justifications of pure mathematics in general, and especially of his own field, number theory, is that such pursuits will never yield any military applications (this was especially relevant, of course, in 1940). With the rise of public-key encryption since the mid-1970s, however, this has become an absurd claim: modern encryption is intimately connected with and derived from advances in number theory (including some of Hardy’s own results), and it would be rather difficult to argue that encryption doesn’t have military applications.

  • Free Culture, by Lawrence Lessig. A fascinating book, as much for the historical context it provides to the current copyright debate as for its supposedly radical suggestions for altering copyright law. Lessig makes a compelling case that the conception of property rights embodied by current copyright law and organizations like the MPAA is both inconsistent with American tradition and indeed quite intellectually extreme. While I have considerably mixed feelings about his proposed solutions, I think he does an admirable job of arguing that there is a serious problem and that it doesn’t just have to do with intellectual piracy. In fact, my biggest complaint about the book is the excessively insular tone it takes towards its readership; apparently, Lessig seems to think that the only people who care about these sorts of issues are “crunchy lefties”, even though he’s intellectually aware that his argument applies more broadly. See, e.g., the following quotation:

    But there’s an aspect of this story that is not lefty in any sense. Indeed, it is an aspect that could be written by the most extreme pro-market ideologue. And if you’re one of these sorts (and a special one at that, 188 pages into a book like this), then you can see this other aspect by substituting “free market” every place I’ve spoken of “free culture.” The point is the same, even if the interests affecting culture are more fundamental.
    Still, it’s a good book and it’s a free download, so there’s no reason not to check it out.

  • Tartuffe and Other Plays, by Molière. Aside from “The Misanthrope”, I’d never read anything by Molière until this book, but I’d been increasingly coming across references to him in other reading. Unfortunately, I don’t speak French nearly well enough to read this in the original and Frame’s translation is, to put it bluntly, crashingly inelegant, but enough of Molière’s genius managed to survive to make reading this book eminently worthwhile. While “Tartuffe” is obviously the most famous of these plays, the ones that held the most (admittedly, somewhat anachronistic) appeal for me were the two responses to critics of “The School for Wives”: “The Critique of The School for Wives” and “The Versailles Impromptu”. What’s perhaps most amazing about these plays is that they work (at least as literature; I don’t know how well they would hold up in the theater) despite how absurdly meta they really are. For example, a one-sentence summary of “The Versailles Impromptu” would probably be something like the following: A play written and produced on short notice at the behest of the king about the process of making a play on short notice at the behest of the king in which the playwright/director/lead actor decides to take the easy way out by producing a satire of the criticism of his satire of the criticism of his satire of over-protective and jealous husbands. That such a thing is even coherent, let alone enjoyable to read, is as impressive a testament to Molière’s skill as anything I can think of.

  • Hacker Crackdown, by Bruce Sterling. The sci-fi writer’s foray into journalism yields an interesting history of both hacker culture and the legal backlash against hacking in the early 1990s. One word of caution: this book was published in 1994 and deals primarily with events that took place before 1992, so if you’re looking for something about Internet-era hackers, you’ll have to go somewhere else. That being said, it’s still surprisingly relevant to the modern day, especially with regards to the heavy-handed approach taken by law enforcement and big business (AT&T and the Baby Bells in the book, ISPs and music labels today) in dealing with illicit online activities. When Sterling talks about the “purely theoretical” (and quite extreme) damages invented by BellSouth for the posting on various bulletin boards of one of their internal documents or of the indiscriminate confiscation of computer equipment that was demonstrably unrelated to that crime, it’s hard not to see parallels to more recent events. Another free download.

  • Letters to a Young Mathematician, by Ian Stewart. Stewart says that

    Letters to a Young Mathematician is my attempt to bring some parts of A Mathematician’s Apology up to date, namely those parts that might influence the decisions of a young person contemplating a degree in mathematics and a possible career in the subject.

    For the most part, he succeeds admirably. Stewart really does do a pretty good job of explaining just what, exactly, it is that mathematicians do, though of course his descriptions are most directly relevant to his own field (complex dynamics and dynamical systems). One major advantage Stewart has going for him is that he’s a very engaging writer, the sort of guy who seems like he’d be a lot of fun to have a beer or five with. This quality is especially apparent by contrast with Hardy, who would probably be appalled by the mere suggestion that he would be the sort of person to have a casual beer with the likes of you. I would definitely recommend Letters to a Young Mathematician to anybody who is either interested in pursuing a career in mathematics or who is just curious what the hell those mathematicians are up to, though I would warn any potential readers that Stewart’s basic conceit (i.e. that this is a hypothetical series of letters to an up-and-coming mathematician, starting when she’s in grade school and ending when she gets tenure) gets old after a while.

  • Glasshouse, by Charles Stross. One of the big complaints about Stross’ last sci-fi book, Accelerando (yet another free download), was that it was, in the end, about an upload culture and that, once people stop being human, they stop being interesting. In particular, by the end of Accelerando, the protagonists live in a culture so technologically advanced that physical death is meaningless provided you back up regularly, physical bodies are as interchangeable as clothing and distance is something understood in the abstract but essentially meaningless. As a result, there’s not exactly a lot of drama in people’s lives. In Glasshouse, Stross manages to re-inject some human interest into this universe by addressing the most obvious potential wrench in the works of the idyllic setup in Accelerando: data corruption. The protagonist of Glasshouse is a veteran of the most destructive war in human history, a war neither he nor anybody else quite understands or even remembers because it was fought against a nebulously-defined group of Luddite fanatics who figured out how to selectively delete people’s memories, especially those related to what the war was about. Now that he’s accidentally signed on for a purported psychology experiment run by those same fanatics in an inaccessible station literally in the middle of nowhere with access only to supposedly late-20th/early-21st century technology, with no offsite backups and stuck in the body of a petite woman, it all boils down to whether he can figure out what’s going on and beat down the bad guys before he is, truly and permanently, killed. The somewhat artificial addition of traditional human fears and anxieties like death, body image and social norms into the post-human milieu makes for better drama, and setting most of the action in a more-or-less recognizably turn-of-the-21st-century environment lets Stross shift his attention from producing technical fireworks to actually writing the story. Of course, it also allows him to make fun of the more ridiculous aspects of our own society, which is always good for a few laughs.

  • Finally, some articles of note.

    First, for research-related work, there’s:

    For teaching:

    For fun:

    Short Story:

Cloaking devices!

Via Boing Boing I see that two different papers published in Science yesterday describe how to build cloaking devices. Obviously, it’s all still at a very preliminary stage, but it’s still undeniably cool. Note, though, that there are certain caveats; for example, in the second abstract linked above the author says that “Ideal invisibility devices are impossible due to the wave nature of light”.

That’s true, but actually a bit misleading; even were it not for the wave nature of light, an ideal invisibility device would be impossible for geometric reasons elucidated by Gromov in 1983. Chris Croke (who was on my orals committee, incidentally) mentions this result (as Theorem 2.1) along with many related theorems in this chapter from Geometric Methods in Inverse Problems and PDE Control, which is slightly more readable but still probably over the head of the non-specialist. Basically it says that if you can make it appear to an outsider that light (or anything that follows a “shortest path”) is traveling in a “straight line” through a particular region of space, i.e. not being intercepted and re-broadcast or bent around a spaceship or any other object in the region, then it really must be traveling in a “straight line”. (Of course, this presupposes that space(time) is actually Euclidean, which it isn’t, but locally it’s probably close enough for the result to still hold. Also, the scare quotes are due to the fact that I don’t really mean a straight line, since space isn’t Euclidean, but rather a geodesic, for those that know what that means.) That doesn’t mean that you couldn’t focus all the irregularities on a very small patch of the boundary (which would only be visible by an incredible stroke of luck on the part of the observer) or make the difference from expectation smaller than the observational error threshold, this latter being the (presumable, based on what I can glean from the abstracts) methodology of the Science papers.

Anyway, the point of the above paragraph is that a perfect cloaking device (i.e. one that is completely undetectable) is impossible for purely mathematical reasons, but it’s still pretty damn cool that close approximations seem to be in the works.

Scripting Vienna

I know it’s not exactly the usual fare here, but today I’m throwing some super-geeky stuff your way. Those that don’t care about RSS readers, AppleScript or my crappy programming skills should probably just skip this entry.

For quite a while now, NetNewsWire Lite, the free version of the popular NetNewsWire, has been my RSS reader of choice. Actually, aside from a semi-disastrous and relatively brief fling with BottomFeeder at the very beginning of my RSS consumption, NNW has really been the only RSS reader I’ve tried.

Anyway, I’ve been thinking recently of upgrading to the full version (and paying the currently-discounted price of $19.99), but, for one reason or another (probably cheapness), decided to check out Vienna, an open-source alternative, and test-drive it against the 2.1 beta of the full version of NNW. As it turns out, the two are, from the standpoint of my usage pattern, practically identical, and Vienna actually suits my aesthetic sense better than NNW. The big feature they both have over NNW Lite is a Webkit-derived tabbed browser living inside the app (so you don’t have to open articles in your browser), which is handy. NNW has synching with NewsGator, which I thought might be useful, since I do a lot of browsing on my Nokia 770, but I hate the NewsGator interface, so what’s the point? The other big thing NNW has going for it is that it can integrate posting to del.icio.us, either directly or through Cocoalicious, Postr, Pukka or your browser…which brings me to the point of this post.

Now, as I’ve discussed before, I do post links to del.icio.us, but do so by way of Spurl, which isn’t supported by NNW. So the del.icio.us support doesn’t really help, unless I want to give up on Spurl, which I don’t. But such are minor impediments to the procrastinating powers of a person facing a gigantically important oral exam in less than two weeks. So I decided to try my hand at an AppleScript solution. Caveat: despite almost 7 years of Mac ownership, I’ve never really done anything with AppleScript, mostly because I seem to have some sort of weird, highly-specific Ludditism towards scripting (and, obviously, because I’m not such a power user that it’s really necessary), so these scripts are probably pretty poorly-written. At first I was scripting NNW, but, as time went by, it became more and more apparent that, in this regard, NNW would really be no better than Vienna; in fact, the two were virtually identical for my usage, so I ended up writing more or less the same script for Vienna. And since that worked out so well, and since I’d gotten into the scripting mood by this point, I ended up writing a script to facilitate the Vienna-to-ecto blogging process as well.

Anyway, operating on the perhaps naïve assumption that someone might find them useful, I’m reproducing these three scripts below the jump, along with a short description of each. And if you have any suggestions for improvements, let me know.

Read the rest of this entry »

Fun with polyhedra

If you ever wanted to get a sneak peek at how graduate students in mathematics waste time, well, now’s your chance.

Links and things

It’s been a week since I put it up, so I figured I ought to give some explanation of the “Linklist” that I’ve added to the main page. I haven’t tested it in IE, but, when you hover your mouse over the logo, a dropdown list of links to various stuff around the web is supposed to appear (if it doesn’t, you can always just click the logo and be taken to another version of the list).

A linklog is something I’ve played around with before, but I was never entirely satisfied with how it worked. Since, these days, I usually don’t have time to write posts but still occasionally come across interesting links that I would like to share, this seems like a reasonable compromise.

It’s implemented entirely in CSS (with some JavaScript only to fix the fact that IE doesn’t fully implement all of CSS), patterned after A List Apart’s Suckerfish Dropdowns. I wanted a dropdown because I wanted to add some daily links to the main page without cluttering the thing (let’s just say we’ve been down that road before); since that’s impossible, the next best thing would seem to be to make it cluttered only when you want it to be. And CSS over JavaScript (the usual way to implement dropdowns) is obvious, since JavaScript is evil.

Since I’m doing some housekeeping anyway, I should point out that the Tools and Photographs pages have been updated recently and there’s now a crude site zeitgeist at the bottom of the main page.

I’m actually somewhat proud of the background image (which is a modified photo of the blackboard in my office) and the button (which has a virtually-invisible Hopf link in the background), but I’m not entirely sure I’m happy with this particular implementation. One problem is that I can’t seem to get the background image under 180 KB without sacrificing its subtle transparency (which I’d rather not), which sucks for anybody still (horrors!) on dialup. Also, grey is relatively unobtrusive, which I want, but also sorta, well, grey and boring. So if you have any suggestions for how to make it look better, let me know.

Anyway, as for the implementation of the links themselves, they’re collected using Spurl, which is one of the myriad social bookmarking sites out there. Well, that’s not entirely true; I’ve got my Spurl account set up to forward everything along to del.icio.us (another social bookmarking site). Then I’m republishing the RSS feed from my del.icio.us account using the feedList plugin for WordPress.

Why all the contortions? Because it actually makes everything really easy. Rather than having to fire up ecto (or, God forbid, WordPress’s editor) every time I want to post a link (which is what I did the last time I tried this linklog experiment), I just hit the “Spurl!” button on my toolbar whenever I read something interesting, fill in category and tags and write a short description, and the rest is automatic. And scraping an RSS feed is better than going the JavaScript route because, again, JavaScript is evil.
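For the curious, the feed-scraping step really is as simple as it sounds. Here’s a minimal sketch of what a plugin like feedList does under the hood (my own toy example with a made-up feed, using only the standard library, not the plugin’s actual code):

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed, standing in for the real del.icio.us feed
RSS = """<rss version="2.0"><channel>
  <title>linklog</title>
  <item><title>Hyperbolic soccer ball</title>
        <link>http://example.com/soccer</link>
        <description>A paper model of the hyperbolic plane.</description></item>
  <item><title>Gauss linking integral</title>
        <link>http://example.com/gauss</link>
        <description>An integral formula for the linking number.</description></item>
</channel></rss>"""

def scrape(feed_xml):
    """Pull (title, link, description) triples out of an RSS 2.0 document."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(item.findtext("title"), item.findtext("link"),
             item.findtext("description"))
            for item in channel.findall("item")]

# Render each item as a list entry, roughly what ends up on the page
for title, link, desc in scrape(RSS):
    print(f'<li><a href="{link}">{title}</a>: {desc}</li>')
```

In real life you’d fetch the feed over HTTP on a schedule and cache it, but the parsing-and-rendering core is no more than this.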

So why am I scraping the del.icio.us feed rather than the Spurl feed? That gets into the heart of the distinctions between Spurl and del.icio.us. They’re both nice tools, but they do different things well. Spurl allows both descriptions and automatically-included snips and maintains links in rigid categories, which makes for better posting. Plus, it saves a copy of all my links, which allows full-text search and eliminates linkrot problems. All of these things make it the much better choice if you ever want to go back and actually find and read some link you came across six months ago. del.icio.us, on the other hand, takes the “social” part of “social bookmarking” much more seriously: it makes it extremely easy to see who else made note of the links that you did (which is a great way of finding other interesting links), allows much more flexible bundling of tags and produces far more customizable means of republishing. So, even though I’m essentially posting the same links to both places and both Spurl and del.icio.us serve nominally the same purpose, I’m actually using them in quite different and complementary ways.

And yes, I know these things have been around for a while. Hell, I’ve had a Furl account for almost two years. But del.icio.us, by itself, is pretty limited, and it wasn’t until recently that I discovered Spurl (which implements all the good things about Furl and some extras besides).

Anyway, speaking of innovation (to the extent that the above comprises innovation) and the linklist, I would like to draw your attention to one link I posted there yesterday: the Wiki, which gives extraordinarily detailed (given that the technique was only made public in the last week) instructions for installing Windows XP on one of the new Intel-based Macs. Let’s just say that, for me, this is practically a dream come true. I’ve been a Mac user and owner for close to seven years and love the dependability of Apple’s hardware and the usefulness of (most of) their software.

In fact, the only complaint I have is that some software just doesn’t run on OS X and emulators generally aren’t worth the trouble (though Q looks interesting if they ever add more features). And I’m not talking about games; I’m thinking more along the lines of device drivers, file uploaders and various cutting-edge apps. So a dual-boot Mac/Windows machine would be excellent (Linux stuff I can, generally speaking, do in OSX’s terminal or Darwin’s X11 environment). I’ll definitely be looking into a MacBook (though hopefully they’ll have come up with a less committee-ized name by then) when they get the second or third generation rolling.

(Incidentally, I think Apple’s made the smart move in not trying to prevent people from dual-booting Windows from their Macs: this development will only encourage more people to buy Macs. If Microsoft is smart, they’ll do the same, since that’s the only way they’ll get any money out of me or a lot of other people like me. Not that I have any particular animus for Microsoft, but, though I’m as aware of the shortcomings of both OSX and Linux as anybody this side of drunkenbatman, seven years without the blue screen of death or any major hardware or software failures, coupled with seven years of long-distance troubleshooting for my PC-owning parents, has ensured that I’ll never voluntarily go back to a Windows-only lifestyle.)

Soccer anyone?

Hyperbolic soccer ball

That there is what one might euphemistically call a hyperbolic soccer ball. It’s a model of the hyperbolic plane that I made using this template. For those that aren’t up on their non-Euclidean geometry, the hyperbolic plane is a 2-dimensional space of constant curvature -1 (for comparison, the sphere has curvature +1 and a regular plane has zero curvature); it was the first example of a consistent geometry in which Euclid’s parallel postulate doesn’t hold.

The above model is based on the standard soccer ball pattern, which has black pentagons surrounded by white hexagons. That pattern works nicely on a sphere, but you can’t flatten it out; to flatten it, you have to exchange the pentagons for hexagons and then you get a tiling of the regular flat plane. Going one step further gives you the above picture: black heptagons surrounded by white hexagons, which, as with the regular soccer ball, can’t be flattened out without ripping the paper.
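A back-of-the-envelope way to see why each pattern matches its surface: total up the flat interior angles meeting at a vertex, where one n-gon meets two hexagons. Less than 360° means an angle deficit (the paper has to pucker into a sphere), exactly 360° tiles the flat plane, and more than 360° means an angle excess (the paper has to ruffle into a saddle). This little check is my own illustration, not something from the template:

```python
def corner_angle(n):
    """Interior angle of a regular Euclidean n-gon, in degrees."""
    return 180 * (n - 2) / n

# At each vertex of a soccer-ball-style pattern, one n-gon meets two hexagons.
for n, surface in [(5, "sphere (deficit: positive curvature)"),
                   (6, "flat plane (angles fit exactly)"),
                   (7, "hyperbolic plane (excess: negative curvature)")]:
    total = corner_angle(n) + 2 * corner_angle(6)
    print(f"{n}-gon + two hexagons: {total:.2f} degrees, {surface}")
```

The pentagon case comes to 348°, the hexagon case to exactly 360°, and the heptagon case to about 368.57°, which is why the heptagon model can’t lie flat.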

See the bigger version for a closer look, where you can more easily discern the obscene geoboard, platonic solids and other geometrical miscellanea cluttered in the background (all extensively documented in the notes to the Flickr version). There’s also another view, which gives a better sense of how the hyperbolic plane is sort of a bunch of saddle shapes all nested together. The best visual model might be the crochet model made by Daina Taimina. Of course, all of these models are incomplete: the actual hyperbolic plane extends out forever but gets totally curled up on itself when you try to embed it into regular 3-dimensional space.

And yes, before you ask, I do get paid for this.

Daddy’s got a new toy

Specifically this, which showed up yesterday, after months of delays and three weeks after I ordered it…which is to say, if you’re thinking of getting one for yourself, you might just want to spend the extra $50 and get it from your local CompUSA, which, as of last Friday, should have them in stock.

As for the 770 itself, it’s an impressive little toy, with a screen that frankly has to be seen to be believed. I’ve never, ever seen such legibly tiny text. Connecting to a wireless network was a snap (as well it should be, given that Nokia is marketing this thing as an internet tablet), the battery seems to last longer than the advertised 2.5-3 hours, and new apps are easy to install from maemo. Even the text entry is relatively easy, given the constraints, and certainly good enough for quick notes and emails (for example, this entry was written entirely on the 770). My only gripes so far are about the lack of 802.1X support (which I imagine is coming) and that Nokia really should have put a bit more RAM in these things.

I’ll almost certainly have more to say in the coming weeks/months, but, for now, here are some screenshots:

770 Desktop

selling waves from the 770’s browser

selling waves in full-screen mode

Guy de Maupassant’s “The Moribund” in FBReader

Wikipedia Wars

Sean Lynch and John Lopez are both more or less correct about Wikipedia, though they might not agree with each other. Lopez:

Wikipedia is the Internet equivalent of a public toilet. Anyone can use the facilities, including that subset of folks who simply splash feces around for the fun of it, or who are too dumb or ill-bred to get everything inside the rim.

That’s true and it’s a serious problem, but it’s not an insurmountable one. The same could be said of the Internet as a whole, but, while there are plenty of places online where feces-splashing seems to be the primary objective, there are also plenty of quality websites that provide content you can’t find anywhere else. The same goes for Wikipedia. As long as you know going in what Wikipedia is and how it works, it’s easy to use it as an effective tool. For example, you’d probably be better off asking a 5-year-old about some controversial political or social question than looking it up on Wikipedia, because the only people with both the time to write a long Wikipedia entry about something controversial and the perseverance to defend it against every edit are true believers pushing an agenda. But I’ve almost always found the Wikipedia articles on advanced math topics accurate and useful; to pick an example more or less at random, the article on fiber bundles is nice and straightforward. Obviously, if you really want to do anything with fiber bundles, you need to look in a textbook, but you wouldn’t use a Britannica article as the sole basis for your research, either (and I’m pretty sure the phrase “fiber bundle” doesn’t appear in the Britannica or any other print encyclopedia).

That having been said, Lopez’ point about the public goods problem is real, especially since the real experts aren’t wasting their time with Wikipedia in the first place:

Wikipedians on the other hand are busy correcting extra plurals or adding “Wikilinks” to their entry, because they lack both the motivation and the aptitude to add content. And I’m not about to help them, since I have better things to do than reproduce material from expert sources that’re only a mouse click away from anyone who gives half a damn.

Of course, real experts probably aren’t wasting their time with the Britannica, either, but the problem is more extreme with the Wikipedia. In fact, I would tend to agree with Lynch that this is Wikipedia’s biggest problem:

The reason Wikipedia is not as good as it could be is because of its incestuous nature. External links are discouraged in favor of internal links to other content within Wikipedia. The major problem with this is that the smartest experts in any given field probably have their own web sites and can’t be bothered to write in Wikipedia, so why should random people be paraphrasing information that’s already freely available elsewhere? Decentralized knowledge is not about letting anyone edit your one site. It’s about finding and linking to the best content that’s available. The best most people writing on Wikipedia do is paraphrase what they find elsewhere. If paraphrasing is so great why do we need hyperlinks in the first place?

Having participated in and witnessed innumerable debates about Wikipedia over the last couple of years, I can be pretty confident in asserting that Wikipedia consistently gets highest marks for (a) timeliness, (b) breadth and (c) ease of use. These are all areas where Wikipedia easily beats the Britannica’s pants off; sure, the Britannica’s article on Hurricane Katrina will probably be better written and more accurate when it gets published in 2007, but it won’t be free and searchable, wasn’t available when people were really interested in the subject and probably won’t try to break down all the hurricane-related casualties by county. I’m not saying Wikipedia is better than the Britannica, just that there are some things Wikipedia does better.

Given that Wikipedia has these inherent advantages, I find it odd that its entire modus operandi seems to be predicated on trying to replicate the Britannica model of being a one-stop source of information. Much better, as Lynch points out, to emphasize its other big advantage over the Britannica: hyperlinks. I’m sure the official discouragement of linking off-site is because the folks at Wikimedia don’t want their baby to become “just another search engine” that gets swallowed up and spit out by Google and Yahoo, but (a) there are a lot worse examples to follow than Google’s and (b) the search-engine market is due for some serious diversification. Using Google is often a real crapshoot; for example, if you’re looking for “fiber product,” you’ll get 8 pages of stuff about the textile industry before the first relevant link appears. Google realizes this, which is why it was a smart move to separate Google Print, Google Scholar and Google Maps (which they’re now calling Google Local) from the regular web search. Just going by my own experience, while I still use Google for general search purposes, I find myself using Google Print, Google Scholar, A9 maps, IceRocket, Technorati, the Internet Archive, memeorandum, IMDB, Mathworld, JSTOR, Whois and, yes, Wikipedia all the time, because each is good at finding certain things I’m interested in (and if I ever come across a good sports-specific search engine, I’ll use that frequently, too). And let me tell you, there’s definitely a niche for the Wikipedia that Lynch envisions:

If Wikipedia had strived to be an editable-by-anyone collection of links to the best information and annotations of those links, it would be much more useful than it is now.