Archive for the 'Geek Talk' Category

Quote of the day

As you’ve probably already noticed if you’re accessing the main page, I’ve added a new feature which displays, across the top of the main page, a quotation Curt or I have found interesting. These quotations should follow a standard format: a text block consisting of the quoted material, with the name of whoever said/wrote the quote right-justified on the next line. If the quote comes from somewhere on the web, the author’s name should be linked to that location. The intention is for these quotes to be updated daily (thus “Quote of the day”), but whether that pace is sustainable remains to be seen.

There’s also an archive of the quotes of the day, in which each quote is listed below the date it was posted, in the same format I just described; if two quotes are posted on the same day, each appears separately under that date. The pound sign next to each date is a quasi-permanent link to that quote, in case anybody wants to link to it.
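For the markup-inclined, the whole arrangement boils down to something like this (the class names, id and URL are invented for illustration; my actual templates differ):

    <div class="qotd" id="quote-2005-08-15">
      <blockquote>
        <p>The quoted material goes here.</p>
      </blockquote>
      <!-- attribution right-justified on the next line; linked if the source is on the web -->
      <p style="text-align: right;"><a href="http://example.com/source">Whoever Said It</a></p>
    </div>

    <!-- in the archive, the pound sign next to the date points at the quote's id -->
    <p>2005-08-15 <a href="#quote-2005-08-15" title="quasi-permanent link">#</a></p>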

Finally, there’s an RSS feed specifically for the quotes of the day and I’ve also added the quote of the day to the regular RSS feed. Getting these to work required changing my RSS feed templates by hand as well as recklessly editing my .htaccess file (and killing the entire website a couple times in the process), so let me know if they cause errors or if you’re subscribed to an RSS feed and the quote of the day doesn’t show up.
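By way of illustration only (the URL and filename here are placeholders, not my actual setup), the .htaccess side of things amounts to a mod_rewrite rule along these lines:

    # hypothetical: serve the quotes-only feed from its own hand-edited template
    RewriteEngine On
    RewriteRule ^quotes/feed/?$ /quotes-rss2.php [L]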

I’m using a slightly modified version of the Miniblog plugin to do all this, along with hand-editing of stylesheets, templates and the always-intimidating .htaccess file. Probably not the most elegant solution, but it seems to work so far.

Street-level images + Google maps

Now you can not only find coffee shops offering free wireless internet on Google maps, you can see exactly what the coffee shop looks like before ever having to venture out into the cruel and unforgiving sunlight. In my post on Google Print I mentioned Amazon’s new block-level view for maps and expressed the following wish:

In fact, one can only hope that someone out there is working on combining Google Maps’ search flexibility and aerial photographs, Amazon’s street-level pictures, JiWire’s hotspot finder and the Gmaps pedometer into one world-destroying über-map.

Well, aside from integrating the pedometer into the rest, it’s been done. In a comment I mentioned MetroFreeFi, but that doesn’t give you the block-level view. Instead, just follow these instructions (ð: MAKE: Blog) and then search for, say, “wifi in philadelphia” on Google Maps. Like so (click on the C, D, or E pins to have the block-level view show up). Google’s using Wi-Fi-FreeSpot to find free WiFi hotspots on their maps, which doesn’t get all of them, but does get quite a few.

Sidenotes

You may have noticed in my last post that I’ve been trying out a new (for me) form of footnote (if you couldn’t figure it out, that’s what the writing in the right margin is). I had started thinking, after my last footnoted post, about how the way I’ve been doing footnotes around here really wasn’t ideal from the reader’s standpoint, nor, for that matter, from the writer’s.

From a reader’s perspective, footnotes on web pages or blog posts (especially long ones) are problematic because the footnote typically sits at the very bottom of the page, which can be quite far from the text being footnoted. Hence, it can be a real pain in the ass to scroll down to the footnote, read it, then scroll back up and find where you left off in the main text. This isn’t so much a problem in a regular book, because it’s usually easy to remember roughly where on the page you left off, but online, unless the entire page fits on your screen without scrolling, it’s much more difficult.

The standard solution to this problem, as exemplified by John Gruber’s post, is to make the footnote marker (i.e. the superscripted numeral) into a link to the footnote text at the bottom of the page, then provide a link at the end of the footnote text that sends you back to your place in the main text. This second step is actually unnecessary (you can just use the “Back” button in your browser), but the overall approach is a relatively good solution to the readability issue.
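Schematically (the ids here are invented for illustration), that looks like:

    <p>A footnoted claim.<sup id="fnref1"><a href="#fn1">1</a></sup></p>

    <!-- at the very bottom of the page -->
    <ol>
      <li id="fn1">The footnote text. <a href="#fnref1">&#8617;</a></li>
    </ol>

(The &#8617; entity renders as the little return arrow that jumps you back to your place.)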

However, creating all these links is pretty time-consuming and leaves lots of nasty-looking HTML markup in your text, so it’s not so good from a writer’s perspective. Furthermore, it doesn’t seem terribly user-friendly to make your readers constantly click links back and forth through a single document just to be able to read it (and it completely destroys the functionality of the “Back” button as a way to get back to whatever they were reading before coming to that document). So I started checking around to see if there were some better or easier solutions. Although there are a couple of WordPress plugins that simplify the creation of footnotes, they don’t really address my readability concerns (which, as any web designer worth his salt will tell you, are the more important concerns).

Subconsciously, I think I already knew what I wanted: some sort of simple, frames-free implementation of the annotations to this version of Eliot’s “The Waste Land”. Or, even more ambitiously, something like the notes to David Foster Wallace’s essay “Host” as they appear in the printed edition (which have been described as “hyperlinks in print”). Of course, subconscious desires require some sort of outside stimulus to rise to the surface, and I was fortunate enough to come across Peter-Paul Koch’s post.

Koch talks about some of the problems with footnotes on the web, links to some articles about footnotes, notes that (oddly enough) footnotes don’t exist in HTML or XHTML and, most importantly, argues that “sidenotes” are the way to go on the web. All I had to do was see the word “sidenotes” and thoughts of The Waste Land and the DFW article immediately came to mind. So the question became how to do them. Andreas Bovens and Timothy Groves also like sidenotes, and Groves whipped up some JavaScript to make generating the things pretty easy. Still, JavaScript isn’t really ideal either, because lots of people turn it off in their browsers.

Fortunately, Beau Hartshorne posted links to his solution to the sidenote problem in the comments to all three of the above posts. His solution is, I think, the best of any that I’ve seen (and, with minor modifications, is what I’m now using): the sidenote is generated at exactly the level of the main text the note pertains to; it’s easy to implement with some simple CSS; and it uses the small element, which has been suggested as the “right” tag for footnotes and has the side benefit of allowing even RSS readers and other non-CSS browsers to tell that the sidenote is not normal text, even though it may not be immediately obvious what it is. → It should be noted that the “standard” footnote solution generates invalid RSS, because relative URLs aren’t allowed inside content tags. Hartshorne’s approach even makes it easy to put sidenotes on both sides of the text. Like so ← Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
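In skeleton form (a simplification, not Hartshorne’s exact code; the widths and class name are invented), the trick looks like this:

    <style>
      /* the main column leaves room for notes in the right margin */
      #content { margin-right: 14em; }

      /* the note floats out of the text flow, into that margin,
         at the height of the text it annotates */
      small.sidenote {
        display: block;
        float: right;
        clear: right;
        width: 12em;
        margin-right: -14em;
        font-size: 0.85em;
      }
    </style>

    <p>The sidenoted text <small class="sidenote">The note itself; because
    it's a small element, even non-CSS browsers can tell it isn't normal
    text.</small> and the main text continues without interruption.</p>

A mirror-image class (float: left with a negative left margin) handles the other side of the text.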

In an effort to avoid confusion, I’ve also been adding little arrows (→ and ←) next to the sidenoted text to indicate that that’s where the note is coming from, as well as a much larger arrow of the same sort as background to the sidenote itself to create a sort of visual link. Of course, it might be even snazzier to highlight the sidenoted text in various colors and then color coordinate the sidenotes to the relevant highlighted text (like in the DFW article), but I’m afraid that’s more work than I really want to do.

As it is, I’m pretty happy with how the whole thing’s turned out, but if you have any suggestions to make the sidenotes better, let me know. And if you hate them and want me to go back to footnotes, or even if you hate the whole idea of footnotes and want to implore me never to write another ever again, you can still let me know (though I may ignore you).

Opening up

If you haven’t checked it out yet, Paul Graham’s latest essay is quite good. He talks about what businesses can learn from the open source movement, identifying three key lessons that might enhance corporate productivity: professionalism is overrated, working in a traditional office sucks, and good ideas generally percolate up from the bottom rather than being handed down from above. On that last note, Graham makes a particularly astute observation (though not, admittedly, an original one):

Ironically, though open source and blogs are done for free, those worlds resemble market economies, while most companies, for all their talk about the value of free markets, are run internally like communist states. → I should point out that I’m not one of those open source zealots who thinks anything and everything involving the term is automatically good and anything whose source is closed is bad, but, as Graham notes, there are certain unavoidable parallels to be drawn between open source and, to use Popper’s term, the open society.

Graham goes on to note that, just as in a Communist state, traditional corporate employers have taken on a very paternalistic role these days:

Nothing shows more clearly that employment is not an ordinary economic relationship than companies being sued for firing people. In any purely economic relationship you’re free to do what you want. If you want to stop buying steel pipe from one supplier and start buying it from another, you don’t have to explain why. No one can accuse you of unjustly switching pipe suppliers. Justice implies some kind of paternal obligation that isn’t there in transactions between equals.

Most of the legal restrictions on employers are intended to protect employees. But you can’t have action without an equal and opposite reaction. You can’t expect employers to have some kind of paternal responsibility toward employees without putting employees in the position of children. And that seems a bad road to go down.

And, as he goes on to note, “[i]t’s demoralizing to be on the receiving end of a paternalistic relationship, no matter how cozy the terms. Just ask any teenager.” Note that what Graham has to say about “justice” not being a consideration in “transactions between equals” is equally applicable in other realms; the “fair trade” nonsense that continually arises in the context of the third world and much of the affirmative-action talk spouted by self-appointed saviors of the black race are just two examples.

Anyway, speaking of open source, Cory Doctorow had an excellent post yesterday explaining why open source DRM is impossible. He correctly points out that there’s a big difference between encryption like SSL (which certainly admits of open source implementation) and digital rights management:

In SSL you have a sender, a recipient and an attacker. The attacker is never supposed to be in possession of the cleartext. It doesn’t matter, however, if the recipient gains access to the cleartext. That’s why you can have open source SSL.

In DRM you only have a sender and an attacker, who is also the recipient. DRM relies on the attacker/recipient only gaining access to the cleartext while their machine is in the grips of non-user-accessible code that restricts what they can do with the cleartext (in particular, DRM seeks to ensure that the cleartext can’t be saved back to the drive while still in the clear).

And so, obviously, DRM implementations can’t, by definition, be user-modifiable. And there are probably a lot of people out there who ain’t gonna like it too much when they figure it out. Including, ironically, publishers. → see also: my rant, Adam Engst’s rant, Dan Burk’s paper (PDF) and Cory Doctorow’s classic rant.

Oh, and just for the record: whoa.

Search (to) your (heart’s) content

I’m assuming everybody is already familiar with Google’s plans to scan all the books in the libraries of Stanford, Harvard and the University of Michigan and make the contents of all those books searchable. That’s not a problem for books in the public domain or those whose publishers have given permission for them to be searchable, but publishers aren’t too happy about the fact that Google plans to scan every book and make them all searchable via Google Print. Now Google is planning to hold off until November on scanning copyrighted works for which they haven’t already received permission, to give publishers a chance to opt out (ð: eWeek).

The Association of American Publishers and its extremely annoying chair, Patsy Schroeder, are moaning about this opt-out policy:

“The great concern of not just publishers but the entire intellectual property community is Google’s turning copyright law on its head,” [Schroeder] said. “All the burden is now on the rights holder.”

Okay, she might have a point except for one important thing: if Google turns up a search term in a copyrighted work they haven’t received permission to reproduce, you only get a couple of sentences of context around that search term. Even in books for which they have received permission, you only get a couple of pages (sounds complicated, but these screenshots pretty much tell the whole story). And, having tried it for quite a while today in a couple of different books, I can definitely say that trying to read even just ten pages on each side of a search term in a book for which permission had been given is not only extremely laborious, but probably impossible.

In any case, a couple of sentences from a 300-page book is pretty tiny, certainly no more than is regularly excerpted in book reviews, scholarly papers, etc. and nobody ever raises a fuss about those usages. Of course, that’s exactly Google’s defense: that what they’re doing is covered by fair use. Patsy disagrees. Which pretty quickly boils down to a legal argument, which I don’t particularly care about (and which isn’t clear-cut one way or the other).

The more important issue is this: even if what Google’s doing is technically illegal, why in the world would any publisher object? Google Print not only makes the books you already own easier to use, but provides great advertising for new books. As is my wont, let’s see an example of the latter first: today I plugged my father’s name into Google Print and was surprised to see that he’s mentioned in a couple of books. It turns out that the only interest I would have in those particular books is pure morbid fascination with people who take words like “process” way too seriously, but, had I been more seriously interested in any of the books that popped up, there were links to buy them from several different vendors right there. In other words, this is a great way to find new books on topics of interest and, therefore, great directed advertising for book publishers. Put a Nokia 770 in my hands, the entire Stanford library on Google Print and the desire to learn about some new topic in my head (it happens every once in a while), and I’ll be buying books left and right (or, rather, I would be, if publishers could get their goddamned act together and back some universal electronic publishing standards [ð: TeleRead]).

As for the former point, between my brother, my father and myself, we probably own every book Mark Twain ever wrote. My father also owns Michael Crichton’s State of Fear, which quotes Twain as having written:

There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.

It’s a great quote, but, despite knowing Twain wrote it and owning probably every Twain book ever, it would take forever to track down just exactly where he wrote it. However, judicious use of Google Print demonstrates that this quote comes from Life on the Mississippi and, furthermore, that it concludes a very amusing little paragraph. Now that’s what I call making my books more useful! Point is, publishers should be thanking Google for both the directed advertising of their books and for making those same books better (and, therefore, more desirable) at no cost. Instead, there are rumblings of lawsuits.

Note that Amazon‘s (selective) full-text search of the books they sell is the only comparable (though less ambitious) service already available. Of course, it should come as no surprise that Google and Amazon are at the front of this curve, since they’re about the only companies out there with both the vision to dream up something like this and (more importantly) the resources to implement it (though a nod of the head is due to the resourcefulness of both Project Gutenberg and Wikimedia).

Speaking of cool Amazon stuff, they’ve now got a feature in their maps section that allows you to see street-level photographs of locations on the map. It’s only available in 24 US cities (so far, anyway), but basically what they did is drive down a whole bunch of streets in a lot of big cities with digital cameras rolling and hooked into a GPS receiver, so you can not only see where the bar I went to last night is on the map, but what it looks like (not much). Of course, Google Maps are easier to search, but the street-level view is a hell of an idea and a nice complement to Google’s aerial photos.

In fact, one can only hope that someone out there is working on combining Google Maps’ search flexibility and aerial photographs, Amazon’s street-level pictures, JiWire’s hotspot finder and the Gmaps pedometer into one world-destroying über-map.

And then, on sixth down, the Eagles gained 4 yards

“It has nothing to do with football,” [Mildred] Bazemore said. “It has to do with the mathematical concepts that you’re studying.”

GRRRRAAAAAGGGHHH!

That’s approximately how I reacted to the above quote, taken from a news report about a particularly boneheaded standardized test question devised by the geniuses at the North Carolina Department of Public Instruction (hat tip: FO). The question asks students to determine a football team’s average gain on the first six plays of some hypothetical football game. Unfortunately, this hypothetical game doesn’t abide by the most basic of football’s rules:

The team opened with a 6-yard loss, a 3-yard gain and a 2-yard loss, which would have made it fourth down with 15 yards to go for a first down. The team’s fourth play was just a 7-yard gain, yet it maintained possession for a 12-yard gain and a 4-yard gain on two additional plays.
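For the record, the arithmetic the question is fishing for is just:

    \text{average gain} = \frac{-6 + 3 - 2 + 7 + 12 + 4}{6} = \frac{18}{6} = 3 \text{ yards per play}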

Now, it doesn’t particularly bother me that the test question is badly written (and pretty much guaranteed to confuse anybody with an ounce of football awareness); these things, though unfortunate, do happen, no matter how much editorial oversight there is,[1] as anybody with an ounce of teaching experience will tell you.

No, what gets my blood boiling is the nonchalant response on the part of Ms. Bazemore, the chief of the DPI‘s test development section. This notion that such a subjunctive test question “makes sense mathematically” and “has nothing to do with football” is, I submit, symptomatic of the educational institution’s generalized and deplorable mistreatment of mathematics at both the primary and secondary levels.

Okay, admittedly, I’m being a bit hyperbolic here, but the basic point is this: Bazemore’s comments suggest that she believes there is a disjunction between “mathematics” and “the real world” (here embodied by football), that the platonic ideal of (-6+3-2+7+12+4)/6 = 3 is only sullied by the interference of words and ambiguous readings. In other words, she seems to think that mathematics is (or, at least, should be) purely abstract, purely computational and, as a result, utterly boring to anybody that isn’t autistic.

Again, interpolating all of this from some throwaway comment to some undoubtedly bored reporter is a bit extreme, except for the fact that virtually every public school teacher and administrator I’ve had the extremely mitigated pleasure of interacting with holds this exact view (I went to public school K-12, so I couldn’t tell you about private school teachers or administrators). This is especially true of elementary school teachers, who either secretly hate math or are exactly the sort of detail-oriented obsessive-compulsives who loved memorizing their multiplication tables as kids but hated word problems and philosophy classes, but it also tends to hold among middle- and high-school math teachers (which is somewhat more surprising, since those people teach math exclusively, in contrast to their primary-school counterparts).

This all derives, I think, from a poor understanding of what mathematics really is, which is certainly understandable, but the end result is that the misunderstanding is propagated to the next generation for pretty much the same reason it got propagated to the last generation: teachers make math classes miserable, so students not unreasonably conclude that math is miserable.

So what’s the misunderstanding? Basically, the notion that math is conceptually equivalent to memorizing formulas and plugging numbers into them. Certainly, this is the bulk of the content of your average math class in both primary and secondary schools and even in most college math classes below about the 300 level (a range which, needless to say, encompasses all the formal mathematics education the majority of the population will ever receive). Rare indeed is the math teacher who seems to understand and, more importantly, can communicate that mathematics is fundamentally not about plugging numbers into formulas but rather about coming up with those formulas in the first place. No matter which branch of mathematics we look at, from the purely theoretical to the applied, the mathematicians or scientists working in that branch are, fundamentally, taking what they know and trying to synthesize it in some original and creative way to produce some new theorem or formula that better describes the situation. The data that goes into this synthesis may range from the completely abstract to the completely concrete, but the basic process is pretty much the same and totally at odds with the plug-and-chug process, which produces nothing conceptually original.

And yes, I know the traditional objection of the public schools: “That all sounds great in theory, but you can’t even get to that point without memorizing your multiplication tables or simple integrals.” Which is all true, in a sense, but also completely false. It’s probably true that you won’t ever prove the Riemann Hypothesis if you don’t know that 8×9=72 or that ∫cos x dx = sin x + C (though there’s no theoretical impediment), but such a perspective ignores the fact that, at some point in the course of human history, such “elementary” questions were just as mysterious, even to the intelligentsia, as the Riemann Hypothesis currently is and their solutions were just as exciting as a proof of the Riemann Hypothesis would be today.[2] Whether the actual history of such problems is formally introduced into the course of instruction or not (and, despite generally being in favor of such an approach, I do have mixed feelings about it), there’s certainly no reason not at least to try to impart the same sense of mystery and discovery into the proceedings that the original discoverers/inventors of the material experienced. In other words, rather than taking the attitude that “I have a bunch of facts which I will try to cram into your head,” one would like to see more math teachers take the attitude that “I am going to try to give you the support and the tools you need to discover a bunch of interesting and useful facts for yourself” (with the additional side benefit that the students may discover more of those facts than appear on the curriculum). Admittedly, this is supposedly what “New Math” was (partially) about, but the methodology there was (or at least became) entirely wrong; a student’s feelings about math are pedagogically null to his fellow students.

The first step in this path, needless to say, is to try to view “word problems” less as particularly inefficiently coded messages (wherein we encode the “real problem,” which is something like (-6+3-2+7+12+4)/6 = 3, into this ambiguous cipher we call the English language) to be decrypted by the student and more as examples of actual, conceivable problems that might arise in the student’s experience and which can be attacked using various mathematical tools and tricks which he has (or, at least, should have) at his disposal.[3]


1. Though Colby Cosh makes an interesting point in the context of journalism that too much editorial oversight may actually be a bad thing. His entire perspective is extremely interesting, especially since he is both a professional journalist and an experienced and widely-read blogger.
2. Although there’s some question as to whether anybody would actually recognize a proof of the Riemann Hypothesis even if it slapped him in the face, an issue addressed, more or less, in the provocatively-titled “Definitional Drift: Math Goes Postmodern.”
3. And yes, I know I’ve addressed this issue several times before (see “From politics to mathematics and back,” “A beginner’s guide to producing new results in mathematics,” “…you just get used to them” and, tangentially, “Mathematics and sex” for four of the more recent examples), but, as something of a math teacher myself, this is an issue that I think a lot about and, more importantly, I think I’m getting closer and closer to actually expressing myself clearly on the subject.