December 01, 2003

Articulation and Reason

Posted by shonk at 12:31 AM in Ramblings | TrackBack

In the comments section associated with Aaron Hartter's post on free will at No Treason I posted the following in response to a claim made by someone calling himself "The Serpent" that belief in free will is irrational because "free will" is a concept that cannot (at least in his opinion) be precisely articulated:

Though I suspect nobody will completely agree with me, I have issues with The Serpent's implicit claim that belief in something that cannot be precisely articulated is irrational.

For example, I believe that 2+2=4, which most people would consider rational, even though I cannot precisely define what "2" is in a way that seems obvious to the non-mathematician. However, at least in this instance I can precisely define the concept (though there's a strong argument to be made that the structure of logic is, at best, tenuously connected to reality, whatever that is). More troubling, I think most of the choices I make in the course of a day are rational in the sense that, among the possibilities available to me, the course of action I choose is the one that maximizes my perceived personal utility. But I tend to think that it would be metaphysically impossible for me to precisely articulate what my "personal utility" is to another person.

Now, I know I'm putting the cart before the horse a bit with that last example, but the real kicker is this: I think the one thing we could all agree on is that it is rational for me to think that I exist and that I have a mind. However, I am not at all certain that I could precisely articulate what my mind is. Ultimately, all such definitions come down to the following: "My mind is me". Which is rather circular, if you ask me.

My point is not to be a sophist, but rather to explain why the "precisely articulated" standard is probably far too rigorous as a judge of what is rational.

Now, I know this is bucking my recent trend of merely rehashing posts made by others, but I want to explore this idea a bit more and get back to the pseudo-intellectualism that is my usual modus operandi.

Before I do so, though, I probably need to back up a bit and explain a little about my metaphysical framework, because I think it's both a bit unusual and much more common than is usually appreciated. This explanation is far from complete, perhaps even contradictory, and I'll probably disagree with the whole thing by the time I re-read it tomorrow. I get back to the point in the third-to-last paragraph, so feel free to skip ahead.

In the past, I've described myself as a "probabilist" because it's the most accurate, though somewhat misleading, title I can think of. Basically, I think we can only have probabilistic, rather than absolute, knowledge. For example, I think, with a high degree of certainty, that the sun will rise tomorrow because it has risen every day that I can remember, because all astronomical models I know of predict that it will, etc. However, I have to add the qualifier "with a high degree of certainty" because I acknowledge the possibility that the sun may not rise tomorrow: it could explode between now and then, I may have been deceived into thinking it has risen every day that I can remember even though it really didn't, or maybe the astronomical models are all wrong. This sounds pedantic, but really all I'm doing is acknowledging that everything we see, read or remember is only accurate to a probability that is strictly less than 1.

Again, I feel like I'm not making this point as clearly as I would like. Suppose, for example, that I'm 99.9999% sure I'm not living in a solipsist's universe, because everything I've experienced is in accordance with the idea that there is a universe outside of my brain that I (imperfectly) sense through sight, sound, touch, etc. Right there, I've reduced my certainty that anything I see or experience is in accordance with reality from 100% to 99.9999% and, since any knowledge I have about the exterior world is based on this assumption, that probability serves as an upper bound on the certainty I may have in anything I claim to know.

The astute reader will quickly realize that this outlook leads to a rather nasty problem, namely that every bit of knowledge is based on an infinite chain or ladder of probabilities, each less than 1. Getting back to the question of whether the sun will rise tomorrow, my knowledge that the sun will rise tomorrow is based on the 99.9999% probability mentioned above that I'm not living in a solipsistic universe. But it's also based on the probability that my memory of the sun rising in the past is accurate (say 99.999%). In turn, I have to assume that past events are good predictors of future events; but this assumption is based on my confidence that I can extrapolate from past events - the induction principle, which I am, say, 99.99% confident in. Before I can even apply this confidence in the induction principle, though, I have to assert that this is a reasonable situation in which to apply the induction principle, an assertion I might only have a 99.9% confidence in. One might think that at this point I can merely multiply the stated probabilities and then say that I know, with roughly 99.9% certainty, that the sun will rise tomorrow. Unfortunately, it doesn't stop there. Now, I have to assign a probability to the likelihood that my evaluation of this probability is accurate. And then a probability to the evaluation of this probability, and so on, ad infinitum. Troublingly, as I iterate this process, I arrive at an infinite product of terms each of which is positive and less than one - a product that tends to zero. And even at that, the whole thing is contingent on how likely it is that my understanding and application of mathematics is accurate and the probability that mathematics has any relevance to the real world.
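The arithmetic here is easy to check. Below is a minimal sketch: the first four probabilities are the ones stated above, while the per-level confidence for the endless meta-evaluation (0.999 at each further level) is an illustrative assumption of mine, not a number from the argument. Any fixed factor strictly below 1 makes the same point.

```python
# The four confidence levels stated in the text: not-a-solipsist,
# memory, induction, and applicability-of-induction.
stated = [0.999999, 0.99999, 0.9999, 0.999]

product = 1.0
for p in stated:
    product *= p
# Multiplying just the four stated probabilities gives roughly 0.9989,
# i.e. the "roughly 99.9%" figure in the text.
print(f"four stated levels: {product:.4f}")

# Now iterate the meta-evaluation step. Here 0.999 per extra level is
# an assumed illustrative value; the argument only needs each factor
# to be strictly less than 1.
for level in range(100_000):
    product *= 0.999
print(f"after 100,000 further levels: {product:.2e}")
```

Running this shows the product collapsing toward zero as the levels pile up, which is exactly the "nasty problem" described above: no finite confidence survives an infinite ladder of discounts.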

By this point, if you're still reading, I'm sure this sounds like a huge, sophistic boondoggle, but I'm not convinced that it is. On a practical level, we don't need to worry about this infinite chain of probabilities. If some piece of knowledge seems accurate with a high degree of certainty, we can just say that we "know" it and move on. And this accords with how people really think and act. When I'm hungry, I don't go into metaphysics, I eat something. When I'm tired, I don't reason from first principles, I go to bed because I know that doing so is likely to make me feel better even though I know, practically, that sometimes when I'm tired, I can't go to sleep.

We can even deduce morals from this framework and I would argue that doing so more accurately reflects the way we really make moral judgments. For example, I knew that killing was wrong long before I was exposed to rigorous philosophy. In making that judgment, I was intuiting in a probabilistic sense rather than reasoning deductively. In that sense, the whole approach differs quite a bit from the usual relativist cliché, since it allows one to know that something is wrong without having to engage in abstract deduction.

Anyway, getting back to the original point, it should be obvious why I don't think it's necessary for something to be precisely articulated in order to make belief in that thing rational. However, even if you vehemently disagree with the (admittedly imprecise and non-rigorous) metaphysics described above, I think there's good reason for rejecting the notion that rational belief requires precise definition.

The reason is this: nothing (that I'm aware of) can be defined precisely enough that there can be no quarrel about that definition. And yes, I'm aware that if you take that statement too literally, it leads to a paradox. Most everybody would agree that it's rational to believe murder is wrong, but no two people that I've encountered have exactly the same definition of what constitutes "murder". Is abortion murder? Many would say yes, others would say no. Is a soldier killing another soldier on the battlefield murder? Even the most diehard militarist would probably agree that it depends on the war, on the soldier and on the situation. Is it murder when a woman shoots a man attempting to rape her? Well, how did she know he was going to rape her? What about when a man shoots a burglar? Or when a man shoots the meter-reader, thinking he was a burglar?

I think the problem I'm trying to express is twofold. First, so far as I know, nobody's knowledge is complete. One can always think of situations where a person's imperfect knowledge would lead him to do something that an "objective" observer would consider wrong. Second, amazingly accurate and reflective of reality though it is, language is imperfect. When we call something "murder", we are trying to apply a label to an abstract concept, but the only way of understanding this label is by allusion or comparison. As another example, when I use terms like "government" or "the State", I'm referring to an abstract concept, but the mental image that most people get is of some concrete instance of that abstract concept: the White House or the roads or a congressman or the DMV or a ballot or the police. Language, in a sense, is general, whereas our experience is particular; for this reason definitions are never really precise, except perhaps in abstract or artificial environments like mathematics or logic.

As I have, by now, thoroughly muddied the waters of discourse, I'll leave you with two quotes from Wittgenstein's Philosophical Grammar:

What is spoken can only be explained in language, and so in this sense language itself cannot be explained.

Language must speak for itself.

Philosophical Grammar, pg. 40

(While thinking philosophically we see problems in places where there are none. It is for philosophy to show that there are no problems.)
Philosophical Grammar, pg. 47