If I know Chinese does that make me conscious?

“Of course, the failure to pass the Turing Test is an empirical fact, which could in principle be reversed tomorrow; what counts more heavily is that it is becoming clear to more and more observers that even if it were to be realized, its success would not signify what Turing and his followers assumed: even giving plausible answers to an interrogator’s questions does not prove the presence of active intelligence in the device through which the answers are channeled.”

It is fairly evident that Mark Halpern takes a low view of the proponents of “strong AI,” the idea that machines can actually be created which will be functionally indistinguishable from human minds. The above passage is the summation of his argument that not only have programmers manifestly failed to create any computer remotely close to passing the so-called “Turing Test,” but that even passing that test would be insufficient to demonstrate the possession of “active intelligence” on the part of the machine. The “Turing Test” was essentially Alan Turing’s proposal that scientists would have succeeded in creating artificial intelligence if a human interrogator, posing a set of questions, was unable to distinguish the responses of a machine from those of a human. Halpern gives several examples of incompetent judges who, when placed in a position to interview machines, proved unable to recognize the real hallmarks of intelligence in conversation.

He concludes that these failed tests basically confirm the “Chinese room” argument of John Searle. Searle presented a situation in which a man locked inside a room is fed questions written in Chinese. The man, not knowing Chinese but having a lexicon of set responses to the questions posed to him, is able to provide apparently intelligent responses without actually knowing what he is saying or what is being asked of him. This seems to offer strong logical evidence that a test of the sort proposed by Turing is quite inadequate to determine the existence of intelligence or consciousness.
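
To make the setup concrete, here is a minimal sketch of the room as a pure lookup table, in Python. The entries and the fallback behavior are invented for illustration; Searle specifies no particular mechanism, only that the man follows rules without understanding:

```python
# Hypothetical "Chinese room" as a lookup table: canned answers,
# no understanding anywhere in the system.
LEXICON = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def room(question: str) -> str:
    """Return a canned answer; the fallback exposes the room's limits."""
    return LEXICON.get(question, "……")  # silence on anything unanticipated

print(room("你好吗？"))        # fluent-seeming reply
print(room("昨天下雨了吗？"))  # "Did it rain yesterday?" -> no entry, no reply
```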

Of course, almost any test where input can be anticipated and output “pre-programmed” is liable to this vulnerability. It is only by witnessing the continuous presence of apparently intelligent behavior, consistent through a variety of new and unanticipated scenarios, that the presence of intelligence can be inferred. I object, on the other hand, to the way in which Searle elsewhere uses his thought experiment to imply that the existence of consciousness can never be inferred from external signals, that its essence consists in the presence of the “qualia” of perception, the subjective experience of perceiving which is accessible only to the perceiver. But in that case the presence of conscious intelligence could never be inferred at all, for a conscious individual can only vouch for his own awareness, since he can only perceive his own qualia. Needless to say, this is a very solipsistic conclusion, and even if it is metaphysically correct, it is not a useful basis for any potential interactions with other conscious entities. As David Hume taught us all, in some strict sense all inferences are unjustified, but we don’t really have any alternative at our disposal, and the “Chinese room” is only valid to the extent that it holds the evaluation of artificial intelligences to the same standard as human intelligence, rather than subjecting both to unreasonable skepticism.

11 Responses to “If I know Chinese does that make me conscious?”

  1. shonk Says:

    Searle’s Chinese Room argument is sort of bogus, though. It only works if the interrogator asks only questions to which the man has answers in his lexicon. So long as the lexicon is finite and the range of possible questions is infinite (this second being the questionable, but not really unreasonable, assumption here), there are any number of questions the interrogator could ask to which the man would not be able to give intelligent responses. Assuming whoever compiled the lexicon was extremely thorough, it might take a long time for the interrogator to hit upon a question the man could not answer, but that’s basically irrelevant (formally this could probably be reduced to a halting problem).

    Of course, the response to this is to argue that, rather than giving strict rules by which the man in the room is entirely circumscribed (i.e. an extremely long list of statements of the form “If A is the input, output B”), give him a schema for inductively building responses (i.e., rather than having separate entries in the lexicon for “What is your name?” and “What is your mother’s name?”, have some rule that keys on “name”, “what” and the structure of the question and points him to a list of names, along with some other rule telling him which name to pick in which scenario). This has the advantage of allowing infinitely many possible responses, so the above cardinality argument ceases to be relevant. However, I would tend to argue that, for such a schema to always fool the interrogator, it would essentially have to teach the man how to speak Chinese, at which point we’re out of Searle’s Gedankenexperiment (a toy version of such a schema is sketched below).
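
    A toy rendering of that schema in Python, with the single rule and the names invented purely for illustration: one pattern stands in for a whole family of lexicon entries, but as soon as a question falls outside the rules, the room goes silent again:

    ```python
    import re

    # Hypothetical rule that keys on "what", "name" and the question's
    # structure, then picks from a list of names (all names invented).
    NAMES = {"your": "Wang Wei", "your mother's": "Li Na"}

    def schema_room(question: str) -> str:
        match = re.match(r"what is (your(?: mother's)?) name\?", question.lower())
        if match:
            return NAMES[match.group(1)] + "."
        return "..."  # the schema, too, eventually runs out of rules

    print(schema_room("What is your name?"))           # -> Wang Wei.
    print(schema_room("What is your mother's name?"))  # -> Li Na.
    print(schema_room("Why is that your name?"))       # -> ... (no rule fires)
    ```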

  2. Curt Says:

    Right, that is essentially what I was trying to suggest by saying that it is only over a period of time (even if that period of time is only a long conversation that injects unexpected questions) that we can evaluate the presence of intelligence. Of course, even though the Chinese room might not be able to actually mimic intelligence, the critique of it paradoxically also shows the basic limitations of AI as currently constituted, since as far as I know no one has actually been able to really transcend the basic lexicon model in designing artificial minds. The fundamental problem seems clear to me: creating genuinely autonomous, thinking minds seems pretty much inseparable from actually re-creating life. In other words, as obvious as it sounds, machines can’t think because they’re not alive. Only living things seem to be able to generate their own outputs, so to speak.

  3. shonk Says:
    In other words, as obvious as it sounds, machines can’t think because they’re not alive.

    Depending on your definition of life, this verges on the tautologous.

    Not that I disagree, by the way. Any machine that possesses intelligence should, in my view, be considered alive, which will create some real ethical issues if it ever happens.

  4. Curt Says:

    But it’s not tautologous to “strong AI” types, or I assume they wouldn’t be trying in effect to create thinking minds in isolation from the organic systems of living beings. So it’s only tautologous if you have already accepted the claim I am making. I should also say, by the way, that I think the Chinese room is only supposed to demonstrate the theoretical point that coherent answers to given questions are not sufficient indicators of intelligence. It certainly has the problems you point out as a literal experiment, but the inadequacies of the Chinese room are, I think, by and large the inadequacies of computer brains today.

    Parenthetically, I find this mania to replicate our exact minds a little odd. While much of the thought on the subject, like the Turing test and the Chinese room, turns on the difference between mere execution of mental tasks and actual conscious intelligence, in some ways it seems strange that this distinction is so important to us. I mean, we have already gotten computers to do many, primarily calculation-based, tasks that no human could do, and it seems to me that one of the great values of computers has been how they have complemented us and helped us to overcome our own deficiencies. I am not sure what the objective value is of obsessively focusing on creating machines with all the same strengths and weaknesses that we have. On the surface, at least, it seems rather narcissistic and vain (in both senses, perhaps).

  5. shonk Says:
    But it’s not tautologous to “strong AI” types, or I assume they wouldn’t be trying in effect to create thinking minds in isolation from the organic systems of living beings.

    Well, I think that’s a bit much. It’s not at all evident that for something to be alive it needs to be “organic” in the sense we usually mean (carbon-based, eventually dies, eats and shits, etc.). So, at least a priori, it’s not ridiculous to try to create new life “in isolation from the organic systems of living beings”…unless you mean that statement much more broadly than I’m assuming.

    As for your second question, I think the reason a lot of people want “intelligent” computers is because an intelligent computer would (in theory) be able to display the adaptability and capability for pattern recognition that humans are so good at while still being capable of the computational speed and precision which makes computers so useful. Anybody who’s ever cursed the blatant stupidity of computers knows why this might be desirable, but there are necessarily some drawbacks. First, there’s the obvious science fiction conceit of non-benevolent super-intelligent computers. A cliché, yes, but one that can only really be ignored (if you think strong AI is possible) by wishful thinking. Second, there’s your point about building computers in our image. It’s not at all evident that adaptability, pattern recognition, etc. (let’s just label this general skill set “common sense”) and computational wizardry are necessarily compatible in a human-type brain (of course, the latest strong AI fantasy involves quantum computing, which, presumably, would be qualitatively different in important respects from human neural circuitry). Autistics, for example, certainly aren’t known for their common sense. And most people with a lick of common sense get bored doing long strings of calculations all day; whenever possible they subcontract such work out to graduate students, clerks, machines or other subordinates (which raises the amusing hypothetical of an intelligent computer building another, dumb computer to do its calculations).

    As to why AI types are trying to approximate our own way of thinking in computers, though, the answer is rather easy: because the way we think is the only way we know how to think. If they could come up with a better way, they would surely do that, but it’s rather challenging to imagine something with which you have absolutely no experience. Which is the common lament of the differential geometry student (hey: that’s me) trying to visualize, say, six-dimensional spaces (which is rather an easier problem than coming up with an entirely new thinking paradigm; after all, your basic six-dimensional space is just two copies of three-dimensional space glued at right angles to each other).

  6. Curt Says:

    I think the reason a lot of people want “intelligent” computers is because an intelligent computer would (in theory) be able to display the adaptability and capability for pattern recognition that humans are so good at while still being capable of the computational speed and precision which makes computers so useful.

    I understand what you’re saying, and maybe I am focusing too narrowly on the Turing test, but I am objecting to what seems to me the standard proffered: constructing a machine which is indistinguishable from the human mind in both its strengths and its weaknesses, not just the good points of both. In fact, if you were interviewing an unknown subject and asked it to multiply two twenty-five-digit numbers and it produced an answer in two seconds, you would probably assume that it was either a machine or was using a machine to answer the question; it would not be considered a very typical human skill (the sketch below makes the point concrete). I’m not saying that what you are proposing is not a better standard, nor that there is no value in being able to build some human conceptual skills into a machine, but I am just not sure what the value of replicating the human mind is. Whether I am setting up a straw man I leave to your judgment.
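
    A quick demonstration of that point, assuming nothing beyond Python’s built-in arbitrary-precision integers:

    ```python
    import random
    import time

    # Two random twenty-five-digit numbers: trivial for a machine,
    # a dead giveaway in a Turing-style interview.
    a = random.randrange(10**24, 10**25)
    b = random.randrange(10**24, 10**25)

    start = time.perf_counter()
    product = a * b  # exact, arbitrary-precision multiplication
    elapsed = time.perf_counter() - start

    print(product)
    print(f"computed in {elapsed:.6f} seconds")  # microseconds, not two seconds
    ```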

    Which brings me to the other point. You say: “It’s not at all evident that for something to be alive it needs to be ‘organic’ in the sense we usually mean (carbon-based, eventually dies, eats and shits, etc.). So, at least a priori, it’s not ridiculous to try to create new life ‘in isolation from the organic systems of living beings’.” Maybe it would seem less problematic if I said “in isolation from the physical systems that all known living beings possess.” So yes, it is possible that organic carbon-based physical structures are not essential to life, but from an empirical perspective, would it not be reasonable to infer that a living mind would need to possess the physical characteristics that all known living minds share? You might reply that the functional standard of a car, for example, is that you can drive it and make it perform car functions, not that it possesses an engine, lights, wheels, etc. But it’s not like we’re exactly confronting functional success in the AI field here, and one would think that after 30 years of failure they would entertain the notion of trying to recreate the physical system in order to produce the desired output. Of course, there are some researchers out there trying to work towards AI by “reverse-engineering” the brain; Jeff Hawkins, for one.

  7. shonk Says:
    But it’s not like we’re exactly confronting functional success in the AI field here, and one would think that after 30 years of failure they would entertain the notion of trying to recreate the physical system in order to produce the desired output.

    The problem with that is that nobody really understands how the brain works at all. Sure, we know how neurons are connected to each other, that certain parts of the brain seem to do certain things, etc., but nobody really has any idea how it all coalesces from isolated neurons firing randomly to whatever the hell is actually happening inside your head.

  8. Curt Says:

    Right, but that doesn’t change the fact that we’re probably going to have to learn a lot more about neural mechanics before any substantial progress will be made on this whole fantastic AI quest.

  9. shonk Says:

    Absolutely.

  10. Dave Says:

    Real interesting, but the computer analogy of human thinking is way out of date. The explanation comes from modern linguistics and neuroscience, in which the mind acts in a computational manner, but not like a computer. Not that I understand it all, but Steve Pinker has written in “The Blank Slate” about how “mental life can be explained in terms of information, computation and feedback. Beliefs and memories are collections of information like a database—residing in the structure of the brain. Feedback loops receive information about the discrepancy between goals and the current state of the world—as detected by the senses. Then the organism executes operations that have evolved to reduce the difference.” Say you put your hand on a hot stove. You react by pulling your hand away. Say another man puts his hand on your wife’s breast. You act to reduce the chances of that happening again; unless he is a doctor, in which case you pay him for it. A computer could only do these things by rote, and without understanding, motivation or feeling. That is why they are so smart at computation but so stupid in “real life.”
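
    A bare-bones rendering of the feedback-loop idea Dave quotes from Pinker, with the goal, the sensor, and the available action all invented for illustration:

    ```python
    # Hypothetical organism: sense the discrepancy between goal and world,
    # then execute an operation that reduces the difference.
    GOAL_TEMP = 37.0  # degrees C the organism "wants" its hand to be

    def discrepancy(hand_temp: float) -> float:
        """Difference between the sensed state of the world and the goal."""
        return hand_temp - GOAL_TEMP

    def act(hand_temp: float) -> float:
        """Pull the hand away if the discrepancy crosses a pain threshold."""
        if discrepancy(hand_temp) > 10.0:  # hand on a hot stove
            return GOAL_TEMP               # reflex: withdraw the hand
        return hand_temp                   # nothing worth acting on

    print(act(150.0))  # stove: the loop drives the state back to the goal
    print(act(37.5))   # mild warmth: no action taken
    ```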

  11. shonk Says:
    Real interesting, but the computer analogy of human thinking is way out of date.

    I think that was Curt’s (and certainly my) point. As it currently stands, there is a major qualitative difference between how humans think and how computers “think” and it’s going to take major progress in understanding the former to create advances in the latter (if, that is, it’s even possible or desirable for computers to be able to think as humans do, which I think both of us question).

    That having been said, I don’t think there’s any theoretical impediment to creating “thinking computers”, however we ultimately want to define that term (although I would be inclined to bet that, if it ever happens, the boundary between “natural intelligence” and “artificial intelligence” will be much less distinct than it is currently). Certainly there is none for metaphysical materialists, and most of the more reasonable dualist positions admit the possibility. What needs to be understood, though, is that (again, assuming such things are really desirable in the first place) this is a pretty long-term goal, not something that’s even remotely likely to happen in the next few decades.
