The Chinese room argument

The Chinese room argument (CRA), devised by John Searle, aims to show that functionalism is wrong.  Functionalists claim that computers running programs are cognizant if they can do what only cognizant beings are thought able to do.  Turing, an early functionalist, proposed that a computer that could answer questions in a manner that seems human would itself be cognizant.

Searle has us imagine someone in a room receiving symbols, consulting a rulebook for responding to them, and generating replies.  The person who reads the incoming symbols does not know Chinese.  But supposing the program is well designed, the person in the room could generate answers that would satisfy those communicating with the room (through written messages) that the room or its operator is cognizant of Chinese, thereby passing the Turing test.  But the fact that there's a little man inside doing what a CPU does, yet without knowing Chinese, indicates that a CPU does not know Chinese either, nor does the computer as a whole, even if its outside is made to look and move in a human-like manner.
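To make the purely syntactic character of the rulebook vivid, here is a minimal sketch in Python (my own toy illustration, not anything in Searle): the operator consults a lookup table keyed on symbol strings and returns whatever reply the table dictates, with no access to what any of the symbols mean.  The example phrases are placeholders chosen for illustration.

```python
# Toy "rulebook": a purely syntactic mapping from incoming symbol
# strings to replies.  The operator has no access to their meaning.
# (Hypothetical rules, for illustration only.)
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",        # to us: "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会, 一点点.",    # to us: "Do you speak Chinese?" -> "Yes, a little."
}

def operator(incoming: str) -> str:
    """Return whatever reply the rulebook dictates; no understanding involved."""
    return RULEBOOK.get(incoming, "对不起, 我不明白.")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(operator("你好吗?"))  # the room answers fluently; the operator knows nothing
```

The point of the sketch is that the function works entirely by matching shapes of strings; the translations in the comments are for us, not for the operator.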

In a way, Searle's thought experiment is itself functionalist.  For it hinges on our recognition of the behavior of the man inside the Chinese room as distinctively human (we imagine him or her as fluent in English or some other non-Chinese language).  We imagine opening the door to the Chinese room and talking with the fellow reading the incoming symbols, going through the rulebook written in English (or German or whatever), and typing replies.  That's behavior, alright.  (Actually, a rulebook written in Chinese would be fine too, as long as the manipulation of symbols did not allow the operator to know the meaning of the incoming Chinese messages.)

Imagine (per impossibile) capturing a space alien and putting him or her (or it) on the operating table, then separating what seems to be a neuron from the rest of the brain and finding that this seeming neuron is cognizant of its immediate environment and communicative, yet has no idea of things outside its immediate surroundings.  It's a neural slave, if you will.  I think you would quite rightly doubt the personhood of the one to whom the neuron had originally belonged.

And since we don't expect that sort of thing to happen with isolated human neurons, nor do we expect it to happen with isolated computer parts or CPUs, functionalists can claim that Searle's argument misses the mark.

The way to defeat Searle's argument is to get away from treating computer programs as symbols to be interpreted (for this treats the parts of the computer as if they were individually cognizant, as if they were wholes unto themselves) and to emphasize instead how computer parts, like neurons (and unlike the operator in the CRA), lack individual cognizance; instead, they function as parts of a whole that is itself cognizant.
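One way to picture this composition of non-cognizant parts (again, my own toy sketch, not part of Searle's or any functionalist's argument) is a tiny hand-wired network: no single unit "knows" what XOR is, each merely thresholds a weighted sum, yet the network as a whole computes the function.  The weights are hand-chosen for illustration.

```python
# A toy network computing XOR: each unit only thresholds a weighted sum,
# yet the whole computes something no individual unit "knows".

def unit(inputs, weights, bias):
    """A single non-cognizant part: weighted sum plus threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], -0.5)       # fires if a OR b
    h2 = unit([a, b], [1, 1], -1.5)       # fires if a AND b
    return unit([h1, h2], [1, -2], -0.5)  # fires if OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Of course, this toy whole is exactly the sum of its parts' actions, which is just what the next question probes.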

In this case, might the whole be acting in a manner that is more than the sum of the actions of its parts? More than an effect of the action of the parts?  Such a move would distinguish computation from the interaction we find in the CRA.  But isn't the person making such a move thinking of computers as more like organisms than machines?  That may or may not be true: what's interesting to me, however, is that it is a kind of testimony to the truth of the claim that humans themselves, as organisms, are not mere machines.

In other words, the moves that a reductive or eliminative materialist needs to make to defeat the CRA by attributing non-mechanistic properties to computers may undermine a reductive view of human nature.
