
Testing Turing

In 1950, Alan Turing proposed a test for determining whether a computer program is cognizant. We can test computer programs, said Turing, on the basis of their ability to communicate in a manner that seems human. This test (called the Turing Test) is an example of functionalism, which says that if something functions like an X, then it is an X. Applied to computer programs, functionalism says that computer programs that function like cognizant beings are in fact cognizant. Or, to paraphrase that in Forrest Gump-like language: cognizant is as cognizant does.

There are plenty of problems with the Turing Test, but my interest right now is to point out something positive about it and to develop a point from there.

Suppose that a volunteer named Alex agrees to communicate via a keyboard and computer screen for the purpose of finding out whether his interlocutor (named Cecilia) is a computer or a human. After twenty interactions, he is thoroughly convinced by Cecilia's responses that she is human, but upon reading response number 21, he easily figures out that she is not. Actually, Cecilia is not even a program but a preprogrammed list of responses, based upon careful guesswork by a group of scriptwriters who have watched Alex interact with other computers. These scriptwriters are very observant and very, very lucky--at least for the first 20 responses, for their guesses correspond perfectly to what Alex typed. Response number 21, however, is quite off base, and the ruse is over.
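The Cecilia scenario can be sketched in code. This is only a hypothetical illustration (the names, the canned responses, and Alex's judging rule are all invented here, not anything from Turing's paper): Cecilia is nothing but a fixed list, so there is nothing going on "under the hood" at all.

```python
# Cecilia is not a program that thinks, but a fixed, preprogrammed list of
# responses prepared in advance by the scriptwriters. All strings here are
# hypothetical placeholders for the scriptwriters' guesses.

scripted_responses = [f"plausible human reply #{i}" for i in range(1, 21)]
scripted_responses.append("off-base reply that gives the game away")  # response 21

def cecilia(turn: int) -> str:
    """Return the pre-scripted response for a given turn (1-indexed)."""
    return scripted_responses[turn - 1]

def alex_judges(response: str) -> str:
    """Alex's judgment, based only on whether the response reads as human."""
    return "human" if response.startswith("plausible human reply") else "not human"

# For the first 20 turns the scriptwriters' lucky guesses hold up...
assert all(alex_judges(cecilia(t)) == "human" for t in range(1, 21))
# ...but turn 21 ends the ruse.
assert alex_judges(cecilia(21)) == "not human"
```

The point of the sketch is that Alex's verdict for the first twenty turns rests entirely on outward behavior: the same judgments would be produced whether the source were a person, a program, or (as here) a mere lookup list.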

The Turing Test tells us that when Alex concluded, after response number 20, that Cecilia is obviously human, his mistake was not merely a false prediction: rather, Alex was mistaken about what was going on under the hood of his interlocutor (perhaps "bonnet" would be better than "hood," as it refers, in British parlance, both to the hood of a car and to something worn on the head). In this way, functionalism is superior to behaviorism, which treats subjects like black boxes (i.e., it simply doesn't care what goes on in there). For a behaviorist, the statement "Cecilia is human" means no more than "Cecilia is likely to give human-like responses." Functionalism, on the other hand, does not reduce the meaning to mere predictions: it is also concerned with what is going on within the subject while it is behaving as it does. The Turing Test, as a functionalist hypothesis, asserts that an operating computer program (i.e., what is going on "under the hood") that is able to communicate in a human-like manner is or has a mind.

After recently reading Thomas Nagel's article "What Is It Like to Be a Bat?", however, I think that I can offer a further specification of what is going on under the hood.

When Alex initially thinks that Cecilia is human, he mistakenly thinks of the source of the messages he is receiving as doing the things we all do when we communicate: remembering, wondering, guessing, imagining, intending, hoping, etc. Alex errs by thinking of Cecilia as "one of us." He is not merely concerned with her disposition to produce human-like words (such a concern is one we could have when thinking of things to which we in no way ascribe consciousness). Rather, he is concerned (to put it in language similar to Nagel's) with "what it's like" for Cecilia to be engaged in communication.

Recap, so far: in order to avoid a superficial, behavioristic account of how we recognize other conscious beings, we must join functionalists in their concern for what is going on "under the hood" of the purportedly conscious entity.  That concern, however, is directed toward activities such as remembering, imagining, etc., that cannot be measured directly (although they may be correlated with measurable characteristics).  At this point, however, one is no longer a functionalist.
