In 1950, Alan Turing proposed a test for determining whether a computer program is cognizant. We can test computer programs, said Turing, on the basis of their ability to communicate in a manner that seems human. This test (called the Turing Test) is an example of functionalism, which says that if something functions like an X, then it is an X. Applied to computer programs, functionalism says that computer programs that function like cognizant beings are in fact cognizant. Or to paraphrase that in Forrest Gump-like language: cognizant is as cognizant does.
There are plenty of problems with the Turing Test, but my interest right now is to point out something positive about it and to develop a point from there.
Suppose that a volunteer named Alex agrees to communicate via a keyboard and computer screen for the purpose of finding out whether his interlocutor (named Cecilia) is a computer or a human. After twenty interactions, he is thoroughly convinced by Cecilia's responses that she is human, but upon reading response number 21, he easily figures out that she is not. Actually, Cecilia is not even a program but a preprogrammed list of responses, based upon careful guesswork by a group of scriptwriters who have watched Alex interact with other computers. These scriptwriters are very observant and very, very lucky--at least for the first 20 responses, for their guesses correspond perfectly to what Alex typed. Response number 21, however, is quite off base, and the ruse is over.
The Turing Test tells us that when Alex concluded after the twentieth response that Cecilia is obviously human, his mistake was not merely a false prediction: rather, Alex was mistaken about what was going on under the hood of his interlocutor (perhaps "bonnet" would be better than "hood," since in British parlance it refers both to the hood of a car and to something worn on the head). In this way, functionalism is superior to behaviorism, which treats subjects like black boxes (i.e., it simply doesn't care what goes on in there). For a behaviorist, the statement "Cecilia is human" means no more than "Cecilia is likely to give human-like responses." Functionalism, on the other hand, does not reduce the meaning to mere predictions: it is also concerned with what is going on within the subject while it behaves as it does. The Turing Test, as a functionalist hypothesis, asserts that an operating computer program (i.e., what is going on "under the hood") that is able to communicate in a human-like manner is or has a mind.
After recently reading Thomas Nagel's article "What Is It Like to Be a Bat?", however, I think that I can offer a further specification of what is going on under the hood.
When Alex initially thinks that Cecilia is human, he mistakenly thinks of the source of the messages that he is receiving as doing the things we all do when we communicate: remembering, wondering, guessing, imagining, intending, hoping, etc. Alex errs by thinking of Cecilia as "one of us." He is not merely concerned with her disposition to produce human-like words (such a concern one could have even toward things to which one in no way ascribes consciousness). Rather, he is concerned (to put it in language similar to Nagel's) with "what it's like" for Cecilia to be engaged in communication.
To recap so far: in order to avoid a superficial, behavioristic account of how we recognize other conscious beings, we must join functionalists in their concern for what is going on "under the hood" of the purportedly conscious entity. That concern, however, is directed toward activities such as remembering, imagining, etc., that cannot be measured directly (although they may be correlated with measurable characteristics). At this point, one is no longer a functionalist.