The Chinese room argument (CRA), devised by John Searle, aims to show that functionalism is wrong. Functionalists claim that computers running programs are cognizant if they can do what cognizant beings alone are thought to do. Turing, an early functionalist, proposed that a computer that could answer questions in a manner that seems human would itself be cognizant.
Searle has you imagine someone in a room receiving symbols, consulting a rulebook for responding to them, and generating replies. The person who handles the incoming symbols does not know Chinese. But supposing the program is well designed, the person in the room could generate answers that would satisfy persons communicating with the room (through written messages) that it, or its operator, is cognizant of Chinese, thereby passing the Turing test. But the fact that the man inside does just what a CPU does, without knowing Chinese, indicates that a CPU doesn't know Chinese either, nor does the computer as a whole, even if its exterior is made to look and move in a human-like manner.
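Purely as an illustration (this is not anything in Searle's paper), here is a minimal sketch of what the operator does, reduced to a toy lookup table in Python. The rulebook entries are hypothetical, and a program that could actually pass the Turing test would need vastly more than this; the point is only that nothing in the procedure encodes meaning:

```python
# A toy "rulebook": incoming symbol strings mapped to outgoing symbol strings.
# To the operator these are opaque shapes; no entry represents what a symbol means.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "会，当然。",
}

def operator(incoming: str) -> str:
    """Follow the rulebook mechanically, with no access to semantics."""
    # Default reply ("please say that again") when no rule matches.
    return RULEBOOK.get(incoming, "请再说一遍。")

print(operator("你好吗？"))  # fluent-looking output, zero understanding
```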
In a way, Searle's thought experiment is itself functionalist. For it hinges on our recognition of the behavior of the man inside the Chinese room as distinctively human (we imagine him or her as fluent in English or some other non-Chinese language). We imagine opening the door to the Chinese room and talking to the fellow reading the incoming symbols, going through the rulebook written in English (or German or whatever), and typing replies. That's behavior, all right. (Actually, a rulebook written in Chinese would do just as well, so long as the manipulation of symbols did not allow the operator to grasp the meaning of the incoming Chinese messages.)
Imagine (per impossibile) capturing a space alien and putting him or her (or it) on the operating table, then separating what seems to be a neuron from the rest of the brain and finding that this seeming neuron is cognizant of its immediate environment and communicative, yet has no idea of anything outside its immediate surroundings. It's a neural slave, if you will. I think you would quite rightly doubt the personhood of the one to whom the neuron had originally belonged.
And since we don't expect that sort of thing to happen with isolated human neurons, nor do we expect it to happen with isolated computer parts, or with CPUs, functionalists can claim that Searle's argument misses the mark.
The way to defeat Searle's argument is to get away from treating computer programs as symbols to be interpreted (for this treats the parts of the computer as if they were individually cognizant, as if they were wholes unto themselves) and to emphasize instead how computer parts, like neurons (and unlike the operator in the CRA), lack individual cognizance; instead, they function as parts of a whole that is itself cognizant.
In that case, might the whole be acting in a manner that is more than the sum of the actions of its parts? More than an effect of the action of the parts? Such a move would distinguish computation from the interaction we find in the CRA. But isn't the person making such a move thinking of computers more like organisms than machines? That may or may not be true; what's interesting to me, however, is that it is a kind of testimony to the truth of the claim that humans themselves, as organisms, are not mere machines.
In other words, the moves that a reductive or eliminative materialist needs to make to defeat the CRA by attributing non-mechanistic properties to computers may undermine a reductive view of human nature.