[promising but fuzzy]
Suppose there's a planet in a galaxy in which there are rational animals that seek survival on a day-by-day basis by competing with others in a manner strangely similar to the moves of a chess game on ours. They would employ a computer called Baby Blue to help them manage their struggles successfully. Baby Blue wouldn't know that it's solving this problem any more than the earthbound computer (named Deep Blue) that beat Kasparov on earth knew that it was playing a game called chess. This would be true even if both computers had the same hardware and software. In fact, there might be a vast number of situations (let's say each is on a different planet) in which the same program might function quite well. So it doesn't seem that knowing that a computer serves this or that function helps us know what the computer is thinking. And that might be because the computer is not so much a thinker as an instrument that we use to think.
(The weird thing is: Daniel Dennett admits the lack of fixity of function [see disc 5, track 5].)
This problem also underscores the inability of functionalism to account for truth-consciousness. For there is a kind of Weltglaube in truth-consciousness that comes into play when we differentiate playing chess from doing something else. Neither computer needs to know that difference in order to perform its function, and that is because it lacks Weltglaube. Without the Weltglaube that is a necessary condition for truth-consciousness, how can Baby Blue or Deep Blue be said to know what they are doing?