Paul Churchland is enthusiastic about the capacity of neural networks to engage in computation, and his enthusiasm bothers Daniel Dennett. What bothers Dennett is that, while noting that this capacity has been demonstrated, Churchland fails to note that the demonstration itself was done using computers. Since computers are themselves algorithmic "deep in the engine room" (that's roughly how he put it, I think), it seems to Dennett that, far from showing that networks can do more than algorithmic computational machines can, this demonstration shows that networking itself can be achieved through algorithms (perhaps he thinks of networks as weakly emergent).
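Dennett's "engine room" point can be made concrete. Here is a minimal sketch (my own illustration, not from either author): a tiny neural network's forward pass, written as plain serial code. The weights and the step activation are hypothetical choices for the example; the point is just that the "network" computation bottoms out in an ordinary algorithm of loops and sums.

```python
def forward(inputs, weights, biases):
    """One layer of a feedforward network with step-activation units."""
    outputs = []
    for w_row, b in zip(weights, biases):
        # Weighted sum of inputs: a purely algorithmic, serial loop.
        total = b
        for x, w in zip(inputs, w_row):
            total += x * w
        # Simple threshold activation.
        outputs.append(1 if total > 0 else 0)
    return outputs

# Two "neurons" reading two inputs; with these weights one computes OR,
# the other AND.
weights = [[1.0, 1.0], [1.0, 1.0]]
biases = [-0.5, -1.5]

print(forward([1, 0], weights, biases))  # [1, 0]: OR fires, AND does not
print(forward([1, 1], weights, biases))  # [1, 1]: both fire
```

Nothing in the execution of this code is non-algorithmic; whether that settles anything philosophically is exactly what is at issue below.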
Why is it important to DD that this modeling is rooted in algorithms? Perhaps it is because talk of the superiority of networks to machines opens the door to systems biology, which in turn would threaten reductionism.
Another reason might be because DD believes computers are or some day will be able to think. Let's consider that for a moment.
It's noteworthy that other processes involving networks may likewise be modeled using computer programs: that fact would hardly imply that the process being modeled was actually occurring in the operation of the computer. Suppose, for example, a computer program modeled the operation of a cell. The successful execution of that program would never be confused with the cellular operation being modeled. The same goes for a computer program modeling an economy, or a program modeling a biome, etc. The model and modeled are two different things.
The same distinction may be made with cognition (or the material conditions thereof): sure, we can model the operation of the neural networks underlying much cognition, but we needn't confuse the model with the system's operation being modeled.
Objection: whereas other processes in nature produce something quite different from a computer printout or an image on a computer screen, thought produces words, something that computers are likewise capable of producing. Since they both produce the same product, the two processes must be the same.
We can illustrate this argument by contrasting the production of words with other natural processes: cells produce waste; economies produce dollars and cents; biomes produce well-adapted organisms or the like. Computers can't produce dollars, waste products, or the like, but they are quite proficient at producing words. So the operation of computers may seem to be thought itself rather than a mere model thereof.
Reply: well, I can't answer this objection right now. So I'll instead reply with another question: how are our words related to our thoughts? They are not the same thing, but it seems to me that the advocate of computation-as-thought is confusing the two.
Another thought: thinking is self-reflective: ever ask a computer what it was thinking about?
Yet another thought: Churchland is proposing to duplicate neural networks, whereas Dennett seems to be pointing out that computers can simulate them. Churchland would probably point out this difference, and add that the sort of causality found in duplication is sufficient for his purposes, whereas the sort found in simulation is not. That's just my guess.