
Daniel Dennett "corrects" Paul Churchland

Paul Churchland is enthusiastic about the capacity of neural networks to engage in computation, and his enthusiasm bothers Daniel Dennett.  What bothers Dennett is that, while noting that this capacity has been demonstrated, Churchland fails to note that the demonstration itself was done using computers.  Since computers are themselves algorithmic "deep in the engine room" (methinks that's how he put it), it seems to Dennett that, far from showing that networks can do more than algorithmic computational machines can, the demonstration shows that networking itself can be achieved through algorithms (perhaps he thinks of networks as weakly emergent).
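To see Dennett's point concretely, here is a minimal sketch of my own (not anything from Dennett or Churchland) of how a network's operation, when simulated on a conventional computer, bottoms out in a step-by-step algorithm; the layer and weights are invented purely for illustration:

```python
# A network's "parallel" computation, run on a conventional computer,
# is just an ordinary sequential algorithm: loops and multiply-accumulates
# "deep in the engine room."

def forward(inputs, weights, biases):
    """One layer of a feedforward network as a plain algorithmic loop."""
    outputs = []
    for w_row, b in zip(weights, biases):
        # The step-by-step procedure underneath the network metaphor.
        activation = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, activation))  # ReLU nonlinearity
    return outputs

# Hypothetical weights, purely illustrative.
print(forward([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.1]))
```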

Why is it important to DD that this modeling is rooted in algorithms?  Perhaps it is because talk of the superiority of networks to machines opens the door to systems biology, which in turn would threaten reductionism.  

Another reason might be that DD believes computers are, or some day will be, able to think.  Let's consider that for a moment.

It's noteworthy that other processes involving networks may likewise be modeled using computer programs:  that fact would hardly imply that the process being modeled was actually occurring in the operation of the computer.  Suppose, for example, a computer program modeled the operation of a cell.  The successful execution of that program would never be confused with the cellular operation being modeled.  The same goes for a computer program modeling an economy, or a program modeling a biome, etc.  The model and modeled are two different things.
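For instance, here is a toy sketch in the spirit of the cell example; every quantity in it is invented for illustration. Running it produces numbers that describe metabolism, but no metabolism occurs in the machine:

```python
# A toy "cell" model: the program computes a description of a cell's
# glucose consumption, which is not the same thing as a cell consuming
# glucose. All quantities here are hypothetical.

def step(glucose, atp, rate=0.1):
    """One time step of an invented metabolic model."""
    consumed = min(glucose, rate)
    return glucose - consumed, atp + 30 * consumed  # ~30 ATP per glucose

glucose, atp = 5.0, 0.0
for _ in range(10):
    glucose, atp = step(glucose, atp)
print(glucose, atp)  # a description of a cell, not a cell
```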

The same distinction may be made with cognition (or the material conditions thereof): sure, we can model the operation of the neural networks underlying much cognition, but we needn't confuse the model with the operation of the system being modeled.

Objection:  whereas other processes in nature produce something quite different from a computer printout or an image on a computer screen, thought produces words, something that computers are likewise capable of producing.  Since they both produce the same product, the two processes must be the same.

We can illustrate this argument by contrasting the production of words with other natural processes:  cells produce waste; economies produce dollars and cents; biomes produce well-adapted organisms or the like.  Computers can't produce dollars, waste products, or the like, but they are quite proficient at producing words.  So the operation of computers may seem to be thought itself rather than a mere model thereof.

Reply:  well, I can't fully answer this objection right now.  So I'll instead reply with another question:  how are our words related to our thoughts?  They are not the same thing, but it seems to me that the advocate of computation as thought is confusing the one with the other.

Another thought: thinking is self-reflective: ever ask a computer what it was thinking about?

Yet another thought: Churchland is proposing to duplicate neural networks, whereas Dennett seems to be pointing out that computers can merely simulate them.  Churchland would probably point out this difference, and add that the sort of causality found in duplication is sufficient for his purposes, whereas the sort of causality found in simulation is not.  That's just my guess.
