
Daniel Dennett "corrects" Paul Churchland

Paul Churchland is enthusiastic about the capacity of neural networks to engage in computation, and his enthusiasm bothers Daniel Dennett.  What bothers Dennett is that, while noting that this capacity has been demonstrated, Churchland fails to note that the demonstration itself was done using computers.  Since computers are themselves algorithmic "deep in the engine room" (methinks that's how he put it), it seems to Dennett that, far from showing that networks can do more than algorithmic computational machines can, the demonstration shows that networking itself can be achieved through algorithms (perhaps he thinks of networks as weakly emergent).
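To see what the "engine room" point amounts to, here is a minimal sketch of my own (the weights and numbers are invented for illustration, not taken from either philosopher): a tiny feedforward network realized entirely by ordinary sequential arithmetic.  Down at this level, the "network" just is an algorithm, a nest of loops, multiplications, and additions.

```python
import math

# A toy feedforward "neural network" implemented as plain sequential steps.
# Nothing here is parallel or non-algorithmic: just loops and arithmetic.

def forward(inputs, weight_layers):
    """Propagate inputs through each layer, one arithmetic step at a time."""
    activations = inputs
    for weights in weight_layers:            # one layer at a time
        next_activations = []
        for neuron_weights in weights:       # one "neuron" at a time
            total = sum(w * a for w, a in zip(neuron_weights, activations))
            next_activations.append(1 / (1 + math.exp(-total)))  # sigmoid
        activations = next_activations
    return activations

# Two inputs -> two hidden units -> one output, with made-up weights.
print(forward([0.5, -1.0], [[[0.4, 0.6], [-0.2, 0.1]], [[0.7, -0.3]]]))
```

That, as I read him, is Dennett's observation: the demonstrated power of networks rides on machinery that is algorithmic through and through.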

Why is it important to Dennett that this modeling is rooted in algorithms?  Perhaps because talk of the superiority of networks to machines opens the door to systems biology, which in turn would threaten reductionism.

Another reason might be that Dennett believes computers are, or some day will be, able to think.  Let's consider that for a moment.

It's noteworthy that other processes involving networks may likewise be modeled using computer programs: that fact would hardly imply that the process being modeled was actually occurring in the operation of the computer.  Suppose, for example, that a computer program modeled the operation of a cell.  The successful execution of that program would never be confused with the cellular operation being modeled.  The same goes for a program modeling an economy, or a program modeling a biome, etc.  The model and the modeled are two different things.
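The same point can be put in code.  Here is a toy sketch of my own (the class, the quantities, and the "conversion factor" are all invented for illustration): a program that models a cell's metabolism.  Running it updates numbers in memory; nothing is actually digested.

```python
# A toy "cell" model: executing it is not metabolism, only a model of it.

class CellModel:
    def __init__(self, nutrients=10.0, waste=0.0):
        self.nutrients = nutrients
        self.waste = waste

    def metabolize(self, amount=1.0):
        """Consume modeled nutrients and accumulate modeled waste."""
        consumed = min(amount, self.nutrients)
        self.nutrients -= consumed
        self.waste += consumed * 0.8  # arbitrary conversion factor

cell = CellModel()
cell.metabolize()
print(cell.nutrients, cell.waste)  # numbers change; no cell does anything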

The same distinction may be made with cognition (or the material conditions thereof): sure, we can model the operation of the neural networks underlying much cognition, but we needn't confuse the model with the operation of the system being modeled.

Objection:  whereas other processes in nature produce something quite different from a computer printout or an image on a computer screen, thought produces words, something that computers are likewise capable of producing.  Since they both produce the same product, the two processes must be the same.

We can illustrate this argument by contrasting the production of words with other natural products:  cells produce waste; economies produce dollars and cents; biomes produce well-adapted organisms or the like.  Computers can't produce dollars, waste products, or the like, but they are quite proficient at producing words.  So the operation of computers may seem to be thought itself rather than a mere model thereof.

Reply:  well, I can't answer this objection fully right now.  So I'll instead reply with another question:  how are our words related to our thoughts?  They are not the same thing, but it seems to me that the advocate of computation-as-thought is conflating the two.

Another thought: thinking is self-reflective.  Ever ask a computer what it was thinking about?

Yet another thought: Churchland is proposing to duplicate neural networks, whereas Dennett seems to be pointing out that computers can simulate them.  Churchland would probably point out this difference, and add that the sort of causality found in duplication is sufficient for his purposes, whereas the sort of causality found in simulation is not.  That's just my guess.
