
Old post on genes being like brains inasmuch as not mechanistic

I am wondering whether it's possible to connect a mechanistic view of the brain with the notion that the brain needs a single central headquarters, as it were, while associating the no-central-headquarters view with a non-mechanistic one, on which each part acts for the sake of a common good, as if all the parts "desired" the same goal.

Of course, I would definitely have to defend the claim that the mechanistic and central-control views of consciousness go hand in hand. After all, DD seems to think (at this point in my auditing/reading) that a no-central-headquarters approach is consistent with his version of materialism (which I take to be reductionist, with a kind of semblance of higher/lower levels of explanation).

One important objection to my common-good approach would be that I am using a metaphor. And an important reply to that objection would be that "mechanism" is itself a metaphor. To suppose that it isn't is to suffer from a kind of amnesia: one who suffers from it needs to do a kind of epistemic (or perhaps more suitably, "gnoseological") genealogy.

In any case, while listening to the Companion to Philosophy of Biology way, way back when, I noted that more contemporary understandings of DNA avoid locking down functions as if one part is just for reading information, another just the message (a discrete gene?) to be read, yet another just the transmitter, etc. Instead, there is a free-flowing relationship among the parts: they take turns playing their respective roles. In other words, there is an analogy between this free-flowing turn-taking and what goes on in the brain.

It is very likely I've posted this before: I found some Google Voice messages from at least a year ago that I'm transcribing. Maybe I wrote it better this time...

Comments

Eternitatis said…
The mechanistic model you are seeking without a (single) CPU calling the shots can be seen in the dynamics of mathematical chaotic systems, and it forms the basis of the science of complexity. Observe, for example (and I would suggest you Google images of), Mandelbrot sets with fractal dimensions. You have different "emergent" patterns arising because of invariance at all scales, sensitivity to initial conditions, and non-linearity. No one part of the set is "in charge," yet you see "emergence," adaptability, and self-organization at all levels. Now take the static rules generating this static geometric image (the Mandelbrot set) and substitute dynamic (kinematic) rules in their place. (The beauty of math is that they have the same form. One example is the logistic map, which can be scalar, vector, or matrix.) You will witness all sorts of independent parts participating, "cooperating," "self-organizing," etc., without a central controller, with stability of behavior described by a strange attractor. Our mind is really just a chaotic, stable dynamic system of thought-orbits, with the notion of a central "I" being in control a mythical product of that emergence. That should not be a surprise, since all systems in nature are ultimately chaotic.
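The sensitivity to initial conditions the comment invokes can be seen in a few lines with the scalar logistic map it mentions. This is only an illustrative sketch: the parameter r = 3.9 (a chaotic regime), the starting point, and the size of the perturbation are all arbitrary choices, not anything from the comment itself.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# In a chaotic regime (e.g. r = 3.9), two trajectories that start
# almost identically diverge rapidly -- sensitivity to initial conditions.

def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb by one part in a billion
gap = [abs(x - y) for x, y in zip(a, b)]

print(f"gap at step 0:  {gap[0]:.1e}")
print(f"largest gap over 50 steps: {max(gap):.3f}")
```

No rule in the update "looks ahead" or coordinates the trajectory; the large-scale behavior (boundedness, divergence, the attractor) emerges from repeated application of one local rule.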
Leo White said…
Complexity presupposes adaptive components, which in turn entail teleology. Complexity doesn't justify mechanism: it merely treats the teleology of the whole as an effect of the teleology of the parts.

If that is all we are, then what appears to be a scientist doesn't really exist. Then neither does science.
Eternitatis said…
Nonsense. Adaptive components merely refer to self-referential functionality. Feedback control systems are adaptive. There are "adaptive filters" called Kalman filters used on Aegis destroyers now, where the filters themselves adjust to ever-changing environments. Computer programming languages like Lisp and Prolog have self-referential functionality built into them. There is no need for "teleology" or other mystical notions to explain emergent behavior. Self-organization, self-programming, and emergent behavior have become a normal part of the landscape. It's really just an extension of first-order logic to second-order logic in many cases. Putting feedback control systems or self-organizing mechanics in a black box and labeling it teleology is unnecessary. We have progressed beyond Aristotle in explaining complexity in this regard.
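The "self-referential functionality" the comment describes can be illustrated with a textbook scalar Kalman filter (nothing to do with any actual Aegis implementation; the noise level, signal, and step count below are made-up illustrative values). The point is that the gain K is recomputed at every step from the filter's own running error covariance, so the filter adjusts itself without any external controller.

```python
# Textbook scalar Kalman filter estimating a constant signal from noisy
# measurements. The gain K recomputes itself each step from the running
# error covariance P -- the filter's "self-referential" adaptation.
import random

random.seed(0)
true_value = 5.0
R = 0.5 ** 2          # measurement noise variance (assumed known)
x, P = 0.0, 1e3       # initial estimate and its (deliberately huge) uncertainty

gains = []
for _ in range(50):
    z = true_value + random.gauss(0, 0.5)  # noisy measurement
    K = P / (P + R)                        # gain adapts to current uncertainty
    x = x + K * (z - x)                    # pull estimate toward measurement
    P = (1 - K) * P                        # uncertainty shrinks after each update
    gains.append(K)

print(f"estimate after 50 steps: {x:.2f} (true value {true_value})")
print(f"gain: first {gains[0]:.3f}, last {gains[-1]:.3f}")
```

Early on, when the filter knows little (P is large), the gain is near 1 and each measurement dominates; as confidence grows, the gain falls and new measurements are discounted. Whether that feedback loop counts as teleology is exactly what the two commenters are disputing.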
Leo White said…
Teleology is not mystical: it's common-sensical. When you are not being a scientist and are just trying to solve everyday problems, you are very teleological.
Haven't you heard: self-referential functionality IS teleology.
Much of what you have to say seems like summaries of complex arguments that you've thought out. Since I don't know the full version of the argument, I can't tell whether you are engaged in insightful observation or sloppy thinking. I prefer to think the former, but throwing out summaries as you do shows a lack of interest in having a conversation: you seem to want to talk at me about the things you already know rather than talk with me about the things we can come to know together. The former sounds tedious; the latter, exciting. I invite you to choose the latter: it's much more fun.
