
A hunch about Goedel's theorem and the non-reflexivity of machines

Some thoughts on the topic:

1. Preliminary examples that serve as a prelude to my first point:
1a) Think of a machine consisting of many parts, each of which pushes another: No one "pusher" in this series can push itself.
1b) Think of devices we use: The bristles of a toothbrush cannot brush the bristles themselves.
1c) Think of our own organs: One's eyes cannot see one's eyes.

The point of all of this: if we look at each of these examples as actions directed toward an object, in no case can the action be directed toward the very action itself as its object. You can't brush the brush with the very same brush, etc. To put it more simply: material actions, qua transitive, are non-reflexive.

2. If all human cognition is a machine-like material process (a claim that I am granting for the purpose of a reductio ad absurdum), then it cannot be reflexive. That is, a cognition-process, to the degree that it is like other processes, would not be able to be cognizant of its own operation.
2a) The above assumes that cognition is the awareness of what most immediately causes cognition, in which case human cognition would be awareness of the non-cognitive process that causes, and thereby gives rise to, the cognitive process.
2b) Why that assumption? Because if cognition were directed toward something other than that process which immediately causes it, then the materialist axiom would have been violated. This postulate can be demonstrated by considering the other alternative: a cognition process that is directed toward something not acting upon it would be something akin to action at a distance. Or there would be some sort of symbolic activity going on. The object inside the cognizing organ would be a symbol of that which is outside. But in such a case one must point out the following embarrassing problem. When I take A as a symbol of B, it is because in the past I have been acquainted with B just as immediately as I am presently with A. But we never have acquaintance with anything but our own internal processes. So there would be no reason to take them as symbolic of something else. The only way a reductionist might try to escape from this problem is to posit innate symbols, since they can't be derived from external sources.

2c) Of course, one can imagine a kind of mechanical analogue to reflexivity: e.g., a loop in which a mechanical action modifies its own operation through intermediaries, as when a steam engine, through its governor, adjusts its own revolutions per minute. The more reflexive cognition is taken to be analogous to the motor's being affected by a governor, the more forcefully the following criticism applies to any such proposal: the cognitive process will involve the awareness not only of that which "drives" it but also of that which "governs" it. But just as the governor cannot be identified with the motor, so too the awareness of a reflexive neural loop cannot be identified with awareness of the initial neural operation. Loops are not really reflexive.
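
To make the governor analogy concrete, here is a minimal sketch of such a loop in Python (purely illustrative; the names TARGET_RPM, GAIN, governor_step, and engine_step are my own and stand in for whatever the real mechanism would be). The thing to notice is that the adjustment runs through an intermediary: the engine never inspects or modifies itself directly.

# Minimal sketch of the governor loop in 2c (illustrative only; all names
# here are my own invention). A proportional "governor" nudges the engine
# toward a target speed.

TARGET_RPM = 1800.0
GAIN = 0.1  # how strongly the governor responds to deviation from the target

def governor_step(current_rpm):
    """Return a throttle correction proportional to the deviation from target."""
    return GAIN * (TARGET_RPM - current_rpm)

def engine_step(rpm, correction):
    """Crude model of the engine's speed responding to the governor's correction."""
    return rpm + correction

rpm = 1200.0
for _ in range(100):
    correction = governor_step(rpm)     # the intermediary "reads" the engine
    rpm = engine_step(rpm, correction)  # the engine is altered from outside itself
print(round(rpm))                       # converges to 1800

Strictly, this sketch is a negative feedback loop (the correction opposes the deviation), which bears on the question raised in the comments below.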

3. What if a machine exhibits [through its output] necessary truths by being programmed to behave a certain way (e.g., give the correct answer to arithmetical problems)? Consider, for example, a machine that is able to give answers consistent with the laws of arithmetic (at least as long as the machine parts function according to plan): in this case there is a process occurring prior to the cognitive process that determines HOW the cognition-machine will behave.
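
A toy sketch of the sort of machine described here (my own illustration, with a hypothetical hard-coded ANSWER_TABLE; not anyone's actual proposal): its outputs agree with the laws of arithmetic only because of a contingent prior arrangement of parts.

# Toy version of the machine in point 3 (my own example). What fixes HOW the
# machine behaves is the hard-coded table below, not the necessity that
# 7 + 5 = 12.

ANSWER_TABLE = {
    (2, 2): 4,
    (7, 5): 12,
    (9, 9): 18,
    # a differently wired table would run just as smoothly and answer wrongly
}

def answer(a, b):
    """Consult the contingent wiring; the mechanism never grasps why the entries hold."""
    return ANSWER_TABLE[(a, b)]

print(answer(7, 5))  # 12 -- but only "as long as the machine parts function according to plan"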

4. In such a case (described by 3) the very immediate physical process that determines HOW the machine will behave will be cognized. But that process will not be a necessary truth. Rather, it is a contingent arrangement of mechanical processes. So cognizing this arrangement will not be the same as cognizing a necessary truth.

5. Furthermore, even if we grant per impossibile that the machine were cognizant of universal and necessary truths (through the immediate influence of the non-cognitive process), in such a case the machine would still not know that it knows these truths. Why? Because to cognize the very process that determines how it will cognize is akin to a toothbrush brushing itself. It is a reflexive act, and machines can't do that.

6. The impossibility described in #5 might instantiate one of the implications of Goedel's theorem. That implication is that purportedly cognizant machines can't know (?) the truth of the very principles that determine how they will cognize. If one can't know that one knows, then one can't know the truth of one's convictions about mathematics. (I'm thinking now that the need to require reflexivity might not be obviously true.)
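
For reference, here is the standard statement of Goedel's second incompleteness theorem, which seems to be the implication being gestured at (my own gloss for orientation; the link between this theorem and reflexive awareness is the hunch of this post, not something the theorem itself asserts):

% Goedel's second incompleteness theorem (standard form). Let $F$ be a
% consistent, effectively axiomatized formal system that interprets enough
% arithmetic (e.g., Peano Arithmetic). Then
\[
  F \nvdash \mathrm{Con}(F),
\]
% where $\mathrm{Con}(F)$ is the arithmetical sentence expressing "$F$ is
% consistent": the system cannot prove, from the inside, the consistency of
% the very principles by which it operates.

Whether the unprovability of Con(F) really amounts to a machine's not knowing that it knows is precisely the kind of gap the parenthetical worry above is pointing at.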

7. Humans, in knowing the truth of the principles that constrain how they can think (e.g., non-contradiction), can do what machines in principle cannot do.

8. Therefore, humans are not machines.

9. It is worth inquiring into how the non-mechanistic nature of human cognition is related to freedom, especially in light of the fact that non-determinism is a necessary condition for knowing necessary truths.

Comments

Unknown said…
2c is known as a positive feedback loop
Leo White said…
Yeah. I think our friend David Bradway mentioned that in our discussions... as if that were the same as reflexive awareness... NOT!
But lemmee ask: is there a negative feedback loop? What about one that is sometimes positive and at other times negative: just a feedback loop?
