Michael Gazzaniga's split brain, Chinese astronomers, trains that go on time, GK Chesterton

Right now I'm listening to Michael Gazzaniga's Who's in Charge?, a delightful book on split-brain experiments by a neuroscience pioneer who lacks any pretentiousness and is instead full of wit and cheer.  For reasons already discussed in this blog, I wouldn't reach the same reductionist conclusions he does.  But instead of going into those objections here, I will only point out that in basing his view of human nature, as he does, on an exceptional case, he reminds me of the Chinese astronomers who noted only the irregular occurrences in the sky.  That sort of information became useful centuries later to those who were trying to understand the lawfulness of nature.  Split-brain experiments are similarly useful to one who is trying to understand human nature.  But to take such experiments as the starting point is to proceed "bassackwards": the way human nature behaves most of the time is the primary source of our understanding of what sort of beings we humans are.  Split-brain experiments have to fit into that regularity-based account rather than vice versa.  That is, split-brain experiments can be used to confirm or falsify our hypotheses about how humans normally function, but they cannot, on their own, serve as the basis of our understanding of how, on a good day, the brain operates.  As GK Chesterton says, it's the fact that trains usually run on time (if that is a fact somewhere) that's most interesting and most worth explaining.  And it can't be explained on the basis of the fact that they are sometimes late or early.

To those who think the results of split-brain experiments disprove the identity of the self, I would ask the following questions: do these results also imply that, when the corpus callosum is intact, there are two consciousnesses in one's skull that happen to communicate really well with each other?  If there is just one, then isn't that unity an embarrassment to materialism?  If there are two, then why not more--say, as many as there are neurons?  And why not a new self for each neuron firing?  In such a case, what would happen to our ability to make scientific observations?
