
Remark by Thomas Nagel in Mind and Cosmos re ethics and evolution

Way back when, I read Mind and Cosmos and discovered that Sharon Street rejects "mind-independent" moral truths for the most interesting reason.  She argues that such a moral theory is inconsistent with natural selection, because there's no reason to think that natural selection would put us in touch with mind-independent truths.  So she concludes that Darwinians, such as herself, must avoid giving an exalted status to these sorts of claims.

Some thoughts on that: first of all, wouldn't one who arrives at such a conclusion about moral claims (evolution is true, therefore ethics ain't as true as we thought) arrive at a similar conclusion regarding causal claims?  How could one, then, claim to know the sort of necessary truths that characterize science?  How can Sean Carroll say with confidence that Laplace is right if natural selection provides no plausible scenario in which knowing math and science would be advantageous?

I suppose one could always argue that such knowledge is just a more complicated version of our knowledge of particulars, but that counter doesn't seem to support the type of determinism that so enamors Mr. Carroll.  Is the awareness of determinism just a more complicated version of our knowledge of contingent facts?  Is the awareness of universality just a more complicated version of our knowledge of particulars?

On the other hand, the problem might be resolved by clarifying "mind-independent."  Maybe this way of characterizing higher-level truths is deeply flawed in a manner analogous to Kant's talk of a "Ding an sich."  Maybe such truths are not to be thematized as being true independently of all minds but rather as being capable of being known by all other rational minds.
