I just re-read Daniel Dennett’s Consciousness Explained.  I don’t think he quite succeeds– it’s not so much that I disagree as that I think he leaves too much open.  The book is less an explanation of consciousness than a program for how we might someday explain it.

He’s best at the negative parts: demolishing the opposition.  He has the best argument against dualism I’ve seen.  Dualism has great intuitive appeal and is surprisingly hard to refute– generally it’s just dismissed, not defeated.  There’s no great obstacle to souls somehow getting information from the body, since we don’t know what the properties of soul-stuff are.  The problem is on the other end: how does the soul send its orders back to the body (e.g. to control its muscles)?  The body is matter, and matter requires energy to set it in motion.  Dualism requires energy to arise out of nothing, and we don’t detect any such thing.

But that’s merely a page or two.  His main target is what he calls Cartesian materialism– the idea that the brain presents a sort of TV display in the head, watched by a Central Observer and Decisionmaker.  Many who reject dualism, even hard-headed neurologists, still talk as if there were a “Cartesian Theater” in the brain where consciousness occurs.  Dennett shows that the idea is incoherent, and that it breaks down most clearly when we look at low-level neural functions.

In philosophical tradition, introspection is sacrosanct, and that leads us to posit frankly magical things going on in our minds.  Dennett outlines an alternative, what he calls heterophenomenology: modelled on anthropology, a sympathetic outsider’s careful analysis of a phenomenological world– that of a conscious subject or even a fictional character. 

This allows him to point out where introspection is, not to mince words, wrong.  All observation, including internal observation, is subject to distortion and error.  (Geoffrey Sampson has made the same point about Chomskyan linguistics: people’s judgments about whether something is grammatical– i.e. whether it can occur in their speech– can be simply wrong; they can deny that they use or even understand some construction, then use it spontaneously.)

The main problem with introspection is that people overstate their impressions.  For instance, we think we have a uniform hi-res visual field.  If you say it in that careful way, no problem!  But you may be tempted to say, not that you think you have a uniform hi-res visual field, but that you actually have one.  And it’s easy to show that you don’t and can’t.

Your eyes have only a tiny area that’s in clear focus– the fovea.  Look straight ahead and hold up a playing card by your ear.  You won’t be able to see what card it is, or even what color… though you can easily see if it’s moving.  Move it slowly toward the center of your vision, still looking straight ahead… when can you make out its color, its suit, its number?  It’ll be surprisingly close to center before you can identify all these things.

Why aren’t we conscious of the visual field being this fuzzy?  Mostly because the eye is constantly, unconsciously darting about.  Anything we look directly at is detailed.  But more importantly, there is nothing in the brain that is looking at a TV screen.  We have to break the habits of thought that require re-presenting visual material to an internal homunculus. 

Dennett invites you to picture looking at wallpaper covered with images of Marilyn Monroe.  Within seconds your brain recognizes what it’s looking at… though the fovea has had time to take in only a handful of Marilyns.  You perceive that you’re looking at a wall of Marilyns, and you are.  You are not aware of any fuzziness; but there is no neural process of filling in Marilyns, because there is no full-resolution Central Image that needs to be assembled, and no Cartesian Observer to watch it.  The brain doesn’t need a filling-in process because it can use the world itself as its storage: if it wants more detail at a particular spot, it just looks there.

It’s even possible to use an ingenious device to fool the brain.  A computer screen displays a text, Text A.  It’s outfitted with an eye-tracker that checks where your fovea is pointing, and changes the text just at that location into Text B.  An outside observer would see Text A with a flicker of changing letters, much too fast to read.  But what you see is Text B, with no fuzziness or flickering.  You think you’re seeing a page of Text B, and in this case you’re wrong… if your fovea hasn’t reached certain words of B, the computer hasn’t even displayed them.
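To make the mechanism concrete, here’s a minimal sketch of such a gaze-contingent display in Python.  Everything in it (the function name, the fovea radius, the one-line texts) is invented for illustration, not the actual apparatus Dennett describes; it just shows the trick: only the region under the current fixation is ever swapped to Text B.

```python
# A toy gaze-contingent display: Text B is rendered only in a small
# window around the reader's current fixation; everywhere else the
# screen keeps showing Text A. (All names and numbers are illustrative.)

FOVEA_RADIUS = 3  # character cells around the fixation point

def render_frame(text_a, text_b, gaze_col):
    """Compose one frame: Text B near the fovea, Text A elsewhere."""
    n = min(len(text_a), len(text_b))
    return "".join(
        text_b[i] if abs(i - gaze_col) <= FOVEA_RADIUS else text_a[i]
        for i in range(n)
    )

# What a camera pointed at the screen records at one instant:
# mostly Text A, with a patch of Text B chasing the reader's gaze.
frame = render_frame("the quick brown fox jumps over it",
                     "a slow purple dog sleeps on a mat", gaze_col=10)
print(frame)
```

The camera sees the garbled mixture and the flicker; the reader, whose fovea only ever lands on the swapped patch, experiences a seamless page of Text B.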

(When I tried to explain this on my board, at least one person went too far the other way– they assumed that the visual field is fuzzy.  It’s not, unless your eyes are bad.  Again, just as there is no homunculus watching a sharp picture screen, there is none watching a fuzzy one.  As an analogy, think of a CEO using a data mining system with zoom-in.  At any one instant, what he gets is a single table of data, but he can zoom in whenever he pleases.  Is his picture of the data high- or low-detail?  Neither: he has neither seen all the data that exists, nor has he seen only high-level summaries.)
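The analogy translates naturally into code.  Here’s a minimal sketch (class name, fields, and figures all made up) of a view that is neither high- nor low-detail: a coarse summary is all that exists at any instant, and detail is fetched only when and where it’s asked for.

```python
# On-demand detail: nothing ever assembles a full-resolution picture;
# the system keeps a coarse summary and fetches detail when asked.
# (Everything here is invented for illustration.)

class Dashboard:
    def __init__(self, records):
        self.records = records  # the "world": full detail stays out here

    def summary(self):
        """The single coarse table visible at any one instant."""
        return {region: sum(sales) for region, sales in self.records.items()}

    def zoom(self, region):
        """Pull in full detail for one spot, only when it's looked at."""
        return self.records[region]

board = Dashboard({"east": [10, 12, 7], "west": [3, 20, 5]})
print(board.summary())     # {'east': 29, 'west': 28} -- the coarse view
print(board.zoom("east"))  # [10, 12, 7] -- detail, fetched on demand
```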

Dennett develops a model for how the mind works based on evolutionary history, psychological experiments, and AI.  I think on the whole he’s on the right track– it’s more or less the point of view that I expounded in my paper on Searle.  But it’s too sketchy to justify his grandiose title.  Still, it’s an advance over simple denials that these things can ever be explained.

He points out that computer scientists seem to accept materialist theories of the mind most easily– because we’re used to describing hugely complex systems at abstract levels.  Outsiders just can’t imagine how science could explain mental functions, so they don’t try.  They give up and hold that their failure of imagination proves that it can’t be done.

The weakest bits of the book are probably the sections on qualia.  Dennett seems to view mentions of qualia with professional exasperation… qualia seem to have the position in philosophy that pragmatics has for syntacticians– a trash bin into which to sweep anything we don’t propose to explain.  He rightly points out some of the problems, but his solutions don’t do much to explain what we think we experience.

For that, I recommend C.L. Hardin’s Color for Philosophers.  Hardin doesn’t explain the qualia of color… but I don’t think you can read the book and maintain the conviction that qualia can never be explained.  There are already things about color qualia that science can explain:

  • why yellow is brighter than the other colors
  • why we can easily imagine cyan as blue + green but can’t picture yellow as green + red (see the sketch after this list)
  • why only certain colors exist (thus, why David Lindsay’s postulation of additional primary colors is fantasy)
  • why colors shade into each other at all (as opposed to sounds, which remain distinct when presented simultaneously)
  • why languages universally have color prototypes of particular hues
  • how afterimages and certain optical illusions work
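The first two items, for instance, fall out of opponent-process theory, which Hardin draws on.  Here’s a minimal sketch of the standard textbook model; the cone activations are invented numbers and the channel formulas are the usual simplified ones, not anything from Hardin’s book.

```python
# Simplified opponent-process model: three cone signals (L, M, S) are
# recoded into one achromatic and two chromatic opponent channels.
# Red and green are opposite poles of one channel, yellow and blue of
# the other, so "reddish green" and "bluish yellow" can't be signalled.
# (Cone values below are invented for illustration.)

def opponent_channels(L, M, S):
    return {
        "luminance":   L + M,        # brightness signal
        "red_green":   L - M,        # + = reddish, - = greenish
        "yellow_blue": (L + M) - S,  # + = yellowish, - = bluish
    }

def hue_terms(ch):
    terms = []
    if ch["red_green"]   > 0: terms.append("red")
    if ch["red_green"]   < 0: terms.append("green")
    if ch["yellow_blue"] > 0: terms.append("yellow")
    if ch["yellow_blue"] < 0: terms.append("blue")
    return terms or ["achromatic"]

yellow = opponent_channels(L=1.0, M=1.0, S=0.1)   # strong L and M, little S
cyan   = opponent_channels(L=0.2, M=0.6, S=1.0)   # strong M and S

print(hue_terms(yellow), yellow["luminance"])  # ['yellow'] 2.0 -- a unitary hue, and bright
print(hue_terms(cyan),   cyan["luminance"])    # ['green', 'blue'] 0.8 -- a visible blend
```

Yellow comes out as a single pole of one channel: the red-green signal is balanced at zero, so there is no red or green component for introspection to find, while its luminance signal is as high as it gets.  Cyan keeps a foot in two different channels, so it reads as blue plus green.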

There’s no reason to think this list won’t be much larger in a century.
