Daniel Dennett blisteringly reviewed Sam Harris’s Free Will, and that led to an interesting discussion at Mefi.

Does your theory of mind allow you to enjoy this pizza?

I read Dennett’s Elbow Room: The Varieties of Free Will Worth Wanting, which I found a pretty convincing takedown of the objections to free will.  Most of them are based on poor analogies:

  • To be unfree is normally to be under someone else’s control: you are a prisoner, or the subject of a dictator.  Obviously this is a good model if you are in fact a prisoner, but if not, not.  Whatever causes our actions, it isn’t another agent.
  • He talks about a type of wasp (Sphex spp.) which goes through an elaborate procedure to get prey insects to feed its young.  It’s pretty easy to mess with its little mind– e.g. after moving the insect into position, it inspects its nest.  If an experimenter moves the insect, the wasp will move it back– but this resets its program; it has to inspect its nest again.  You can keep playing this game indefinitely.  Dennett suggests that anti-free-will arguments are often aimed at “sphexishness”– we are not the smart adaptable agents we think we are.  Yet it’s clear that we’re far above the wasp’s level.
  • Or: you’re controlled by an inner computer program that will spit out the same results no matter what you do.  But you know, not all programs consist of one invariable line.

    Programs can be highly sophisticated and highly responsive to the world.  It’s Searle’s old error.  Computers are dumb and deterministic; computer programs can be smart and unpredictable.
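To make the contrast concrete, here’s a sketch of my own (not Dennett’s, and the decision rule is entirely invented): one program that spits out the same thing forever, next to one that responds to its circumstances.

```python
# The "one invariable line": identical output on every run, no matter what.
def one_liner():
    return "Hello world!"

# A tiny program that is responsive to its situation; the inputs and
# the decision rule are made up for illustration.
def pizza_tonight(appetite, budget, friends_hate_pizza):
    if friends_hate_pizza or budget < 10:
        return "no pizza tonight"
    if appetite > 5:
        return "order a large pizza"
    return "order a small pizza"

print(one_liner())                  # the same, no matter what
print(pizza_tonight(8, 25, False))  # varies with the situation
print(pizza_tonight(8, 25, True))
```

Even this toy is already beyond sphexishness in one sense: its output depends on the world, not on an unvarying script.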

The way I’d put it is: if you want a pizza tonight, you can have one.  Well, something external might stop you– you’re out of money, you’re in a space station, your friends hate pizza.  But when nothing external stops you from having that pizza, you can totally have it.  That’s the only variety of free will you need.

Dennett is a “compatibilist”, meaning that he thinks determinism and free will are compatible.  I’m not, but only because determinism is wrong.  For nearly a century, we’ve known that the world is non-deterministic; deal with it.  Try a two-slit experiment and predict where you’ll detect a given photon– it can’t be done.  There was a hope that “hidden variables” would restore determinism, but they don’t work.  And “many worlds” doesn’t help: the “many worlds” still don’t let you predict where the photon will be detected.
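You can simulate the situation in a few lines– a toy Monte Carlo of my own, with made-up geometry and units.  Each individual detection is drawn at random; only the ensemble follows the predictable fringe pattern.

```python
import math
import random

def detect_photon(rng, screen=200.0):
    """Sample one photon's detection position on a screen whose
    probability density follows a two-slit interference fringe pattern,
    via rejection sampling.  Geometry and units are invented:
    intensity ~ cos^2(pi * x / 40), so bright fringes sit at
    x = 0, +/-40, ... and dark fringes at x = +/-20, +/-60, ..."""
    while True:
        x = rng.uniform(-screen / 2, screen / 2)
        if rng.random() < math.cos(math.pi * x / 40) ** 2:
            return x

rng = random.Random(42)
hits = [detect_photon(rng) for _ in range(10_000)]

# Any single detection is irreducibly random-- nothing you could
# compute tells you where the *next* photon lands...
print("first three hits:", [round(x, 1) for x in hits[:3]])

# ...but the ensemble is perfectly predictable: hits pile up at the
# bright fringes and avoid the dark ones.
central = sum(1 for x in hits if abs(x) < 10)    # bright fringe
dark = sum(1 for x in hits if 15 < abs(x) < 25)  # dark fringe
print("near central maximum:", central, "near first minimum:", dark)
```

The point of the sketch: unpredictability at the level of the individual event coexists with lawful statistics at the level of the pattern.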

This isn’t to say that I think free will is somehow saved by or depends on quantum randomness.  I don’t see why it would.  It just means that the problem people are worried about– that brain state X determines that mind state Y will happen– is not really there.  And it makes nonsense of hand-wringing about whether you could have done differently based on repeating that brain state.  Dennett argues that people are unnecessarily scrupulous about this question– all you need is the assurance that in similar brain states X′, X″, X‴, etc., some of them lead to pizza and some don’t.  But I think that since determinism is wrong, this way of looking at the problem is simply useless.
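Dennett’s “similar brain states” point can be put in toy-model terms.  Here’s a sketch of my own– the state variables, threshold, and noise are all invented– where a cluster of nearby states, plus a dash of intrinsic noise, splits between pizza and no pizza.

```python
import random

def decides_pizza(hunger, mood, rng):
    """Toy 'decision': a threshold on two state variables, plus
    intrinsic noise standing in for low-level indeterminacy.
    Every number here is made up for illustration."""
    score = 0.6 * hunger + 0.4 * mood + rng.gauss(0, 0.05)
    return score > 0.5

rng = random.Random(0)

# A cluster of similar states X', X'', X''' ... all close to the same
# point, none exactly identical:
similar_states = [(0.5 + rng.uniform(-0.02, 0.02),
                   0.5 + rng.uniform(-0.02, 0.02))
                  for _ in range(100)]
outcomes = [decides_pizza(h, m, rng) for h, m in similar_states]

# Some of the similar states lead to pizza and some don't:
print(sum(outcomes), "of", len(outcomes), "similar states order pizza")
```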

Now, for many people, the real point is that they think you’re unfree because something in your brain determines everything you do.  Something besides ‘you’, they mean.

In a sense, they’re completely right.  For instance: I wrote a novel!  Or did I?  Depends on what ‘I’ refers to.  It certainly wasn’t someone else; it came out of my personal brain.  But if ‘I’ refers to my conscious mind– well, I feel like I wrote it, but most of it was put together, I know not how, by my subconscious.  I like David Eagleman’s metaphor of consciousness as a lousy CEO who habitually takes credit for his underlings’ accomplishments.

When you start looking at the brain, you start finding disturbing things.  E.g. if you ask people to move their arms at a moment of their own choice, the impulses to move the arm start as much as a second before the moment they tell you they decided to move it.  No wonder brain scientists, like Eagleman, tend to want to throw out free will, and often consciousness with it.

The problem I have with this position is that people are fatally vague over what kind of causation they’re talking about, and what level they want to describe actions at.  They seem to want to treat the mind as a physics problem.  It’s not a physics problem.  You will never explain your decision to order a pizza in terms of electrons and quarks.  Nor atoms and molecules.  Nor neurons and neurotransmitters (which I assume is what they mean by “brain states”).

Reductionism is basic to science, but it does not consist of explaining everything in terms of quantum mechanics.  A few things can be explained that way, but most things– evolution, plate tectonics, language, Keynesian economics, the fall of Rome– cannot.  These need to be explained at a higher level of abstraction, even in a reductionist, non-dualist, pseudo-deterministic universe.

This may be easier to see with computer programs. Computers actually work with voltage differences and vast arrays of tiny semiconductors.  This is of approximately zero use in understanding a program like Eliza, or Deep Blue, or Facebook.  Actual programming is mostly done at the level of algorithms, with forays downward into code optimization and upward into abstract data structures.
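To see what “the level of algorithms” means, here’s a standard example (mine, not from any of the books discussed): a binary search.  Everything explanatory about it lives in the invariant, not in the transistors that happen to execute the comparisons.

```python
def binary_search(sorted_items, target):
    """Return an index of target in sorted_items, or -1 if absent.
    The explanation lives at the level of the invariant-- if target
    is present at all, it lies within [lo, hi]-- not at the level
    of voltage differences in the hardware running this."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # → 3
```

A gate-level trace of this running would be astronomically long and would tell you nothing about why it works; the algorithm-level description fits in a docstring.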

What level do we describe human actions at?  We don’t know, and that’s the problem.  Again, I’ll guarantee you that it isn’t at the level of individual neurons– we have tens of billions of them; explaining the mind with neurons would be like explaining a computer program with semiconductors.

Of course, the subjective picture we sometimes have– that ‘I’ am a single thing, an agent– is wrong too.  We even recognize this in common speech, using metaphors of the mind as a congress of sub-personalities– Part of me wants pizza and part of me wants gyros; I’m torn about this proposal; his id is stronger than his superego; she’s dominated by greed.

With the computer, we can precisely identify and follow the algorithms.  With the brain, we only have vague guidance upward from neurology, and even vaguer (and highly suspect) notions downward from introspection.  We don’t know the right actors and agents that make up our minds; it’s quite premature to declare either that we have “free will” or that we don’t.

For what it’s worth, my opinion is that our consciousness is pretty much what it seems like it is: an evolved brain function that is exposed to a wide range of brain inputs (internal and external) and uses them to make executive decisions.  This is something like Dennett’s view in Consciousness Explained.

Ironically, since computers are a favorite metaphor for philosophers, the brain is a pretty bad computer.  Brains never evolved the simple, generalizable, fast arithmetic and logic units that computers have.  One purpose of consciousness might be to supply a replacement: language allows us to write algorithms to affect ourselves and those around us.

However, the real takeaway here should be to ask yourself, if you don’t believe in free will, what you think you’re missing.  All too often it turns out to be something we don’t really need: a dualistic Cartesian observer; an agent that acts with pure randomness; an agent whose behavior is determined by impossible replications of brain state; an agent that suffers no causation at all.