

First of all, I recognize completely how ironic it is that I ask you this a few months after I asked you about the risk that the world might be destroyed. That said…

There seems to be an idea among right-wingers that usually doesn’t get stated directly, probably because it is so unattractive, but that seems to play an important role in the attitudes of many of them. It’s the idea that life needs to suck, at least to some extent, in order to motivate people to achieve things. 

Now, what if that idea is true? It won’t help much to point out that those on the Right who hold that idea are often hypocrites who don’t want their own lives to suck – after all, the statement “murder is bad” is true even if it is said by a murderer.

It does seem to be true, after all, that in wealthy countries with halfway functioning social safety nets, the really unpleasant jobs are usually done by recent migrants from poorer countries without functioning social safety nets. You yourself have pointed out that historically many sons of kings were pretty worthless. And on a personal note, I was raised in the late 20th century in one of the world’s wealthier countries, and I could never imagine myself doing the regular work of, for instance, an average present-day Chinese factory worker.

Saying that similar complaints were heard in earlier times won’t help much, either – as the above examples show, arguably those “warnings” have “come true”. So, what would happen if all countries in the world ended up relatively wealthy? Where would the migrants to do the really unpleasant jobs come from, then?

—Raphael

First, you’re not the only one to have believed that conservatives want the world to suck. George Lakoff covers this in depth in Moral Politics. Describing the conservative worldview: “The world is a dangerous place. Survival is a major concern and there are dangers and evils lurking everywhere, especially in the human soul.” Strict moral discipline (he continues) is required to survive, and harsh punishment is valuable. Without struggle, “there is no source of reward for self-discipline, no motivation to become the right kind of person.” (His book was from 1996; here are his more up-to-date thoughts on the election.)

Now, this is essentially a millennia-old response to the problem of evil. I discussed it in the context of the Incatena here, stating it as a problem for the social planner and for God. To put it as convincingly as possible: people who get everything they want and more get spoiled. They may be vaguely benevolent, but have little empathy and no idea of sacrifice or heroism. Those who have overcome suffering are not only stronger but have a better moral character. We might well worry that if everyone could live like the children of the super-rich, they would be either weak nothings (Wells’s Eloi) or hedonistic simpletons (Huxley’s Brave New World).

There is, by the way, a left-wing version of this view. The communists, especially the ones who actually organized factory labor or peasants, liked to paint the socialists and democrats as soft and weak, and turned “bourgeois” into a slur. This was taken to an extreme by Maoism, which was forged in the ordeal of the Long March, and cheerfully sent millions of students to labor in the fields. (There’s also a much weaker, but much more widespread, view that people should live in rural communes or something.)

You’re right that it’s not a complete answer to say that those who advocate this worldview don’t want it for themselves or their children. But it is a partial answer. This worldview is congenial to the powerful— it justifies permanent injustice and absolves them of any need to ameliorate it. That’s a strong reason to distrust it.

Not coincidentally, the suffering-is-good view primarily targets the poor, women, and religious or sexual minorities.  If suffering is good, shouldn’t its advocates want it to be equally distributed? And if suffering produces good moral character, isn’t it curious that the advocates believe that they, the non-suffering, are the moral ones? Shouldn’t those who suffer the most be the most moral?

But we can also attack the claim directly. Suffering doesn’t build character.  Suffering just makes people miserable. When we don’t have an ideology that makes us sympathize with the oppressors, we see this clearly: Mao, for instance, twice destroyed the prosperity of his own revolution, killed millions of people, and wasted the lives of an entire generation.

Plus, though it’s an old moral lesson that hedonism is bad for you, it’s an even older and more basic moral lesson that participating in injustice is wrong. Even if it’s morally uplifting to get robbed, that hardly means that a moral person should be a robber. The world is a dangerous place, but a policy of adding to its dangers makes someone not a moral paragon but a sociopath.

It’s hard to deny that life for most people, not just in the global North, is better than it was a thousand years ago. Premodern agricultural kingdoms really did suck for 90% of the population. Even the strictest conservative doesn’t exactly want to bring back slavery, trial by ordeal, the Black Plague, nomad invasions, foot-binding, and the constant warfare and cruelty favored by kings. (If you’re dealing with a Christian conservative, ask them if they think Jesus should have left the world in paganism.)

But if you’ve conceded that some suffering should be eliminated, you can hardly object to removing more suffering, except by offering a further and better argument. If ending slavery was good, why not eliminate racism too? In practical terms the argument is really not “all suffering is good”, but “the suffering that generally existed in my childhood is the right amount of suffering”.  That could be the case, but such amazing temporal coincidences are not very convincing.

Also, whether or not suffering has good moral effects, we’re really not on the verge of a great suffering shortage. There’s still plenty to go around. The 21st century is going to be challenging, not least because there is, oh, the prospect of total ecological collapse. So there is really no need to increase local suffering by, say, removing everyone’s health insurance.

But there is a conworlding exercise here, and I’ll take the bait and consider it. If we could solve our ecological problems and the right wing totally imploded, we could create a world that is both prosperous and egalitarian. Should we worry about people becoming spoiled?

As Lakoff would say, this is in part a framing problem. If we’re creating an ideal society, of course we don’t want “spoiled” people. As progressives, we want people to be nurturing and empathetic instead. If they’re not, we didn’t design very well. But it begs the question to suggest that the design solution is “more suffering”. Suffering isn’t the best way of producing empathy anyway; better to model it and teach it directly.

A deeper answer: as people move up Maslow’s hierarchy of needs, they develop new and different concerns and disputes. Are Germans of 2016 “more spoiled” than those of 1016? They’re far richer, but surely we couldn’t say that they’re all spoiled like rich children. If anything, a certain level of material ease facilitates spirituality: you can read, meditate, study, give to the poor. In most religious traditions, a simple lifestyle is a virtue— but being born to it is generally not enough. Being a wandering monk is a choice and meritorious; being a wandering beggar is generally neither.

We can call the average German of 2016 “rich” compared to the one from 1016, but that hardly means that she thinks or acts like a rich person of 1016. If our civilization survives until 3016 and attains a general prosperity, the people of 3016 will be “rich” by our standards, but not by their own, and there’s no particular reason to assume that they will act like today’s rich people (or their spoiled children).

As for unpleasant jobs, I don’t see that as an unsolvable problem. In general, tedious jobs are also the ripest for automation. In advanced countries 99% of people don’t work in the fields. But those who really like that kind of lifestyle can still choose it.

This paragraph is amazing:

Once upon a time there was a monk who was inclined to imagine things rather a lot. One day, he happened to imagine a man named Jivata, who drank too much and fell into a heavy sleep.  As Jivata dreamt, he saw a Brahmin who read all day long. One day, that Brahmin fell asleep, and as his daily activities were still alive within him, like a tree inside a seed, he dreamt that he was a prince. One day that prince fell asleep after a heavy meal, and dreamt that he was a great king. One day that king fell asleep, having gorged himself on his every desire, and in his dream he saw himself as a celestial woman. The woman fell into a deep sleep in the languor that followed making love, and she saw herself as a doe with darting eyes. That doe one day fell asleep and dreamed that she was a clinging vine, because she had been accustomed to eating vines; for animals dream too, and they always remember what they have seen and heard.

This is from the Yogavasishtha, written sometime between the 10th and 12th centuries; the translation is by Wendy Doniger in On Hinduism.

Where do you go after a paragraph like that?  Anywhere you like.  But here’s how it goes.

The vine saw herself as a bee that used to buzz among the vines; the bee fell in love with a lotus and was so intoxicated by the lotus sap he drank that his wits became numb; just then an elephant came to that pond and trampled the lotus, and the bee, still attached to the lotus, was crushed with it on the elephant’s tusk. As the bee looked at the elephant, he saw himself as an elephant in rut. That elephant in rut fell into a deep pit and became the favorite elephant of a king. One day the elephant was cut to pieces by a sword in battle, and as he went to his final resting place he saw a swarm of bees hovering over the sweet ichor that oozed from his temples, and so the elephant became a bee again. The bee returned to the lotus pond and was trampled under the feet of another elephant, and just then he noticed a goose beside him in the pond, and so he became a goose. That goose moved through other births, other wombs, for a long time; until one day, when he was a goose in a flock of other geese, he realized that, being a goose, he was the same as the swan of the Creator. Just as he had this thought, he was shot by a hunter and he died, and then he was born as the swan of the Creator.

One day the swan saw Rudra and thought, with sudden certainty, “I am Rudra.” Immediately that idea was reflected like an image in a mirror, and he took on the form of Rudra. Then he could see all of his former experiences, and he understood them: “Because Jivata admired Brahmins, he saw himself as a Brahmin; and since the Brahmin had thought about princes all the time, he became a prince. And that fickle woman was so jealous of the beautiful eyes of a doe that she became a doe… These creatures are my own rebirths.” And, after awhile, the monk and Jivata and all the others will wear out their bodies and will unite in the world of Rudra.

(Rudra is better known as Shiva; in this tradition, he is the supreme god.)

So the interlocking dreams turn into a transference of souls just by imagination, and then into the cycle of rebirth.  And it ends up as a playful, vivid demonstration of the idea of pantheism– we’re all forms of Shiva, but just don’t realize it.

Still, it’s the little details that create the intense dreaminess of the passage: Jivata’s drunken stupor, the celestial woman making love, the bee’s infatuation with lotus sap. (As Doniger points out, the common element running through the dream is desire.)


First, read this neat article on “cyranoids”.  The semi-stupid name is based on the play Cyrano de Bergerac; the title character provides the words to woo a woman, Roxane, on behalf of an inarticulate friend. This does not work out well.

(Linguistic note: Roxane is one of the few names we borrow from Ancient Persian; Rokhsāna was the Persian wife of Alexander the Great.)

In the contemporary experiment, it works great. Subjects are introduced to a 12-year-old boy and encouraged to talk to him; in fact all his words are provided by a 37-year-old professor via a radio receiver in his ear.  People didn’t suspect, despite the boy’s evident deep knowledge of European politics and Dostoevsky.  The reverse substitution– the professor being given lines by the 12-year-old– worked just as well.

This is mildly surprising, but as the article notes, we didn’t evolve in a situation where people are being remote-controlled by someone else.

If you want to make a billion dollars, my advice is, monetize this. My prediction is that in a hundred years, or perhaps in the Incatena, this will be commonplace.  Some easy applications:

  • Learning seduction, as in the play. Or salesmanship, or politics, or law– anything that requires verbal eloquence and social skills.
  • Teaching: channel a better teacher, or call on one when you’re stumped.
  • Politics: respond to challenges better than you could with your own brain. Never make gaffes or forget someone’s name!
  • Business deals or ambassadorships: send a human for the face-to-face interaction; control them from the head office during the hard negotiations.
  • Real-life avataring: try out life in a different race or gender.
  • Acting: never forget your lines!
  • Interviewing: send out someone handsomer / prettier (or who merely lives in the area).
  • Confrontations: get expert words when you need to stand up to someone who stresses you out.
  • Police or detective work, or journalism: do routine in-person investigations without people recognizing your face or voice.
  • Management: micromanage your employees’ very words!
  • Sex: imagine the possibilities for role-playing or dominance. Also a nice loophole: swap spouses without physically doing so.

The obvious difficulty is the pause while the avatar waits for the controller’s words. The Wired article isn’t clear on how this was handled, but there are ways to stall for time imperceptibly; also, perhaps, the controller could go phrase-by-phrase instead of sentence-by-sentence. Possibly, with practice, the avatar could acquire the simultaneous translator’s ability to listen and speak at the same time.
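If you like, here’s a toy sketch of that relay loop in Python. It’s my own illustration, not anything from the article, and the filler phrases stand in for whatever stalling tricks a practiced avatar would actually use:

    fillers = ["Well...", "Hmm.", "Let me put it this way."]

    def relay(stream):
        """Speak the controller's phrases as they arrive; stall when they haven't."""
        spoken, stalls = [], 0
        for item in stream:          # None = the controller is still composing
            if item is None:
                spoken.append(fillers[stalls % len(fillers)])  # stall for time
                stalls += 1
            else:
                spoken.append(item)  # relay the controller's phrase verbatim
        return spoken

    print(relay(["Nice to meet you.", None, "I adore Dostoevsky.", None, None,
                 "Mostly the Russians."]))
    # ['Nice to meet you.', 'Well...', 'I adore Dostoevsky.', 'Hmm.',
    #  'Let me put it this way.', 'Mostly the Russians.']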

The avatar also needs enough acting ability to bring someone else’s words to life. However, this is a lot easier if you’ve just heard someone saying the words in your ear than if you’re working from a written text. (Still, there are people who can hear something and just can’t reproduce the intonation… I recall my high school drama teacher trying to coach a wooden student actor; it was excruciating.)

Would people feel alienated and suspicious if they knew that the people they talk to might be using such services? I don’t think so, any more than we’re weirded out by the fact that small metal devices issue out human-sounding words. If anything, people would probably be surprised if someone– a politician or an interviewee– turned out not to be using an expert in their ear.

More interestingly, it might be that people retreat a bit from our present-day absolute individualism. In ancient times, or in certain other cultures, it was assumed that gods or demons might speak inside your head. (The Romans believed that a spirit called a “genius” dictated ideas to people; we’ve kept the word but absorbed the spirit as part of our notion of the self.) Maybe in such a world, the idea that you had to come up with your own words to speak would seem as strangely burdensome as thinking that everyone had to cook their own meals.

Edit: A Twitter conversation pointed out that I may not have communicated that the idea is kinda creepy. And it is! But then, cell phones can be kinda creepy too (as you may notice if you try to have a RL conversation with someone who can’t stop messing with theirs). I suspect if the option were available, though, it’d be used in some of the ways described above.

I’ve got another lukewarm recommendation for you!  I just finished Steven Pinker’s How the Mind Works. Pinker, like Daniel Dennett, doesn’t lack for ambition. He really wants to tell you how to design a functioning mind, or to be precise, how evolution has put ours together.

[Image: a hand apparently holding a brain-patterned Rubik’s cube]

His focus throughout is on evolution, so a basic constraint is that the components of the mind should have increased reproductive success. Not absolutely– we obviously use our brains in many ways that couldn’t be adaptations.  But it’s a good constraint to have, as it keeps him from accepting simplistic ideas that “intelligence is good” or that evolution is aiming at creating humanoids. (There’s a major caveat here, though: adaptation is only one process in evolution, and you have to make a case that it produced any particular feature. More on this later.)

Does he succeed?  In parts, brilliantly.  The chapter on vision is excellent. He explains exactly why vision is such a hard problem, and how the eyes and brain could put together a view of the world. Cognitive research is frustratingly indirect– we can’t really see how the software runs, so to speak. But we can put a whole bunch of clues together: how the eye works, what goes wrong when the brain is damaged, what constraints are suggested by optical illusions and glitches, how people respond to cleverly designed experiments.

As just one example, it seems that people can rotate visual images, as if they have a cheap, slow 3-D modeling program in their heads– and that this rotation takes time; certain tasks (like identifying whether two pictures depict the same object) take longer depending on the amount of rotation required. But even stranger, it’s found that people don’t just store one view of an object.  They can store several views, and solve rotation problems by rotating the nearest view. This is fascinating precisely because it’s not a solution that most programmers would think of. It makes sense for brains, which basically have huge data stores but limited computational power.
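Here’s how cheap that strategy is, as a minimal Python sketch. The stored orientations and the rotation rate are invented for illustration; the real finding (Shepard and Metzler’s chronometry) is just that response time grows roughly linearly with the rotation angle:

    stored_views = [0, 120, 240]   # orientations (degrees) at which views were memorized

    def match_time(target, degrees_per_second=60):
        """Time to mentally rotate the nearest stored view to the target orientation."""
        def angular_distance(view):
            d = abs(target - view) % 360
            return min(d, 360 - d)
        nearest = min(stored_views, key=angular_distance)
        return angular_distance(nearest) / degrees_per_second

    # With a single stored view at 0, matching a picture at 180 degrees costs 3 seconds;
    # with three stored views, nothing is ever more than 60 degrees away.
    print(match_time(180))   # 1.0

Storing more views trades memory for computation– exactly the trade a brain with huge storage but slow serial processing should make.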

He points out that vision is not only a difficult problem, it’s impossible.  If you see two lines at a diagonal in front of you, there is no way to determine for sure whether they’re really part of a triangle, or parallel lines moving away from you, or a random juxtaposition of two unrelated lines, and so on. The brain solves the impossible problem by making assumptions about the world– e.g. observed patches that move together belong to the same object; surfaces tend to have a uniform color; sudden transitions are probably object boundaries, and so on. It works pretty well out in nature, which is not trying to mislead us, but it’s easy to fool.  (E.g., it sure looks like there’s a hand holding a brain-patterned Rubik’s cube up there, doesn’t it? Surprise, it’s a flat computer screen!)
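One of those assumptions is easy to turn into toy code. Here’s the “patches that move together belong to the same object” heuristic in Python– the patches and the threshold are made up, and real vision works on dense motion fields rather than four labeled patches:

    patches = {
        "A": (5.0, 0.1),    # (dx, dy): how far each patch moved between two frames
        "B": (4.9, 0.0),
        "C": (0.0, 0.0),
        "D": (0.1, -0.1),
    }

    def group_by_common_fate(patches, threshold=0.5):
        """Greedily cluster patches whose motion vectors are nearly identical."""
        groups = []
        for name, (dx, dy) in patches.items():
            for group in groups:
                gx, gy = patches[group[0]]
                if abs(dx - gx) + abs(dy - gy) < threshold:
                    group.append(name)   # moves with this group: call it the same object
                    break
            else:
                groups.append([name])    # moves differently: start a new object
        return groups

    print(group_by_common_fate(patches))   # [['A', 'B'], ['C', 'D']]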

I also like his chapters on folk logic and emotions, largely because he defends both.  It’s easy to show that people aren’t good at book logic, but that’s in part because logicians insist on arguing in a way that’s far removed from primate life. A classic example involves the following query:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. What is the probability that Linda is a bank-teller? What is the probability that Linda is a bank-teller and is active in the feminist movement?

People often estimate that it’s more likely that Linda is a feminist bank teller, than that she’s simply a bank teller. This is wrong, by traditional logic: A ∧ B cannot be more probable than B. But all that really tells us is that our minds resist the nature of Boolean logic, which considers only the form of arguments, not their content. We love content. People’s judgments make narrative sense.  From the description of Linda, it’s clear that she’s a feminist, so a description that incorporates her feminism is more satisfying. In normal life it’s anomalous to include a bunch of information that’s irrelevant to your question.
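For the record, the arithmetic being violated is one line. For any A and B,

    P(A ∧ B) = P(B) · P(A | B) ≤ P(B), since P(A | B) ≤ 1.

With some invented numbers: even if we’re nearly certain Linda is a feminist given that she’s a teller– say P(feminist | teller) = 0.95– then if P(teller) = 0.05, we get P(teller ∧ feminist) = 0.05 · 0.95 = 0.0475, necessarily a bit less than 0.05.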

As for emotions, it’s widely assumed that they’re atavistic, unnecessary, and positively dangerous– an AI is supposed to be emotionless, like Data. Pinker makes a good design case for emotions. In brief, a cognitive being needs something to make it care about doing A rather than B… or doing anything at all. All the better if that something helps it avoid dangers, reproduce itself, form alliances, and detect cheaters.

So, why do I have mixed feelings about the book? A minor problem is breeziness— for instance, Pinker addresses George Lakoff’s category theory in one paragraph, and pretty spectacularly misses Lakoff’s argument. He talks about categories being “idealized”, as if Lakoff had overlooked this little point, rather than discussing it extensively. And he argues that the concept of “mother” is well defined in biology, which completely misses the distinction between ordinary language and technical terms. He similarly takes sides in the debate between Napoleon Chagnon and Marvin Harris, with a mere assertion that Chagnon is right. He rarely pauses to acknowledge that any of his examples are controversial or could be interpreted a different way.

More seriously, he doesn’t give a principled way to tell evolutionary from cultural explanations. This becomes a big problem in his chapter on sex, where he goes all in on evolutionary psychology. EP is fascinating stuff, no doubt about it, and I think a lot of it is true about animals in general. But which parts apply to humans is a much more contentious question.  (For a primer on problems with EP, see Amanda Schaffer’s article here, or P.Z. Myers’s takedown, or his direct attack on Pinker, or Kate Clancy’s methodological critique.) Our nearest relatives are all over the map, sexually: gorillas have harems; chimpanzees have male dominance hierarchies with huge inter-chimp competition for mates (and multiple strategies); bonobos are notorious for female dominance and casual sex.  With such a menu of models, it’s all too easy to project sexist fantasies into the past. Plus, we know far less about our ancestors than we’d like, they lived in incredibly diverse environments, and evolution didn’t stop in 10,000 BC.

Plus there’s immense variety in human societies, which Pinker tends to paper over.  He often mentions a few favorite low-tech societies, but all too often he generalizes from 20C Americans.  E.g. he mentions perky breasts as a signal of female beauty… um, has he ever looked at an Asian girl, or a flapper, or a medieval painting?  Relatedly, Pinker convinces himself that men should be most attracted to a girl who is post-puberty but has never been pregnant, because she’s able to bear more children than an older woman. Kate Clancy’s takedown of hebephilia is relevant here: she points out that girls who bear children too early are likely to have fewer children overall, and that male chimps actually prefer females who have borne a child.

Finally, the last chapter, on art and religion, is just terrible. He actually begins the discussion of art with an attack on the supposed elitism of modern art. It’s like he’s totally forgotten that he’s supposed to be writing about evolution; what the hell does the supposed gulf between “Andrew Lloyd Webber [and] Mozart”, two Western artists separated by an evolutionary eyeblink, have to do with art in general? Couldn’t he at least have told us some anecdotes about Yanomamo art?

As for religion, he describes it as a “desperate measure… inventing ghosts and bribing them for good weather”. Srsly? Ironically, earlier in the book, he emphasizes several times that the facts about the ancestral environment are not normative, that we can criticize them ethically.  Then when it comes to religion he forgets that there’s such a thing as ethics; he just wants to make fun of the savages for their “inventions”. (I could go on all day about this blindness, but as just one point, the vast majority of believers, in any religion, invent nothing.  They accept what they’re told about the world, a strategy that is not exactly foreign to Pinker– why else does he supply a bibliography?)

On the whole, it’s probably a warning to be careful when you’re attempting a synthesis that wanders far outside your field. It might be best to skip the last two chapters, or just be resigned to a lot of eye-rolling.

Daniel Dennett blisteringly reviewed Sam Harris’s Free Will, and that led to an interesting discussion at Mefi.

Does your theory of mind allow you to enjoy this pizza?

I read Dennett’s Elbow Room: The Varieties of Free Will Worth Wanting, which I found a pretty convincing takedown of the objections to free will.  Most of them are based on poor analogies:

  • To be unfree is normally to be under someone else’s control: you are a prisoner, or the subject of a dictator.  Obviously this is a good model if you are in fact a prisoner, but if not, not.  Whatever causes our actions, it isn’t another agent.
  • He talks about a type of wasp (Sphex spp.) which goes through an elaborate procedure to get prey insects to feed its young.  It’s pretty easy to mess with its little mind– e.g. after moving a prey insect into position, it inspects its nest.  If an experimenter moves the insect, the wasp will move it back– but this resets its program; it has to inspect its nest again.  You can keep playing this game indefinitely (there’s a toy version of the loop right after this list).  Dennett suggests that anti-free-will arguments are often aimed at “sphexishness”– we are not the smart adaptable agents we think we are.  Yet it’s clear that we’re far above the wasp’s level.
  • Or: you’re controlled by an inner computer program that will spit out the same results no matter what you do.  But you know, not all programs consist of one invariable line
    10 PRINT "HELLO WORLD"

    Programs can be highly sophisticated and highly responsive to the world.  It’s Searle’s old error.  Computers are dumb and deterministic; computer programs can be smart and unpredictable.
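Here’s that promised toy version of the wasp’s program, in Python– my own reconstruction, not Dennett’s. The whole joke is that the “prey was moved” event unconditionally resets the routine to the inspection step:

    def sphex_routine(experimenter_moves_prey):
        """Run the wasp's provisioning routine; return how many nest inspections it took."""
        inspections = 0
        prey_in_position = False
        while True:
            if not prey_in_position:
                prey_in_position = True    # drag the prey into position at the burrow
            inspections += 1               # inspect the nest, as the program demands
            if experimenter_moves_prey():
                prey_in_position = False   # prey displaced: the program resets
            else:
                return inspections         # all clear: drag the prey in, done

    # Move the prey three times, then stop meddling: four inspections total.
    meddling = iter([True, True, True, False])
    print(sphex_routine(lambda: next(meddling)))   # 4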

The way I’d put it is: if you want a pizza tonight, you can have one.  Well, something external might stop you– you’re out of money, you’re in a space station, your friends hate pizza.  But when nothing external stops you from having that pizza, you can totally have it.  That’s the only variety of free will you need.

Dennett is a “compatibilist”, meaning that he thinks determinism and free will are compatible.  I’m not, but only because determinism is wrong. For nearly a century, we’ve known that the world is non-deterministic; deal with it.  Try a two-slit experiment and predict where you’ll detect a given photon– it can’t be done.  There was a hope that “hidden variables” would restore determinism, but they don’t work.  And “many worlds” doesn’t help either– the “many worlds” don’t let you predict where the photon will be detected.
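To put the two-slit point in symbols (this is the standard textbook idealization, nothing original): the amplitudes from the two slits add, and only the resulting distribution is predictable.

    P(x) = |ψ₁(x) + ψ₂(x)|² = |ψ₁(x)|² + |ψ₂(x)|² + 2 Re[ψ₁(x) ψ₂*(x)]

The cross term paints the interference fringes. The theory nails down P(x) to as many decimal places as you like; what it cannot do, even in principle, is say where the next individual photon will land.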

This isn’t to say that I think free will is somehow saved by or depends on quantum randomness.  I don’t see why it would.  It just means that the problem people are worried about– that brain state X determines that mind state Y will happen– is not really there.  And it makes nonsense of hand-wringing about whether you could have done differently if that brain state were repeated.  Dennett argues that people are unnecessarily scrupulous about this question– all you need is the assurance that among similar brain states X′, X″, X‴, etc., some of them lead to pizza and some don’t.  But I think that since determinism is wrong, this way of looking at the problem is simply useless.

Now, for many people, the real point is that they think you’re unfree because something in your brain determines everything you do.  Something besides ‘you’, they mean.

In a sense, they’re completely right.  For instance: I wrote a novel!  Or did I?  Depends on what ‘I’ refers to.  It certainly wasn’t someone else; it came out of my personal brain.  But if ‘I’ refers to my conscious mind– well, I feel like I wrote it, but most of it was put together, I know not how, by my subconscious.  I like David Eagleman’s metaphor of consciousness as a lousy CEO who habitually takes credit for his underlings’ accomplishments.

When you start looking at the brain, you start finding disturbing things.  E.g. if you ask people to move their arms at a moment of their own choice, the impulses to move the arm start as much as a second before the moment they tell you they decided to move it.  No wonder brain scientists, like Eagleman, tend to want to throw out free will, and often consciousness with it.

The problem I have with this position is that people are fatally vague about what kind of causation they’re talking about, and what level they want to describe actions at.  They seem to want to treat the mind as a physics problem.  It’s not a physics problem.  You will never explain your decision to order a pizza in terms of electrons and quarks.  Nor atoms and molecules.  Nor neurons and neurotransmitters (which I assume is what they mean by “brain states”).

Reductionism is basic to science, but it does not consist of explaining everything in terms of quantum mechanics.  A few things can be explained that way, but most things– evolution, plate tectonics, language, Keynesian economics, the fall of Rome– cannot.  These need to be explained at a higher level of abstraction, even in a reductionist, non-dualist, pseudo-deterministic universe.

This may be easier to see with computer programs. Computers actually work with voltage differences and vast arrays of tiny semiconductors.  This is of approximately zero use in understanding a program like Eliza, or Deep Blue, or Facebook.  Actual programming is mostly done at the level of algorithms, with forays downward into code optimization and upward into abstract data structures.

What level do we describe human actions at?  We don’t know, and that’s the problem.  Again, I’ll guarantee you that it isn’t at the level of individual neurons– we have tens of billions of them; explaining the mind with neurons would be like explaining a computer program with semiconductors.

Of course, the subjective picture we sometimes have– that ‘I’ am a single thing, an agent– is wrong too.  We even recognize this in common speech, using metaphors of the mind as a congress of sub-personalities– Part of me wants pizza and part of me wants gyros; I’m torn about this proposal; his id is stronger than his superego; she’s dominated by greed.

With the computer, we can precisely identify and follow the algorithms.  With the brain, we only have vague guidance upward from neurology, and even vaguer (and highly suspect) notions downward from introspection.  We don’t know the right actors and agents that make up our minds; it’s quite premature to decide that we know or don’t know that we have “free will”.

For what it’s worth, my opinion is that our consciousness is pretty much what it seems like it is: an evolved brain function that is exposed to a wide range of brain inputs (internal and external) and uses them to make executive decisions.  This is something like Dennett’s view in Consciousness Explained.

Ironically, since computers are a favorite metaphor for philosophers, the brain is a pretty bad computer.  Brains neglected to evolve the simple, generalizable, fast arithmetic and logic units that computers have.  One purpose of consciousness might be to supply a replacement: language allows us to write algorithms to affect ourselves and those around us.

However, the real takeaway here should be to ask yourself, if you don’t believe in free will, what you think you’re missing.  All too often it turns out to be something we don’t really need: a dualistic Cartesian observer; an agent that acts with pure randomness; an agent whose behavior is determined by impossible replications of brain state; an agent that suffers no causation at all.