Parade o’ books

Any of these books deserves a full review, with neat facts plucked from the pages to entice you– but at this point, that would require a lot of re-reading. So a quick survey will have to do.

Emily Willingham, Phallacy: Life Lessons from the Animal Penis (2020). Yep, a book about the penis in all its forms in the animal kingdom. Willingham has a serious point here: researchers and outsiders often import archaic attitudes into biology, getting the penis wrong and forgetting the vagina. But it’s also both educational and entertaining to simply look at the weird stuff animals get up to. A good place to start is trying to figure out what is a penis and what isn’t… there are some wacky edge cases, such as at least one invertebrate which inserts its eggs into the male with a copulatory organ. Or there are the spiders that lose their penises when they copulate. It’s not that bad: they have two.

This is one of a number of books by women that offer a lighthearted critique of misguided male scientists, who are often eager to push an idea of aggressive promiscuous males and picky, passive females. Oh, there is so much more variation than that. Others in this genre include Olivia Judson’s Dr. Tatiana’s Sex Advice to All Creation, Meredith Small’s What’s Love Got to Do with It?, and Natalie Angier’s Woman: An Intimate Geography. Nature is weird, and does not inherently support alt-right prejudices.

Benjamin Brose, Xuanzang: China’s Legendary Pilgrim and Translator (2021). If you read my China Construction Kit, you’ll remember Xuanzang, the Chinese Buddhist monk who took an arduous trip to India in the 600s to understand Buddhism better, coming back 16 years later with hundreds of precious manuscripts. This story is the key to the classic Chinese novel, Journey to the West. But the real story behind it is just as interesting, though perhaps it’s disappointing to learn that only the first couple weeks of the journey were perilous, as he set off alone. As soon as he reached the first stop, he met the local king, who received him graciously and sent him on to the next local ruler, and so on for years. Brose explains what Xuanzang wanted to know and how he affected Buddhism, and includes several narrative passages from the man himself.

Andrew Gordon, A Modern History of Japan: From Tokugawa Times to the Present (2003). I read this because I thought I could borrow some modern Japanese history for Almea, and I did. The book covers roughly 400 years, which allows quite a lot of detail but not exactly depth– e.g. WWII is covered in just one chapter. The chapters on the Meiji period are the most interesting. I was most interested to understand how Japan could modernize when China didn’t (until Deng).

The Meiji ‘restoration’ was more or less a top-down revolution: two of the most advanced daimyo (nobles) took over militarily. Or more broadly, the revolution empowered two classes that were near but, crucially, not at the top: the samurai, and the nouveau-riche rural elite, who had worked their way up from peasants to craftsmen to notables in the last century or so. (A peculiarity of Japan was that the prosperous bourgeois class in the 1800s was not in the big cities but in small rural towns.) And in Japan, that was enough to get things going; whereas in China merely getting rid of the Manchu did not give power to any more modern or modernizing class.

Another fascinating tidbit: Japan’s 1889 constitution, which lasted till the end of WWII, produced a lot more democracy than its writers expected or wanted. The winners of the revolution really only wanted to stay on as the new rulers. They made sure that the new Diet did not control the army, or even really the ministries. They also limited suffrage, in hopes that the members would be well-off and conservative. They only allowed the Diet at all because people were already writing constitutions and hoping for democracy, and they thought they’d better get their own version out fast. But the very existence of the Diet, and national propaganda for building the nation, encouraged national debate, expectations that the Diet would matter, and expectations that the Japanese people should all benefit from modernization. The constitution allowed the elite to govern without the Diet, but in practice (and until the 1930s) power was essentially shared between the army, the bureaucrats, and the parties.

Paul Lockhart, Firepower: How Weapons Shaped Warfare (2021). If your conworld gets at all beyond the medieval period, you should read this or something like it. It’s about guns, including their big brothers, artillery and cannons. I’m still in the middle of it, but one of the main takeaways is that like most technology, it’s a matter of small but constant improvements– and ongoing challenges. E.g. I knew that rifling was important: if you cut a spiral groove in the barrel of a gun and make bullets engage it, they get a spin that makes them far more accurate and deadly. This was known from the 15th century, so why didn’t it take over till the 1800s? Well, because firing a gun (especially with black powder) produces residues that clog the interior. You can’t fire too many shots before the balls don’t fit– unlike a musket, which has more leeway. Good rifles had to wait till the ball was replaced with the bullet, and rifles had mechanisms to deform the bullet to force it into the rifling. Another example: breech loading is far more efficient than ramming shot in through the barrel. This too was known early on, but didn’t entirely take over till the late 1800s. Here too there were just many little technical problems to overcome: early breech loaders had a tendency to blow up, or leak hot gases.

Another takeaway: any old empire could afford muskets and cannons. But as the technology developed, only great powers could afford the newest guns– and they had to acquire them (and in enormous quantities) at any cost, because falling behind in the arms race was devastating. When explosive shells were developed that set wooden ships on fire– well, everyone had to shift to ironclads if they could. It’s no coincidence that nearly-free nobles were subjugated to kings, and smaller states became the prey of great powers. Even in the 1800s, the hot new tech might only last for a couple of decades.

A People’s History of Science

I just read this book, by Clifford D. Conner. Er, is it clear that the title of the book is the title of the post? If not, it’s called A People’s History of Science. Glad we could clear that up.

Anyway, the thesis of the book is that science, both theoretical and practical, though it owes much to the various geniuses everyone emphasizes, also owes much to a usually unknown army of craftsmen, assistants, and ordinary people.

I have mixed feelings about the book. Not because he doesn’t prove his thesis– he does, and there’s a lot to learn here whether you’re interested in the history of science/technology, or in conworlding. But he can’t resist polemic, and those parts are tedious.

So, when he sticks to his subject, it’s a great book. He has fascinating sections on the Polynesian navigators, on knowledge of plants from around the world, on the practical knowledge of miners, instrument makers, craftsmen, and midwives. It’s full of things I didn’t know, such as:

  • Portugal’s Prince Henry is famous for encouraging navigation; what’s less known is that his captains would kidnap Africans and learn the local sea routes and trade opportunities from them.
  • Similarly, when American colonists wanted to grow rice, which requires deep knowledge of wet-field cultivation, they stole the expertise by buying African slaves who knew how to do it. Conner finds newspaper ads from the 1700s which touted the availability of slaves who had “knowledge of rice culture.”
  • The famous Dutch microscopist van Leeuwenhoek was not a professional scientist but a draper. He was originally interested in lensmaking in order to get a close-up view of his fabrics.
  • The man who won the British contest for a way to accurately determine longitude was John Harrison, a carpenter who had taught himself watchmaking.
  • The invention of printing was followed by an explosion of practical manuals, written by and for craftsmen. Smart savants read these books, or talked to craftsmen.
  • We tend to think of painters and architects as an elevated class– Artists– but traditionally they were considered barely-respectable craftsmen. Botticelli, Leonardo da Vinci, and Brunelleschi were all apprenticed to goldsmiths, and Vasari described Michelangelo as “the wisest of all the craftsmen.”
  • Benjamin Franklin published the first chart of the Gulf Stream, which could shave two weeks off the trip across the Atlantic; he himself acknowledges that it was dictated to him by a whaleboat captain, who was his cousin.
  • One 19C account of the steam engine comments, “There is no machine or mechanism in which the little the theorists have done is more useless. It arose, was improved and perfected by working mechanics– and by them only.”

Conner quotes plenty of old-fashioned histories which exalt solitary thinkers and theorists, from Aristotle to Aquinas to Newton to Einstein, trumpet the ancient Greeks as if they invented inquiry and theory, and explicitly downplay craftsmen and practical workers. These ideas are easily demolished by quoting the very Greeks and Renaissance savants they extol, who praise (though they don’t often name) the practical workers their work depended on. The Greeks themselves tell us that they got much of their knowledge from Egypt and Mesopotamia.

The polemic sections, as I said, get tedious. The last few chapters in particular are a slog, as Conner mostly forgets his subject and indulges in a slapdash tour of modern capitalism and its disasters (though for balance he also condemns Stalin). A typical bit is the invocation of the Bhopal gas leak; his one-paragraph discussion has nothing to do with his main thesis and tells us nothing new.

A minor but annoying cavil: being anti-establishment in so many ways sometimes leads Conner into a defense of quack ideas. E.g. there’s a sympathetic discussion of Mesmer’s “animal magnetism.” He approvingly quotes a contemporary who accepted that Mesmer could cure “blindness, deafness, wounds, or local paralysis”, and Conner suggests that the dismissal of Mesmer by the savants was a “monumental missed opportunity for the… advancement of science.” Now, the history of science is largely the history of wrong ideas, and wrong theories are a necessary step toward better theories; but just because “the authorities” condemn a particular set of ideas doesn’t mean that those ideas need rehabilitation. Conner seems to like Mesmer because he railed against the Academy, but describing Mesmerism as “people’s science” is a stretch– as Conner himself notes, Mesmer was supported by a rich banker, and did his best to appeal to high society.

Finally, though there is some recognition of Polynesians, Chinese, and Babylonians here, the book as a whole is extremely Western-oriented. As a nonfiction writer myself, I very much understand the problem of research load. But though he insists on the debt the Greeks owed to Babylonia and invokes Joseph Needham, the Babylonians don’t rate a chapter, nor do the Chinese or Arabs.

Pinker and the Brain

I’ve got another lukewarm recommendation for you!  I just finished Steven Pinker’s How the Mind Works. Pinker, like Daniel Dennett, doesn’t lack for ambition. He really wants to tell you how to design a functioning mind, or to be precise, how evolution has put ours together.


His focus throughout is on evolution, so a basic constraint is that the components of the mind should have increased reproductive success. Not absolutely– we obviously use our brains in many ways that couldn’t be adaptations.  But it’s a good constraint to have, as it keeps him from accepting simplistic ideas that “intelligence is good” or that evolution is aiming at creating humanoids. (There’s a major caveat here, though: adaptation is only one process in evolution, and you have to make a case that it produced any particular feature. More on this later.)

Does he succeed?  In parts, brilliantly.  The chapter on vision is excellent. He explains exactly why vision is such a hard problem, and how the eyes and brain could put together a view of the world. Cognitive research is frustratingly indirect– we can’t really see how the software runs, so to speak. But we can put a whole bunch of clues together: how the eye works, what goes wrong when the brain is damaged, what constraints are suggested by optical illusions and glitches, how people respond to cleverly designed experiments.

As just one example, it seems that people can rotate visual images, as if they have a cheap, slow 3-D modeling program in their heads– and that this rotation takes time; certain tasks (like identifying whether two pictures depict the same object) take longer depending on the amount of rotation required. But even stranger, it’s found that people don’t just store one view of an object.  They can store several views, and solve rotation problems by rotating the nearest view. This is fascinating precisely because it’s not a solution that most programmers would think of. It makes sense for brains, which basically allow huge data stores but limited computational power.
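The tradeoff described above– big memory, limited computation– can be sketched in a few lines of Python. This is a toy model, not anything from the book: the stored view angles and the mental “rotation speed” are invented for illustration.

```python
# Sketch: recognition time grows with the rotation needed to match
# the nearest stored view. Angles and speed are made-up numbers.

def rotation_needed(target, stored_views):
    """Smallest angular distance from any stored view to the target."""
    def angular_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(angular_dist(target, v) for v in stored_views)

DEG_PER_SECOND = 60.0  # hypothetical mental rotation speed

def recognition_time(target, stored_views):
    return rotation_needed(target, stored_views) / DEG_PER_SECOND

one_view = [0]
many_views = [0, 90, 180, 270]  # more storage, less computation

print(recognition_time(135, one_view))    # must rotate 135 degrees
print(recognition_time(135, many_views))  # only 45 degrees from the nearest view
```

Storing more views shrinks the worst-case rotation, which is exactly the pattern the reaction-time experiments show: a memory-heavy, computation-light solution.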

He points out that vision is not only a difficult problem, it’s impossible.  If you see two lines at a diagonal in front of you, there is no way to determine for sure whether they’re really part of a triangle, or parallel lines moving away from you, or a random juxtaposition of two unrelated lines, and so on. The brain solves the impossible problem by making assumptions about the world– e.g. observed patches that move together belong to the same object; surfaces tend to have a uniform color; sudden transitions are probably object boundaries, and so on. It works pretty well out in nature, which is not trying to mislead us, but it’s easy to fool.  (E.g., it sure looks like there’s a hand holding a brain-patterned Rubik’s cube up there, doesn’t it? Surprise, it’s a flat computer screen!)
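One of those assumptions– patches that move together belong to the same object, the Gestalt “common fate” rule– is simple enough to sketch. The patches, motion vectors, and tolerance below are all invented for illustration.

```python
# Sketch of the "common fate" assumption: patches that move together
# probably belong to the same object. Toy data, invented numbers.

def group_by_motion(patches, tolerance=0.5):
    """Greedily cluster patches whose motion vectors are nearly equal."""
    groups = []
    for pos, (dx, dy) in patches:
        for g in groups:
            gdx, gdy = g["motion"]
            if abs(dx - gdx) <= tolerance and abs(dy - gdy) <= tolerance:
                g["members"].append(pos)
                break
        else:
            groups.append({"motion": (dx, dy), "members": [pos]})
    return groups

# Three patches drifting right together, one sitting still:
patches = [((0, 0), (2.0, 0.0)),
           ((1, 0), (2.1, 0.0)),
           ((2, 0), (1.9, 0.0)),
           ((9, 9), (0.0, 0.0))]

print(len(group_by_motion(patches)))  # 2: one moving object, one background
```

An adversary can of course exploit the same rule– paint matching patches on two separate surfaces and the grouping comes out wrong, which is just what visual illusions do.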

I also like his chapters on folk logic and emotions, largely because he defends both.  It’s easy to show that people aren’t good at book logic, but that’s in part because logicians insist on arguing in a way that’s far removed from primate life. A classic example involves the following query:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. What is the probability that Linda is a bank-teller? What is the probability that Linda is a bank-teller and is active in the feminist movement?

People often estimate that it’s more likely that Linda is a feminist bank teller, than that she’s simply a bank teller. This is wrong, by traditional logic: A ∧ B cannot be more probable than B. But all that really tells us is that our minds resist the nature of Boolean logic, which considers only the form of arguments, not their content. We love content. People’s judgments make narrative sense.  From the description of Linda, it’s clear that she’s a feminist, so a description that incorporates her feminism is more satisfying. In normal life it’s anomalous to include a bunch of information that’s irrelevant to your question.
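The underlying rule is just the product rule of probability: tacking another condition onto a claim can only lower (or keep) its probability. A toy check, with invented numbers:

```python
# Conjunction rule: P(A and B) = P(B) * P(A|B), and P(A|B) <= 1,
# so the conjunction can never be more probable than either conjunct.
# The probabilities here are invented for illustration.

p_teller = 0.05           # P(B): Linda is a bank teller
p_feminist_given = 0.90   # P(A|B): feminist, given that she is a teller

p_both = p_teller * p_feminist_given

assert p_both <= p_teller  # holds for ANY choice of probabilities
print(p_both)              # about 0.045, smaller than 0.05
```

However plausible Linda’s feminism is, it can only scale the probability down; the experiment shows that our judgments track narrative fit, not this rule.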

As for emotions, it’s widely assumed that they’re atavistic, unnecessary, and positively dangerous– an AI is supposed to be emotionless, like Data. Pinker makes a good design case for emotions. In  brief, a cognitive being needs something to make it care about doing A rather than B… or doing anything at all. All the better if that something helps it avoid dangers, reproduce itself, form alliances, and detect cheaters.

So, why do I have mixed feelings about the book? A minor problem is breeziness– for instance, Pinker addresses George Lakoff’s category theory in one paragraph, and pretty spectacularly misses Lakoff’s argument. He talks about categories being “idealized”, as if Lakoff had overlooked this little point, rather than discussing it extensively. And he argues that the concept of “mother” is well defined in biology, which completely misses the distinction between ordinary language and technical terms. He similarly takes sides in the debate between Napoleon Chagnon and Marvin Harris, with a mere assertion that Chagnon is right. He rarely pauses to acknowledge that any of his examples are controversial or could be interpreted a different way.

More seriously, he doesn’t give a principled way to tell evolutionary from cultural explanations. This becomes a big problem in his chapter on sex, where he goes all in on evolutionary psychology. EP is fascinating stuff, no doubt about it, and I think a lot of it is true about animals in general. But which parts apply to humans is a much more contentious question.  (For a primer on problems with EP, see Amanda Schaffer’s article here, or P.Z. Myers’s takedown, or his direct attack on Pinker, or Kate Clancy’s methodological critique.) Our nearest relatives are all over the map, sexually: gorillas have harems, chimpanzees have male dominance hierarchies with huge inter-chimp competition for mates (and multiple strategies); bonobos are notorious for female dominance and casual sex.  With such a menu of models, it’s all too easy to project sexist fantasies into the past. Plus, we know far less about our ancestors than we’d like, they lived in incredibly diverse environments, and evolution didn’t stop in 10,000 BC.

Plus there’s immense variety in human societies, which Pinker tends to paper over.  He often mentions a few favorite low-tech societies, but all too often he generalizes from 20C Americans.  E.g. he mentions perky breasts as a signal of female beauty… um, has he ever looked at an Asian girl, or a flapper, or a medieval painting?  Relatedly, Pinker convinces himself that men should be most attracted to a girl who is post-puberty but has never been pregnant, because she’s able to bear more children than an older woman. Kate Clancy’s takedown of hebephilia is relevant here: she points out that girls who bear children too early are likely to have fewer children overall, and that male chimps actually prefer females who have borne a child.

Finally, the last chapter, on art and religion, is just terrible. He actually begins the discussion of art with an attack on the supposed elitism of modern art. It’s like he’s totally forgotten that he’s supposed to be writing about evolution; what the hell does the supposed gulf between “Andrew Lloyd Webber [and] Mozart”, two Western artists separated by an evolutionary eyeblink, have to do with art in general? Couldn’t he at least have told us some anecdotes about Yanomamo art?

As for religion, he describes it as a “desperate measure… inventing ghosts and bribing them for good weather”. Srsly? Ironically, earlier in the book, he emphasizes several times that the facts about the ancestral environment are not normative, that we can criticize them ethically.  Then when it comes to religion he forgets that there’s such a thing as ethics; he just wants to make fun of the savages for their “inventions”. (I could go on all day about this blindness, but as just one point, the vast majority of believers, in any religion, invent nothing.  They accept what they’re told about the world, a strategy that is not exactly foreign to Pinker– why else does he supply a bibliography?)

On the whole, it’s probably a warning to be careful when you’re attempting a synthesis that wanders far outside your field. It might be best to skip the last two chapters, or just be resigned to a lot of eye-rolling.


I just finished Paleofantasy, by the biologist Marlene Zuk; it’s largely a response to notions that we made a wrong turn with agriculture and cities, and should head back to the savanna, or perhaps the trees.

Which was a mistake?

The main objection is that attempts to come up with a “paleo” diet, or exercise regimen, or childrearing method, or sex roles, are generally bullshit: highly speculative at best, completely made up at worst. More specifically:

  • We aren’t cavemen, because we haven’t stopped evolving. Genetic changes like widespread lactose tolerance, or the adaptation of Tibetans to high altitudes, have occurred in historical times. Adaptation to disease happens even quicker. It’s just not true that the 10,000 years since the evolution of agriculture is too soon to adapt to our changed diet.
  • The idea that early hominins were perfectly adapted to their environment, with everyone who came after being disastrously out of place, is a misunderstanding of evolution. Evolution is not goal-directed, and animals are never perfect. They’re always a genetic mish-mash, just good enough to have survived, always subject to tradeoffs and sudden environmental changes.
  • We just don’t know exactly how early hominins ate and lived. The fossil record is scanty; our ape relatives live in very different ways; modern hunter-gatherers are themselves varied, and not necessarily representative of their ancestors.
  • One thing we do know: they lived in a wide variety of habitats and climates, from African savannas to Mediterranean shores to Ice Age caves. They didn’t all have the same diets, or tools, or cultures; there was no single paleo lifestyle.
  • Some of the specific ideas of paleo enthusiasts are almost certainly wrong. E.g. there’s good evidence that Neanderthals were grinding grain 30,000 years ago. A high-meat diet may only have become possible with the invention of ranged weapons, at about the same time. Some paleo fans claim that early hominins rarely ran; in fact one of the things humans are extremely good at is long-distance endurance running… we can run most animals down, including deer and horses.

(In case you didn’t get the memo, humans and their non-ape ancestors are now grouped together as hominins; the older term hominid now covers the chimps, bonobos, gorillas, and orangutans as well.)

If you like a knock-out blow, Zuk rarely provides one– the usual problem isn’t that paleo fantasies are contradicted by science, but that they’re poorly supported.  However, Zuk reviews the wide range of evidence that’s becoming available, from DNA analysis to ongoing evolutionary studies to finding food traces in Neanderthal teeth.

Another recent read, Chip Walter’s Last Ape Standing, is even more of a buzzkill. He presents life on the savanna as difficult: scant resources and plenty of competition. Some human features such as neoteny may be an adaptation to bad times– our infants are born prematurely, with a rapidly expanding brain, and thus can more quickly adapt to new or changed conditions.  There’s also evidence that our species passed through a genetic bottleneck– compared with other species, we’re remarkably uniform, which could have happened if our total numbers dropped to 10,000 or so. The ancestral environment might not have been all that idyllic.

None of this, of course, means that you should stay on the couch, or eat loads of donuts and fries. We definitely have an unhealthy lifestyle; but the solution is to get more active, not to get more Australopithecine.


Free will and its discontents

Daniel Dennett blisteringly reviewed Sam Harris’s Free Will, and that led to an interesting discussion at Mefi.

Does your theory of mind allow you to enjoy this pizza?

I read Dennett’s Elbow Room: The Varieties of Free Will Worth Wanting, which I found a pretty convincing takedown of the objections to free will.  Most of them are based on poor analogies:

  • To be unfree is normally to be under someone else’s control: you are a prisoner, or the subject of a dictator.  Obviously this is a good model if you are in fact a prisoner, but if not, not.  Whatever causes our actions, it isn’t another agent.
  • He talks about a type of wasp (Sphex spp.) which goes through an elaborate procedure to provision its young with prey insects.  It’s pretty easy to mess with its little mind– e.g. after dragging an insect into position at the entrance, the wasp goes in to inspect its nest.  If an experimenter moves the insect meanwhile, the wasp will move it back– but this resets its program: it has to inspect the nest all over again.  You can keep playing this game indefinitely.  Dennett suggests that anti-free-will arguments are often aimed at “sphexishness”– we are not the smart adaptable agents we think we are.  Yet it’s clear that we’re far above the wasp’s level.
  • Or: you’re controlled by an inner computer program that will spit out the same results no matter what you do.  But you know, not all programs consist of one invariable line.

    Programs can be highly sophisticated and highly responsive to the world.  It’s Searle’s old error.  Computers are dumb and deterministic; computer programs can be smart and unpredictable.
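The point is easy to demonstrate. Here’s a deterministic little program (the menu, prices, and vetoes are all made up): reading the rules tells you how it decides, not what it will decide, because that depends on what the world hands it.

```python
# A deterministic program whose behavior still depends on the world:
# same code, different inputs, different outcomes.

def choose_dinner(budget, vetoes):
    menu = {"pizza": 12, "gyros": 9, "salad": 7}
    options = [dish for dish, price in menu.items()
               if price <= budget and dish not in vetoes]
    # Pick the fanciest (priciest) affordable dish, if any.
    return max(options, key=lambda d: menu[d]) if options else "go hungry"

print(choose_dinner(15, set()))      # pizza
print(choose_dinner(15, {"pizza"}))  # gyros
print(choose_dinner(5, {"pizza"}))   # go hungry
```

Determinism in the small doesn’t make the behavior rigid or sphexish; responsiveness to circumstances is exactly what we mean by flexible behavior.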

The way I’d put it is: if you want a pizza tonight, you can have one.  Well, something external might stop you– you’re out of money, you’re in a space station, your friends hate pizza.  But when nothing external stops you from having that pizza, you can totally have it.  That’s the only variety of free will you need.

Dennett is a “compatibilist”, meaning that he thinks determinism and free will are compatible.  I’m not, but only because determinism is wrong. For nearly a century, we’ve known that the world is non-deterministic; deal with it.  Try a two-slit experiment and predict where you’ll detect a given photon– it can’t be done.  There was a hope that “hidden variables” would restore determinism, but they don’t work.  And “many worlds” don’t help either: the many worlds don’t let you predict where the photon will be detected.

This isn’t to say that I think free will is somehow saved by or depends on quantum randomness.  I don’t see why it would.  It just means that the problem people are worried about– that brain state X determines that mind state Y will happen– is not really there.  And it makes nonsense of hand-wringing about whether you could have done differently based on repeating that brain state.  Dennett argues that people are unnecessarily scrupulous about this question– all you need is the assurance that in similar brain states X’, X”, X”’, etc., some of them lead to pizza and some don’t.  But I think that since determinism is wrong, this way of looking at the problem is simply useless.

Now, for many people, the real point is that they think you’re unfree because something in your brain determines everything you do.  Something besides ‘you’, they mean.

In a sense, they’re completely right.  For instance: I wrote a novel!  Or did I?  Depends on what ‘I’ refers to.  It certainly wasn’t someone else; it came out of my personal brain.  But if ‘I’ refers to my conscious mind– well, I feel like I wrote it, but most of it was put together, I know not how, by my subconscious.  I like David Eagleman’s metaphor of consciousness as a lousy CEO who habitually takes credit for his underlings’ accomplishments.

When you start looking at the brain, you start finding disturbing things.  E.g. if you ask people to move their arms at a moment of their own choice, the impulses to move the arm start as much as a second before the moment they tell you they decided to move it.  No wonder brain scientists, like Eagleman, tend to want to throw out free will, and often consciousness with it.

The problem I have with this position is that people are fatally vague over what kind of causation they’re talking about, and what level they want to describe actions at.  They seem to want to treat the mind as a physics problem.  It’s not a physics problem.  You will never explain your decision to order a pizza in terms of electrons and quarks.  Nor atoms and molecules.  Nor neurons and neurotransmitters (which I assume is what they mean by “brain states”).

Reductionism is basic to science, but it does not consist of explaining everything in terms of quantum mechanics.  A few things can be explained that way, but most things– evolution, plate tectonics, language, Keynesian economics, the fall of Rome– cannot.  These need to be explained at a higher level of abstraction, even in a reductionist, non-dualist, pseudo-deterministic universe.

This may be easier to see with computer programs. Computers actually work with voltage differences and vast arrays of tiny semiconductors.  This is of approximately zero use in understanding a program like Eliza, or Deep Blue, or Facebook.  Actual programming is mostly done at the level of algorithms, with forays downward into code optimization and upward into abstract data structures.

What level do we describe human actions at?  We don’t know, and that’s the problem.  Again, I’ll guarantee you that it isn’t at the level of individual neurons– we have tens of billions of them; explaining the mind with neurons would be like explaining a computer program with semiconductors.

Of course, the subjective picture we sometimes have– that ‘I’ am a single thing, an agent– is wrong too.  We even recognize this in common speech, using metaphors of the mind as a congress of sub-personalities– Part of me wants pizza and part of me wants gyros; I’m torn about this proposal; his id is stronger than his superego; she’s dominated by greed.

With the computer, we can precisely identify and follow the algorithms.  With the brain, we only have vague guidance upward from neurology, and even vaguer (and highly suspect) notions downward from introspection.  We don’t know the right actors and agents that make up our minds; it’s quite premature to decide that we know or don’t know that we have “free will”.

For what it’s worth, my opinion is that our consciousness is pretty much what it seems like it is: an evolved brain function that is exposed to a wide range of brain inputs (internal and external) and uses them to make executive decisions.  This is something like Dennett’s view in Consciousness Explained.

Ironically, since computers are a favorite metaphor for philosophers, the brain is a pretty bad computer.  Brains neglected to evolve simple, generalizable, fast arithmetic and logic units like computers.  One purpose of consciousness might be to supply a replacement: language allows us to write algorithms to affect ourselves and those around us.

However, the real takeaway here should be to ask yourself, if you don’t believe in free will, what you think you’re missing.  All too often it turns out to be something we don’t really need: a dualistic Cartesian observer; an agent that acts with pure randomness; an agent whose behavior is determined by impossible replications of brain state; an agent that suffers no causation at all.

Incognito and Alchemy

A couple good books I’ve read lately:

Incognito: The Secret Lives of the Brain, by David Eagleman.  The first third of the book is the best; it’s a demolition of the idea that we run our brains.  That is, there’s this thing we call us, the conscious mind, and like a bad manager, it takes credit for its underlings’ hard work.   This is not a novelty in philosophy, but Eagleman is a neuroscientist, so his examples of how the conscious mind isn’t in control are based in neurology and psychology, and they’re fascinating.

One of his examples: you know how to change lanes, correct?  Can you explain it, as a short sequence of instructions for a smart (and English-speaking) robot?  Give it a try.

Most people say something like “Turn the wheel right; when you’ve moved over, straighten it out.”  If the robot tried that, it would steer off the road.  The thing is, after turning the wheel right, you have to turn the wheel an equal amount left in order to get back to your original direction.  Your brain knows this, but you probably don’t.  Any skilled behavior like this has been shuffled off to unconscious routines which manage all the details (and far more fluidly than the conscious mind could do them).
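The gap between the naive instructions and what your brain actually does shows up in a toy simulation (all numbers invented): steering changes your heading, and heading accumulates into sideways drift, so without an equal counter-steer you never straighten out.

```python
# Toy lane change: each step, the steering input changes the heading,
# and the heading accumulates into sideways drift. Numbers are made up.

def drive(steering_plan):
    heading = 0.0  # degrees; 0 = parallel to the road
    drift = 0.0    # sideways displacement, arbitrary units
    for steer in steering_plan:
        heading += steer
        drift += heading * 0.05  # crude integration
    return heading, drift

naive   = [5, 5, 0, 0, 0, 0, 0, 0]    # turn right, then straighten the wheel
skilled = [5, 5, 0, 0, -5, -5, 0, 0]  # turn right, then counter-steer left

h1, d1 = drive(naive)
h2, d2 = drive(skilled)
print(h1)  # 10.0 -- still angled; the car keeps drifting off the road
print(h2)  # 0.0  -- back to parallel, one lane over
```

The “robot” following the verbal recipe ends with a nonzero heading and drifts forever; the skilled plan, with its mirror-image correction, ends up parallel in the new lane.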

After this he reviews some theories of mind; he likes Minsky’s Society of Mind, but extends it to include a multitude of competing sub-units– what he calls a “team of rivals”.  Another of his metaphors is an electoral system.  This broadly makes sense, though I think Eagleman overestimates how revolutionary it is: it’s an updated version of the theory of mind put forth in medieval allegories.

Then he gets into issues of responsibility, including legal responsibility.  We used to blame the person for everything; now we think that some things, like mental illness, are ‘not the person’s fault’.  He suggests that we go all the way and just admit that nothing is anyone’s fault. This doesn’t mean that we don’t punish anyone; it means that we take a scientific view of what it takes to prevent bad behavior from recurring.  This last part of the book is the least convincing, as by now he’s gone far beyond our actual knowledge.

The Secrets of Alchemy, by Lawrence M. Principe.   This is a history of alchemy, from its origins in Hellenistic Egypt, through the Arab period, and then to medieval and Renaissance Europe.  I read a lot about alchemy while researching substances— the history of alchemy is basically the history of chemistry.  And it’s fun stuff, especially for the beautiful names– orpiment, realgar, the Green Lion, calx of lead, spirit of hartshorn…

Alchemy has a bad rap because, of course, the alchemists were mostly pursuing an impossibility: the transmutation of metals by chemical methods.  Principe answers the obvious question– why didn’t they notice it was impossible?– by analyzing their methods, their principles, and their idea of authority.  Briefly:

  • with (by modern standards) inconstant heating methods and no good tests for purity, it was hard to replicate results and thus easy to think that someone else had done better
  • the best physical theories, going back to the ancients, said that metals were compounds
  • people claimed to have succeeded, and the whole medieval mindset was to trust written sources attributed to known experts.

So the alchemists thought they had good evidence, and their critics (and there were many) had the same limitations, and couldn’t actually disprove the claims.  (There was a lot of fraud, to the point that alchemists in literature are almost always comic figures.)

The most interesting bits are where Principe digs out the retorts and Bunsen burners and attempts to follow old recipes.  His conclusion is that the old alchemists were often careful observers– though they were wont to disguise their knowledge as what sounded like insane mystical ramblings:

Take the ravenous grey wolf that on account of his name is subjected to bellicose Mars, but by birth is a child of old Saturn, and that lives in the valleys and mountains of the world and is possessed of great hunger.  Throw the king’s body before him that he may have his nourishment from it. And when he has devoured the king, then make a great fire and throw the wolf into it so that he burns up entirely; thus will the king be redeemed.

Those are instructions from Basil Valentine, written in 1602.  Principe explains that this is a real experiment: the king is gold; the wolf is melted stibnite, or antimony ore.  A 14-karat gold ring is 58% gold, 42% copper.  Throw it in melted stibnite and it dissolves. The copper turns into a sulfide, while the gold and antimony meld together and sink to the bottom, where they can be easily retrieved.  Roast this mixture and the antimony evaporates, leaving you with pure gold.  So this is an obfuscated but correct recipe for purifying gold.
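The karat figure is simple arithmetic, by the way: karats are parts of gold out of 24, so 24-karat is pure gold. A quick check (my own, not from the book):

```python
# Karats measure gold content in 24ths: 24k is pure gold.
def karat_purity(karats):
    return karats / 24

print(f"14k gold is {karat_purity(14):.1%} gold")  # about 58.3%
```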

Why did the alchemists write this way?  Well, they didn’t always; there are examples of very straightforward books.   But it’s clear that the writers were masters of PR.  You didn’t want to give all your secrets away; and if your early steps could be puzzled out, it added authority to the more fanciful steps describing the creation of the Philosopher’s Stone.  Principe describes and reproduces a few quite striking experiments– not transmutation, of course, but chemical tricks that could wow a rich patron.

In the 1900s, a lot of this mystical-sounding obfuscation was reinterpreted as actual mysticism– that is, it was taken as a spiritual rather than a chemical process.  This was a wrong turn; much better to think of alchemy as early chemistry, with a commendable interest in hands-on experimentation.

Principe obviously loves this stuff, and probably makes a few too many excuses for the alchemists.  It’s true that it’s not edifying to simply make fun of early thinkers for bad theories or poor methods.  One Arab alchemist, for instance, had the excellent idea of quantifying the notion of how much of the four humors were active in a substance– there were 28 degrees of hot, cold, wet, and dry.  So far so good, but how did he assign the degrees– some kind of crude measurement?  No, he took the Arabic name of the substance, letter by letter, and applied numerological rules to derive the degree.  Principe carefully explains that this is not as silly as it sounds– it was in accordance with the best Islamic thought, in which Arabic was God’s language, and could be expected to match aspects of God’s creation.  Well, that is an interesting glimpse into an earlier worldview, and you might want to incorporate things like that into your conworld.  But, well, that line of thought was ultimately sterile, and alchemy was not really medieval thought at its best.

Are memories stored just once?

PZ Myers has a posting where he makes a short argument against transhumanist uploading.  This was relevant to my interests, because I think uploading is bonkers.

He has two arguments, really.  Unfortunately one (using entropy) is just wrong: entropy doesn’t prevent complex systems; it only requires that more entropy be generated to offset them. So long as you convert only a tiny fraction of the universe into computronium, entropy won’t stand in your way.

His other argument was better, but sketchy: uploaders prefer “what is good for the individual over what is good for the population”.  As he was arguing with Eliezer Yudkowsky among others, this is probably a misfire– judging from his Harry Potter fanfic, Yudkowsky does consider it an imperative that technology benefits everyone.

Still, there’s the germ of an actual good argument in there: that the uploaders think way too much about personally not dying, and way not enough about how to make what life we have worth living.  Morally, it’s hard to argue that our biggest problem is that people don’t live 1000 or 1,000,000 years.  If humans keep on with the sort of behavior and morality and economics they have right now, such lifetimes would be hellish.  Even if you have a wildly optimistic view of how well we’re doing, prolonging lifetimes even to a couple hundred years would be horrible for 90% of the population, and that’s assuming we can even keep our civilization going.  (If you want to live forever, climate change is not your grandchildren’s problem, it’s yours.)  So even if you want immortality, you’d better prioritize, well, almost everything else.

But that’s a discussion for another day.  I was caught up short by this comment, by one Gregory in Seattle:

There is a growing belief among memory researchers that the brain relies on “archetypes.” You actually have only one or two physical memories of the taste of bacon: all of the apparent memories of bacon link back to them. REM sleep is when the brain recompiles, tossing out actual memories from short-term storage and integrating the day’s experiences into long-term storage with heavy object reuse (pardon the computerese.)

According to this model, children learn faster because they have fewer archetypes: they are building a “library” and links into them are pretty straightforward. As we get older, though, the ability to store and link novel information becomes more difficult and memory begins to ossify. Someone who pursues life-long learning can stave this off, but not completely. To use another computer example, the problem does not appear to be one of storage so much as the storage becoming fragmented. The ability to link begins to suffer, and memories begin to get lost in the shuffle.

Without a major redesign of how the brain stores memories, very long lifespans will probably bring us to a point where novel experiences cannot be integrated at all. We see this sort of slow down in people who are 90 and 100; I cannot imagine what it would be like for someone who is 200, much less 500 or 1000.

I’d never heard about this theory, but then I don’t know anything really about memory research.  But it’s a fascinating idea, and one that makes a lot of sense as a way for a creature of limited brain to organize the reams of sensory data that swamp it daily.

Though it’s not so much an argument against long lives as an argument that if we want to have them, we’ll have to change some basic facts about ourselves.  That’s why, in the Incatena, I have people doing a kind of brain reboot every century or two: throw out a bunch of memories, loosen the connections, re-adolescentize the brain.

To put it another way, your basic personality, attitudes, ideology, politics, etc. are generally pretty well firmed up by the time you’re 30.  You can adapt to new things after that, but with increasing difficulty– by the time you’re 80, you’re a curmudgeon who hates the kids’ music and clothing and votes for reactionaries.  That’s acceptable when lifetimes are 90 years, but not if they’re 900.  If you refuse to die, then you have to do something to regain your adaptability, for your own benefit and for that of society.

The Descent of the Child

I picked up Elaine Morgan’s The Descent of the Child and devoured it in an evening.  I liked it a lot, and Morgan is very readable, and yet I have to throw on a steaming pile of caveats.

That’s because she’s a promoter of the Aquatic Ape Hypothesis– the idea that humans went through an aquatic or near-aquatic stage that accounts for their many differences from the other apes, such as hairlessness, a descended larynx, their thick layer of subcutaneous fat, and their early birth.  It’s a fascinating theory which turns out to be highly problematic.  Hairlessness, for instance, doesn’t correlate nicely with aquatic habitat; think of otters or polar bears (who are excellent swimmers).  Humans don’t have characteristics sea animals generally do have, such as very small ears.  Worse yet, a lot of the supposed facts of AAH supporters turn out to be just wrong– e.g. that non-aquatic mammals can’t hold their breath, that human infants are unusual in having a swimming reflex, or that our layer of subcutaneous fat is attached to the skin rather than the underlying tissue.

There’s only one chapter in the new book about the AAH, but when someone has a tendency to misquote the scientific literature, you have to mistrust what they say even on other topics.

The Descent of the Child is about babies and children.  Morgan goes over the biology of reproduction, gestation, birth, and childrearing, with a focus on where we resemble and where we differ from the other primates.  It’s a fascinating story, full of interesting facts.  For instance, we live at a much slower pace than would be expected for a mammal our size.  E.g. compared to chimpanzees, the age of puberty and our life expectancy are doubled.  Gestation proceeds at a leisurely pace, too, fitting nicely into a developmental schedule that should see the baby in the womb for 18 months.  Halfway through, the baby is evicted, resulting in an unusually inert and helpless newborn.

Her larger point is that in seeking to explain human features, scientists too often concentrate on adults only.  But the whole life-cycle is subject to evolutionary pressure, and things like the human baby’s helplessness are serious puzzles… isn’t it dangerous to have offspring that vulnerable?

At the same time, one of the hallmarks of humans, compared with the other apes, is neoteny.  Even as adults, we are much more like ape children than we are like ape adults– in appearance, in bipedality, in general playfulness.

This touches on linguistics; Morgan suggests that it was more likely to be children than adults who originated the first language, much as the best ape language learner was the bonobo Kanzi, who picked up a keyboard-based language by watching researchers attempting to teach it to his much denser mother.

Anyway, fun book, just double-check any facts she gives before recycling them in conworlds or at cocktail parties.

Chris Wayan’s worlds

Alert reader Alon Levy pointed me to one of Chris Wayan’s revamped Earths.  They’re really a lot of fun, and essential reading for a conworlder.


Each world starts with some simple concept, and then its geology and climate are worked out in detail. For instance:

  • Seapole (pictured above), with new axes chosen to put the poles in open ocean
  • Shiveria, with new axes that put both poles on land (producing a permanent ice age)
  • Dubia, Earth after a thousand years of global warming
  • Inversia, with land and sea reversed
  • Jaredia, another axis reboot, designed to create as many east-west continents as possible (as Jared Diamond recommends for advancing civilization)
  • Extremely large or small planets

It looks like he actually constructs these things and paints them, rather than just modelling them on the computer.

(I should perhaps note, the rest of Wayan’s site is devoted to retelling dreams, with pictures, and it’s… eccentric.  The worldbuilding is fascinating though.)


Charlie Stross recommended this article on myths of female sexuality (by Susan Krauss Whitbourne, reporting on a study by Terri Conley).  It’s quite interesting, and I’d really like to believe its conclusions, but as mythbusting it’s a bust.  Let’s go over the list.

1. “Women value men with powerful status, and men value women who are both youthful and attractive.”  Against this, Conley cites one speed dating scenario. One experiment.  Probably fewer than 30 participants; certainly fewer than a hundred.  Contrary evidence: pretty much all of human behavior.  Or if you want something more quantified, check out these awesome stats from OKCupid, based on a sample of 200,000 people.

2. “Women want and actually have fewer sexual partners. Conley and team reviewing relevant studies found that yes, some men do want a large number of sexual partners.”  That is, the first part of the ‘myth’ (about wanting) wasn’t busted, but confirmed.  The twist is that at least one study found that men exaggerate how many conquests they’ve had.  Surely this shouldn’t be a big surprise.  Mathematically, if the average man reports n het encounters, the average woman should report n as well, since every such encounter involves one of each.  But even this finding reinforces that men and women don’t think the same.

3. “Men think about sex more often than women do.” The busting consists of confirming the finding, but adding that men think about food and sleep more, too.

4. Women orgasm less.  The busting: “When in committed relationships, women and men experience orgasm with equal frequency.”  In other words, the ‘myth’ is true!  If you have a generalization that applies to a whole group, it’s not disproved by showing that the generalization doesn’t hold for a fraction of the group.

5. “Women don’t like casual sex as much as men do.”  The classic demonstration was a rather silly experiment where college students were approached with offers of sex— 70% of men were interested, 0% of women.  I call this silly because it’s a completely unnatural setup— this isn’t how people find partners!  Conley did a variation which found that the women were much more interested “if they believe that they can avoid being stigmatized”.  Again, that’s a pretty important nuance!

6. “Women are choosier than men.”  Conley apparently found that whichever sex initiates contact, the other will be choosier— that is, if men approach women, the women seem pickier; if women approach men, the men seem pickier.  This one is hard to evaluate without knowing the exact methodology; it seems like a no-brainer that any offer has a chance of being rejected, so I don’t see how this is a test of choosiness at all.
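The bookkeeping behind myth 2 is easy to sanity-check with made-up numbers (a hypothetical illustration, not data from any of these studies): in a closed heterosexual population, every encounter pairs one man with one woman, so the totals reported by each side must agree.

```python
# Each het encounter is counted once by a man and once by a woman,
# so honest reports must produce matching totals.
men = [0, 1, 1, 3, 5]     # partners reported by each man
women = [1, 2, 2, 2, 3]   # partners reported by each woman

print(sum(men), sum(women))  # the ledger balances: both total 10
# With equal group sizes, the means must match too.
print(sum(men) / len(men), sum(women) / len(women))
```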

Whitbourne frames the story in the context of people showing surprise that women are interested in sex and male bodies.  Surely that hasn’t been hot news since about 1925?  (She mentions that e.g. Hollywood loves to show female but not male bodies, but I don’t think this is due to filmmakers calculating that women aren’t interested; it’s more that they think men will be turned off.)

The takeaway here, I think, is to be careful about evidence— especially for findings that confirm what you already believe.  When you read “Studies show…”, be at least as wary as when you read “with this weird old tip”.  Look at how the study was done, how many people it involved, and whether the methodology really tests the hypothesis.

(Also, yeah, I know, it’s Psychology Today.  That’s why I mention that Stross plugged the link— he’s a smart guy, so it seemed worth checking out.)