

I’ve got another lukewarm recommendation for you!  I just finished Steven Pinker’s How the Mind Works. Pinker, like Daniel Dennett, doesn’t lack for ambition. He really wants to tell you how to design a functioning mind, or to be precise, how evolution has put ours together.

[Image: a hand apparently holding a brain-patterned Rubik’s cube]

His focus throughout is on evolution, so a basic constraint is that the components of the mind should have increased reproductive success. Not absolutely– we obviously use our brains in many ways that couldn’t be adaptations.  But it’s a good constraint to have, as it keeps him from accepting simplistic ideas like “intelligence is good” or that evolution is aiming at creating humanoids. (There’s a major caveat here, though: adaptation is only one process in evolution, and you have to make a case that it produced any particular feature. More on this later.)

Does he succeed?  In parts, brilliantly.  The chapter on vision is excellent. He explains exactly why vision is such a hard problem, and how the eyes and brain could put together a view of the world. Cognitive research is frustratingly indirect– we can’t really see how the software runs, so to speak. But we can put a whole bunch of clues together: how the eye works, what goes wrong when the brain is damaged, what constraints are suggested by optical illusions and glitches, how people respond to cleverly designed experiments.

As just one example, it seems that people can rotate visual images, as if they have a cheap, slow 3-D modeling program in their heads– and that this rotation takes time; certain tasks (like identifying whether two pictures depict the same object) take longer depending on the amount of rotation required. But even stranger, it’s found that people don’t just store one view of an object.  They can store several views, and solve rotation problems by rotating the nearest view. This is fascinating precisely because it’s not a solution that most programmers would think of. It makes sense for brains, which basically allow huge data stores but limited computational power.
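
The multiple-views idea is easy to sketch. Here’s a toy Python model (my illustration, not Pinker’s; the angles and functions are invented) in which recognition cost is the rotation needed to reach the nearest stored view:

```python
# Toy model of "rotate the nearest stored view" (my illustration,
# not Pinker's): recognition cost = rotation to the closest view.

def angular_distance(a, b):
    """Smallest rotation, in degrees, between two orientations."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def rotation_cost(stored_views, presented_angle):
    """Rotation needed to match the nearest stored view."""
    return min(angular_distance(v, presented_angle) for v in stored_views)

# With one stored view, the cost can be as bad as 180 degrees.
print(rotation_cost([0], 150))               # 150
# With four stored views, the worst case drops to 45 degrees.
print(rotation_cost([0, 90, 180, 270], 150)) # 30
```

Storing extra views is cheap in memory and slashes the worst-case rotation– exactly the tradeoff you’d expect from a brain with huge storage but limited computation.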

He points out that vision is not only a difficult problem, it’s impossible.  If you see two lines at a diagonal in front of you, there is no way to determine for sure whether they’re really part of a triangle, or parallel lines moving away from you, or a random juxtaposition of two unrelated lines, and so on. The brain solves the impossible problem by making assumptions about the world– e.g. observed patches that move together belong to the same object; surfaces tend to have a uniform color; sudden transitions are probably object boundaries, and so on. It works pretty well out in nature, which is not trying to mislead us, but it’s easy to fool.  (E.g., it sure looks like there’s a hand holding a brain-patterned Rubik’s cube up there, doesn’t it? Surprise, it’s a flat computer screen!)

I also like his chapters on folk logic and emotions, largely because he defends both.  It’s easy to show that people aren’t good at book logic, but that’s in part because logicians insist on arguing in a way that’s far removed from primate life. A classic example involves the following query:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. What is the probability that Linda is a bank teller? What is the probability that Linda is a bank teller and is active in the feminist movement?

People often estimate that it’s more likely that Linda is a feminist bank teller, than that she’s simply a bank teller. This is wrong, by traditional logic: A ∧ B cannot be more probable than B. But all that really tells us is that our minds resist the nature of Boolean logic, which considers only the form of arguments, not their content. We love content. People’s judgments make narrative sense.  From the description of Linda, it’s clear that she’s a feminist, so a description that incorporates her feminism is more satisfying. In normal life it’s anomalous to include a bunch of information that’s irrelevant to your question.
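
The conjunction rule itself is trivial to verify mechanically. Here’s a toy Python simulation (mine, not Pinker’s; the trait probabilities are invented for illustration) showing that the conjunction can never out-count the single event:

```python
import random

# Toy simulation of the conjunction rule (illustrative only; the
# probabilities below are made up, not measured).
random.seed(0)

P_TELLER = 0.05    # P(Linda is a bank teller)
P_FEMINIST = 0.90  # P(Linda is a feminist) -- high, to match the vignette

trials = 100_000
teller = 0
teller_and_feminist = 0
for _ in range(trials):
    is_teller = random.random() < P_TELLER
    is_feminist = random.random() < P_FEMINIST
    if is_teller:
        teller += 1
        if is_feminist:
            teller_and_feminist += 1

# Every (teller AND feminist) case is also a (teller) case, so the
# conjunction can never be the more frequent event.
assert teller_and_feminist <= teller
print(teller_and_feminist / trials, "<=", teller / trials)
```

However you set the probabilities, the assert never fires: A ∧ B is counted only when B is.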

As for emotions, it’s widely assumed that they’re atavistic, unnecessary, and positively dangerous– an AI is supposed to be emotionless, like Data. Pinker makes a good design case for emotions. In  brief, a cognitive being needs something to make it care about doing A rather than B… or doing anything at all. All the better if that something helps it avoid dangers, reproduce itself, form alliances, and detect cheaters.

So, why do I have mixed feelings about the book? A minor problem is breeziness– for instance, Pinker addresses George Lakoff’s theory of categories in one paragraph, and pretty spectacularly misses Lakoff’s argument. He talks about categories being “idealized”, as if Lakoff had overlooked this little point rather than discussing it extensively. And he argues that the concept of “mother” is well defined in biology, which completely misses the distinction between ordinary language and technical terms. He similarly takes sides in the debate between Napoleon Chagnon and Marvin Harris, with a mere assertion that Chagnon is right. He rarely pauses to acknowledge that any of his examples are controversial or could be interpreted a different way.

More seriously, he doesn’t give a principled way to tell evolutionary from cultural explanations. This becomes a big problem in his chapter on sex, where he goes all in on evolutionary psychology. EP is fascinating stuff, no doubt about it, and I think a lot of it is true about animals in general. But which parts apply to humans is a much more contentious question.  (For a primer on problems with EP, see Amanda Schaffer’s article here, or P.Z. Myers’s takedown, or his direct attack on Pinker, or Kate Clancy’s methodological critique.) Our nearest relatives are all over the map, sexually: gorillas have harems; chimpanzees have male dominance hierarchies with huge inter-chimp competition for mates (and multiple strategies); bonobos are notorious for female dominance and casual sex.  With such a menu of models, it’s all too easy to project sexist fantasies into the past. Plus, we know far less about our ancestors than we’d like, they lived in incredibly diverse environments, and evolution didn’t stop in 10,000 BC.

Plus there’s immense variety in human societies, which Pinker tends to paper over.  He often mentions a few favorite low-tech societies, but all too often he generalizes from 20C Americans.  E.g. he mentions perky breasts as a signal of female beauty… um, has he ever looked at an Asian girl, or a flapper, or a medieval painting?  Relatedly, Pinker convinces himself that men should be most attracted to a girl who is post-puberty but has never been pregnant, because she’s able to bear more children than an older woman. Kate Clancy’s takedown of hebephilia is relevant here: she points out that girls who bear children too early are likely to have fewer children overall, and that male chimps actually prefer females who have already borne a child.

Finally, the last chapter, on art and religion, is just terrible. He actually begins the discussion of art with an attack on the supposed elitism of modern art. It’s like he’s totally forgotten that he’s supposed to be writing about evolution; what the hell does the supposed gulf between “Andrew Lloyd Webber [and] Mozart”, two Western artists separated by an evolutionary eyeblink, have to do with art in general? Couldn’t he at least have told us some anecdotes about Yanomamo art?

As for religion, he describes it as a “desperate measure… inventing ghosts and bribing them for good weather”. Srsly? Ironically, earlier in the book, he emphasizes several times that the facts about the ancestral environment are not normative, that we can criticize them ethically.  Then when it comes to religion he forgets that there’s such a thing as ethics; he just wants to make fun of the savages for their “inventions”. (I could go on all day about this blindness, but as just one point, the vast majority of believers, in any religion, invent nothing.  They accept what they’re told about the world, a strategy that is not exactly foreign to Pinker– why else does he supply a bibliography?)

On the whole, it’s probably a warning to be careful when you’re attempting a synthesis that wanders far outside your field. It might be best to skip the last two chapters, or just be resigned to a lot of eye-rolling.

I just finished Paleofantasy, by the biologist Marlene Zuk; it’s largely a response to notions that we made a wrong turn with agriculture and cities, and should head back to the savanna, or perhaps the trees.

Which was a mistake?


The main objection is that attempts to come up with a “paleo” diet, or exercise regimen, or childrearing method, or sex roles, are generally bullshit: highly speculative at best, completely made up at worst. More specifically:

  • We aren’t cavemen, because we haven’t stopped evolving. Genetic changes like widespread lactose tolerance, or the adaptation of Tibetans to high altitudes, have occurred in historical times. Adaptation to disease happens even more quickly. It’s just not true that the 10,000 years since the invention of agriculture is too soon to adapt to our changed diet.
  • The idea that early hominins were perfectly adapted to their environment, with everyone who came after being disastrously out of place, is a misunderstanding of evolution. Evolution is not goal-directed, and animals are never perfect. They’re always a genetic mish-mash, just good enough to have survived, always subject to tradeoffs and sudden environmental changes.
  • We just don’t know exactly how early hominins ate and lived. The fossil record is scanty; our ape relatives live in very different ways; modern hunter-gatherers are themselves varied, and not necessarily representative of their ancestors.
  • One thing we do know: they lived in a wide variety of habitats and climates, from African savannas to Mediterranean shores to Ice Age caves. They didn’t all have the same diets, or tools, or cultures; there was no single paleo lifestyle.
  • Some of the specific ideas of paleo enthusiasts are almost certainly wrong. E.g. there’s good evidence that Neanderthals were grinding grain 30,000 years ago. A high-meat diet may only have become possible with the invention of ranged weapons, at about the same time. Some paleo fans claim that early hominins rarely ran; in fact one of the things humans are extremely good at is long-distance endurance running… we can run most animals down, including deer and horses.

(In case you didn’t get the memo, humans and their non-ape ancestors are now grouped together as hominins; the older term hominid now covers the chimps, bonobos, gorillas, and orangutans as well.)

If you like a knock-out blow, Zuk rarely provides one– the usual problem isn’t that paleo fantasies are contradicted by science, but that they’re poorly supported.  However, Zuk reviews the wide range of evidence that’s becoming available, from DNA analysis to ongoing evolutionary studies to finding food traces in Neanderthal teeth.

Another recent read, Chip Walter’s Last Ape Standing, is even more of a buzzkill. He presents life on the savanna as difficult: scant resources and plenty of competition. Some human features such as neoteny may be an adaptation to bad times– our infants are born prematurely, with a rapidly expanding brain, and thus can more quickly adapt to new or changed conditions.  There’s also evidence that our species passed through a genetic bottleneck– compared with other species, we’re remarkably uniform, which could have happened if our total numbers dropped to 10,000 or so. The ancestral environment might not have been all that idyllic.

None of this, of course, means that you should stay on the couch, or eat loads of donuts and fries. We definitely have an unhealthy lifestyle; but the solution is to get more active, not to get more Australopithecine.

 

Daniel Dennett blisteringly reviewed Sam Harris’s Free Will, and that led to an interesting discussion at Mefi.

Does your theory of mind allow you to enjoy this pizza?


I read Dennett’s Elbow Room: The Varieties of Free Will Worth Wanting, which I found a pretty convincing takedown of the objections to free will.  Most of them are based on poor analogies:

  • To be unfree is normally to be under someone else’s control: you are a prisoner, or the subject of a dictator.  Obviously this is a good model if you are in fact a prisoner, but if not, not.  Whatever causes our actions, it isn’t another agent.
  • He talks about a type of wasp (Sphex spp.) which goes through an elaborate procedure to get prey insects to feed its young.  It’s pretty easy to mess with its little mind– e.g. after moving the insect into position, it inspects its nest.  If an experimenter moves the insect, the wasp will move it back– but this resets its program; it has to inspect its nest again.  You can keep playing this game indefinitely.  Dennett suggests that anti-free-will arguments are often aimed at “sphexishness”– we are not the smart adaptable agents we think we are.  Yet it’s clear that we’re far above the wasp’s level.
  • Or: you’re controlled by an inner computer program that will spit out the same results no matter what you do.  But you know, not all programs consist of one invariable line
    10 PRINT "HELLO WORLD"

    Programs can be highly sophisticated and highly responsive to the world.  It’s Searle’s old error.  Computers are dumb and deterministic; computer programs can be smart and unpredictable.
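
A toy Python illustration of that last point (mine, not Dennett’s): the logistic map is a one-line deterministic rule, yet its long-run output is unpredictable in practice, because tiny differences in input blow up exponentially.

```python
# Deterministic does not mean predictable-in-practice: the logistic
# map applies one fixed rule, yet nearby starting points diverge so
# fast that long-run behavior is effectively unpredictable.

def logistic_trajectory(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) for `steps` steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.4000000, 50)
b = logistic_trajectory(0.4000001, 50)  # differs in the 7th decimal
print(a, b)  # after 50 steps the two runs bear no resemblance
```

Run it and the two trajectories end up in completely different places– same program, same rule, no randomness anywhere.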

The way I’d put it is: if you want a pizza tonight, you can have one.  Well, something external might stop you– you’re out of money, you’re in a space station, your friends hate pizza.  But when nothing external stops you from having that pizza, you can totally have it.  That’s the only variety of free will you need.

Dennett is a “compatibilist”, meaning that he thinks determinism and free will are compatible.  I’m not, but only because determinism is wrong. For nearly a century, we’ve known that the world is non-deterministic; deal with it.  Try a two-slit experiment and predict where you’ll detect a given photon– it can’t be done.  There was a hope that “hidden variables” would restore determinism, but they don’t work either.  And “many worlds” doesn’t help– it doesn’t let you predict where the photon will be detected.

This isn’t to say that I think free will is somehow saved by or depends on quantum randomness.  I don’t see why it would.  It just means that the problem people are worried about– that brain state X determines that mind state Y will happen– is not really there.  And it makes nonsense of hand-wringing about whether you could have done differently based on repeating that brain state.  Dennett argues that people are unnecessarily scrupulous about this question– all you need is the assurance that in similar brain states X’, X”, X”’, etc., some of them lead to pizza and some don’t.  But I think that since determinism is wrong, this way of looking at the problem is simply useless.

Now, for many people, the real point is that they think you’re unfree because something in your brain determines everything you do.  Something besides ‘you’, they mean.

In a sense, they’re completely right.  For instance: I wrote a novel!  Or did I?  Depends on what ‘I’ refers to.  It certainly wasn’t someone else; it came out of my personal brain.  But if ‘I’ refers to my conscious mind– well, I feel like I wrote it, but most of it was put together, I know not how, by my subconscious.  I like David Eagleman’s metaphor of consciousness as a lousy CEO who habitually takes credit for his underlings’ accomplishments.

When you start looking at the brain, you start finding disturbing things.  E.g. if you ask people to move their arms at a moment of their own choice, the impulses to move the arm start as much as a second before the moment they tell you they decided to move it.  No wonder brain scientists, like Eagleman, tend to want to throw out free will, and often consciousness with it.

The problem I have with this position is that people are fatally vague over what kind of causation they’re talking about, and what level they want to describe actions at.  They seem to want to treat the mind as a physics problem.  It’s not a physics problem.  You will never explain your decision to order a pizza in terms of electrons and quarks.  Nor atoms and molecules.  Nor neurons and neurotransmitters (which I assume is what they mean by “brain states”).

Reductionism is basic to science, but it does not consist of explaining everything in terms of quantum mechanics.  A few things can be explained that way, but most things– evolution, plate tectonics, language, Keynesian economics, the fall of Rome– cannot.  These need to be explained at a higher level of abstraction, even in a reductionist, non-dualist, pseudo-deterministic universe.

This may be easier to see with computer programs. Computers actually work with voltage differences and vast arrays of tiny semiconductors.  This is of approximately zero use in understanding a program like Eliza, or Deep Blue, or Facebook.  Actual programming is mostly done at the level of algorithms, with forays downward into code optimization and upward into abstract data structures.

What level do we describe human actions at?  We don’t know, and that’s the problem.  Again, I’ll guarantee you that it isn’t at the level of individual neurons– we have tens of billions of them; explaining the mind with neurons would be like explaining a computer program with semiconductors.

Of course, the subjective picture we sometimes have– that ‘I’ am a single thing, an agent– is wrong too.  We even recognize this in common speech, using metaphors of the mind as a congress of sub-personalities– Part of me wants pizza and part of me wants gyros; I’m torn about this proposal; his id is stronger than his superego; she’s dominated by greed.

With the computer, we can precisely identify and follow the algorithms.  With the brain, we only have vague guidance upward from neurology, and even vaguer (and highly suspect) notions downward from introspection.  We don’t know the right actors and agents that make up our minds; it’s quite premature to decide that we know or don’t know that we have “free will”.

For what it’s worth, my opinion is that our consciousness is pretty much what it seems like it is: an evolved brain function that is exposed to a wide range of brain inputs (internal and external) and uses them to make executive decisions.  This is something like Dennett’s view in Consciousness Explained.

Ironically, given that computers are a favorite metaphor for philosophers, the brain is a pretty bad computer.  Brains neglected to evolve the simple, generalizable, fast arithmetic and logic units that computers have.  One purpose of consciousness might be to supply a replacement: language allows us to write algorithms to affect ourselves and those around us.

However, the real takeaway here should be to ask yourself, if you don’t believe in free will, what you think you’re missing.  All too often it turns out to be something we don’t really need: a dualistic Cartesian observer; an agent that acts with pure randomness; an agent whose behavior is determined by impossible replications of brain state; an agent that suffers no causation at all.

A couple good books I’ve read lately:

Incognito: The Secret Lives of the Brain, by David Eagleman.  The first third of the book is the best; it’s a demolition of the idea that we run our brains.  That is, there’s this thing we call us, the conscious mind, and like a bad manager, it takes credit for its underlings’ hard work.   This is not a novelty in philosophy, but Eagleman is a neuroscientist, so his examples of how the conscious mind isn’t in control are based in neurology and psychology, and they’re fascinating.

One of his examples: you know how to change lanes, correct?  Can you explain it, as a short sequence of instructions for a smart (and English-speaking) robot?  Give it a try.

Most people say something like “Turn the wheel right; when you’ve moved over, straighten it out.”  If the robot tried that, it would steer off the road.  The thing is, after turning the wheel right, you have to turn the wheel an equal amount left in order to get back to your original direction.  Your brain knows this, but you probably don’t.  Any skilled behavior like this has been shuffled off to unconscious routines which manage all the details (and far more fluidly than the conscious mind could do them).
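
You can see why the naive plan fails with a toy kinematic sketch (my illustration, not Eagleman’s; the numbers and small-angle model are invented): heading accumulates while the wheel is turned, so a right turn must be mirrored by an equal left turn before you’re driving straight again.

```python
# Toy lane-change kinematics (illustrative small-angle model):
# wheel angle changes heading; heading changes lateral position.

def drive(wheel_sequence, dt=0.1, speed=1.0):
    """Integrate heading and lateral offset over a wheel-angle plan."""
    heading = 0.0   # radians; 0 = straight down the road
    lateral = 0.0   # sideways displacement from the original lane
    for wheel in wheel_sequence:
        heading += wheel * dt            # turning the wheel rotates you
        lateral += speed * heading * dt  # your heading carries you sideways
    return heading, lateral

# Naive plan: turn right, then just recenter the wheel.
naive = [0.5] * 10 + [0.0] * 10
# Correct plan: turn right, then an equal amount left.
correct = [0.5] * 10 + [-0.5] * 10

h1, y1 = drive(naive)
h2, y2 = drive(correct)
print(h1, y1)  # heading stays nonzero: you keep veering off the road
print(h2, y2)  # heading back to ~0, position shifted: a lane change
```

The naive driver ends up still angled off the road; only the mirrored left turn restores the heading while leaving the sideways shift– which is what your unconscious routines do for you.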

After this he reviews some theories of mind; he likes Minsky’s Society of Mind, but extends it to include a multitude of competing sub-units– what he calls a “team of rivals”.  Another of his metaphors is an electoral system.  This broadly makes sense, though I think Eagleman overestimates how revolutionary it is: it’s an updated version of the theory of mind put forth in medieval allegories.

Then he gets into issues of responsibility, including legal responsibility.  We used to blame the person for everything; now we think that some things, like mental illness, are ‘not the person’s fault’.  He suggests that we go all the way and just admit that nothing is anyone’s fault. This doesn’t mean that we don’t punish anyone; it means that we take a scientific view of what it takes to prevent bad behavior from recurring.  This last part of the book is the least convincing, as by now he’s gone far beyond our actual knowledge.

The Secrets of Alchemy, by Lawrence M. Principe.   This is a history of alchemy, from its origins in Hellenistic Egypt, through the Arab period, and then to medieval and Renaissance Europe.  I read a lot about alchemy while researching substances– the history of alchemy is basically the history of chemistry.  And it’s fun stuff, especially for the beautiful names– orpiment, realgar, the Green Lion, calx of lead, spirit of hartshorn…

Alchemy has a bad rap because, of course, the alchemists were mostly pursuing an impossibility: the transmutation of metals by chemical methods.  Principe answers the obvious question– why didn’t they notice it was impossible?– by analyzing their methods, their principles, and their idea of authority.  Briefly:

  • with (by modern standards) inconstant heating methods and no good tests for purity, it was hard to replicate results and thus easy to think that someone else had done better
  • the best physical theories, going back to the ancients, said that metals were compounds
  • people claimed to have succeeded, and the whole medieval mindset was to trust written sources attributed to known experts.

So the alchemists thought they had good evidence, and their critics (and there were many) had the same limitations, and couldn’t actually disprove the claims.  (There was a lot of fraud, to the point that alchemists in literature are almost always comic figures.)

The most interesting bits are where Principe digs out the retorts and Bunsen burners and attempts to follow old recipes.  His conclusion is that the old alchemists were often careful observers– though they were wont to disguise their knowledge as what sounded like insane mystical ramblings:

Take the ravenous grey wolf that on account of his name is subjected to bellicose Mars, but by birth is a child of old Saturn, and that lives in the valleys and mountains of the world and is possessed of great hunger.  Throw the king’s body before him that he may have his nourishment from it. And when he has devoured the king, then make a great fire and throw the wolf into it so that he burns up entirely; thus will the king be redeemed.

That’s some instructions by Basil Valentine, from 1602.  Principe explains that this is a real experiment: the king is gold; the wolf is melted stibnite, or antimony ore.  A 14-karat gold ring is 58% gold, 42% copper.  Throw it in melted stibnite and it dissolves. The copper turns into a sulfide, while the gold and antimony meld together and sink to the bottom, where they can be easily retrieved.  Roast this mixture and the antimony evaporates, leaving you with pure gold.  So this is an obfuscated but correct recipe for purifying gold.
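
The arithmetic here is just the standard karat scale– purity measured in 24ths– which a quick sketch confirms (the function name is mine):

```python
# Karat purity: k karats = k parts gold out of 24.

def gold_fraction(karats: int) -> float:
    """Fraction of the alloy that is gold, on the 24-part karat scale."""
    return karats / 24

print(round(gold_fraction(14) * 100))  # 58  -- i.e. 58% gold, 42% copper
print(round(gold_fraction(24) * 100))  # 100 -- pure gold
```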

Why did the alchemists write this way?  Well, they didn’t always; there are examples of very straightforward books.   But it’s clear that the writers were masters of PR.  You didn’t want to give all your secrets away; and if your early steps could be puzzled out, it added authority to the more fanciful steps describing the creation of the Philosopher’s Stone.  Principe describes and reproduces a few quite striking experiments– not transmutation, of course, but chemical tricks that could wow a rich patron.

In the 1900s, a lot of this mystical-sounding obfuscation was reinterpreted as actual mysticism– that is, it was taken as a spiritual rather than a chemical process.  This was a wrong turn; much better to think of alchemy as early chemistry, with a commendable interest in hands-on experimentation.

Principe obviously loves this stuff, and probably makes a few too many excuses for the alchemists.  It’s true that it’s not edifying to simply make fun of early thinkers for bad theories or poor methods.  One Arab alchemist, for instance, had the excellent idea of quantifying the notion of how much of the four humors were active in a substance– there were 28 degrees of hot, cold, wet, and dry.  So far so good, but how did he assign the degrees– some kind of crude measurement?  No, he took the Arabic name of the substance, letter by letter, and applied numerological rules to derive the degree.  Principe carefully explains that this is not as silly as it sounds– it was in accordance with the best Islamic thought, in which Arabic was God’s language, and could be expected to match aspects of God’s creation.  Well, that is an interesting glimpse into an earlier worldview, and you might want to incorporate things like that into your conworld.  But, well, that line of thought was ultimately sterile, and alchemy was not really medieval thought at its best.

PZ Myers has a posting where he makes a short argument against transhumanist uploading.  This was relevant to my interests, because I think uploading is bonkers.

He has two arguments, really.  Unfortunately one (using entropy) is just wrong: entropy doesn’t prevent complex systems; it only requires that more entropy be generated to offset them. So long as you convert only a tiny fraction of the universe into computronium, entropy won’t stand in your way.

His other argument was better, but sketchy: uploaders prefer “what is good for the individual over what is good for the population”.  As he was arguing with Eliezer Yudkowsky among others, this is probably a misfire– judging from his Harry Potter fanfic, Yudkowsky does consider it an imperative that technology benefit everyone.

Still, there’s the germ of an actual good argument in there: that the uploaders think way too much about personally not dying, and way not enough about how to make what life we have worth living.  Morally, it’s hard to argue that our biggest problem is that people don’t live 1,000 or 1,000,000 years.  If humans keep on with the sort of behavior and morality and economics they have right now, such lifetimes would be hellish.  Even if you have a wildly optimistic view of how well we’re doing, prolonging lifetimes even to a couple hundred years would be horrible for 90% of the population, and that’s assuming we can even keep our civilization going.  (If you want to live forever, climate change is not your grandchildren’s problem, it’s yours.)  So even if you want immortality, you’d better prioritize, well, almost everything else.

But that’s a discussion for another day.  I was caught up short by this comment, by one Gregory in Seattle:

There is a growing belief among memory researchers that the brain relies on “archetypes.” You actually have only one or two physical memories of the taste of bacon: all of the apparent memories of bacon link back to them. REM sleep is when the brain recompiles, tossing out actual memories from short-term storage and integrating the day’s experiences into long-term storage with heavy object reuse (pardon the computerese.)

According to this model, children learn faster because they have fewer archetypes: they are building a “library” and links into them are pretty straightforward. As we get older, though, the ability to store and link novel information becomes more difficult and memory begins to ossify. Someone who pursues life-long learning can stave this off, but not completely. To use another computer example, the problem does not appear to be one of storage so much as the storage becoming fragmented. The ability to link begins to suffer, and memories begin to get lost in the shuffle.

Without a major redesign of how the brain stores memories, very long lifespans will probably bring us to a point where novel experiences cannot be integrated at all. We see this sort of slow down in people who are 90 and 100; I cannot imagine what it would be like for someone who is 200, much less 500 or 1000.

I’d never heard about this theory, but then I don’t know anything really about memory research.  But it’s a fascinating idea, and one that makes a lot of sense as a way for a creature of limited brain to organize the reams of sensory data that swamp it daily.

Though it’s not so much an argument against long lives as an argument that if we want to have them, we’ll have to change some basic facts about ourselves.  That’s why, in the Incatena, I have people doing a kind of brain reboot every century or two: throw out a bunch of memories, loosen the connections, re-adolescentize the brain.

To put it another way, your basic personality, attitudes, ideology, politics, etc. are generally pretty well firmed up by the time you’re 30.  You can adapt to new things after that, but with increasing difficulty– by the time you’re 80, you’re a curmudgeon who hates the kids’ music and clothing and votes for reactionaries.  That’s acceptable when lifetimes are 90 years, but not if they’re 900.  If you refuse to die, then you have to do something to regain your adaptability, for your own benefit and for that of society.

I picked up Elaine Morgan’s The Descent of the Child and devoured it in an evening.  I liked it a lot, and Morgan is very readable, and yet I have to throw on a steaming pile of caveats.

That’s because she’s a promoter of the Aquatic Ape Hypothesis– the idea that humans went through an aquatic or near-aquatic stage that accounts for their many differences from the other apes, such as hairlessness, a descended larynx, their thick layer of subcutaneous fat, and their early birth.  It’s a fascinating theory which turns out to be highly problematic.  Hairlessness, for instance, doesn’t correlate nicely with aquatic habitat; think of otters or polar bears (who are excellent swimmers).  Humans don’t have characteristics sea animals generally do have, such as very small ears.  Worse yet, a lot of the supposed facts of AAH supporters turn out to be just wrong– e.g. that non-aquatic mammals can’t hold their breath, that human infants are unusual in having a swimming reflex, or that our layer of subcutaneous fat is attached to the skin rather than the underlying tissue.

There’s only one chapter in the new book about the AAH, but when someone has a tendency to misquote the scientific literature, you have to mistrust what they say even on other topics.

The Descent of the Child is about babies and children.  Morgan goes over the biology of reproduction, gestation, birth, and childrearing, with a focus on where we are the same and where we differ from the other primates.  It’s a fascinating story, full of interesting facts.  For instance, we live at a much slower pace than would be expected for a mammal our size.  E.g. compared to chimpanzees, the age of puberty and our life expectancy are doubled.  Gestation proceeds at a leisurely pace, too, fitting nicely to a developmental schedule that should see the baby in the womb for 18 months.  Halfway through, the baby is evicted, resulting in an unusually inert and helpless newborn.

Her larger point is that in seeking to explain human features, scientists too often concentrate on adults only.  But the whole life-cycle is subject to evolutionary pressure, and things like the human baby’s helplessness are serious puzzles… isn’t it dangerous to have offspring that vulnerable?

At the same time, one of the hallmarks of humans, compared with the other apes, is neoteny.  Even as adults, we are much more like ape children than we are like ape adults– in appearance, in bipedality, in general playfulness.

This touches on linguistics; Morgan suggests that it was more likely children than adults who originated the first language, much as the best ape language learner was the bonobo Kanzi, who picked up a keyboard-based language by watching researchers attempt to teach it to his much denser mother.

Anyway, fun book, just double-check any facts she gives before recycling them in conworlds or at cocktail parties.

Alert reader Alon Levy pointed me to one of Chris Wayan’s revamped Earths.  They’re really a lot of fun, and essential reading for a conworlder.

Seapole

Each world starts with some simple concept, and then its geology and climate are worked out in detail. For instance:

  • Seapole (pictured above), with new axes chosen to put the poles in open ocean
  • Shiveria, with new axes that put both poles on land (producing a permanent ice age)
  • Dubia, Earth after a thousand years of global warming
  • Inversia, with land and sea reversed
  • Jaredia, another axis reboot, designed to create as many east-west continents as possible (as Jared Diamond recommends for advancing civilization)
  • Extremely large or small planets

It looks like he actually constructs these things and paints them, rather than just modelling them on the computer.

(I should perhaps note, the rest of Wayan’s site is devoted to retelling dreams, with pictures, and it’s… eccentric.  The worldbuilding is fascinating though.)
