There’s been a lot of worry lately that robots will take all of our jobs. Should you be worried? Should you try to make friends with the robots so they treat you nicely?


This would be bad

Now, there’s a lot to say here, so here’s the tl;dr: no, this is only moderately worrisome. What you should worry about instead is how the disruption is managed, and who gets the gains; more on that below.

Worries about automation go back to the beginning of the industrial revolution, two hundred years ago. But, with some major caveats, automation is good!  After 200 years,

  • Life for the majority of people is far better. Before automation, 90% of the people lived by subsistence agriculture, one bad harvest or pestilence or war away from death. And those scourges came almost constantly.
  • Americans, as usual, focus on bad things in America, and don’t realize that these are boom times for most of the world. Global poverty is way down; it’s never been a better time to be Indian or Chinese.
  • Despite all the worries about machines taking our jobs— they haven’t. US unemployment is currently under 5%—  which is about as low as it’s gotten in my lifetime.
  • In general, pre-automation jobs sucked. There’s a tendency to romanticize lost jobs, but you really do not want to be a cotton picker, or a miner, or a laundrywoman, or a data entry typist.

The thing is, at any point in the last 200 years, an alarmist could concoct a tale of machine devastation. With modern farming techniques we don’t need 90% of the population to work on farms. Omigod that means 90% of the population will lose their jobs!  Only, this didn’t happen. Only 1.4% of the US population works on the farm today; the rest of the 90% found other jobs.

Now, the major caveat: this process sometimes goes smoothly, but sometimes is hella disruptive. It’s not pleasant when a middle-aged person has to change careers, whether it’s an 1800s agricultural worker, or a 1980s steelworker. Whole regions can be devastated and not know how to pick themselves up.

Jane Jacobs had a lot to say about what happens when the process goes well, and when it doesn’t. She calls the successful places city regions; as the name implies, these are always near big cities. In brief, this is the belt round a city where automation produces new opportunities as fast as it erodes old jobs. In a city region, there is new work to do, and it doesn’t take a lot of intervention for people to find it. (The books on India I recently read are also good introductions to this process. Poor people are amazingly entrepreneurial when they get the chance.)

You can’t count on everyone to live in a city region, but you can manage the disruption in other ways. This is where you need a strong economic safety net. You want people to be able to change jobs.  It’s not a huge exaggeration to say that the New Deal succeeded because it cushioned the disruption of industrialization. Stimulus spending spurred production and job creation; Social Security allowed people to move to where the jobs were without abandoning their old folks; unemployment insurance kept people going between jobs; the GI Bill trained people for new occupations. Europe went farther, with universal healthcare and free university education.

(Do you want a universal basic income?  Go for it, so long as you’re not actually looking to reduce government benefits. But it’s a good idea on its own; there’s no need to drag the robots into it.)

OK, but aren’t the robots different this time?  They can drive cars now! They can take your order at McDonalds! Surely all the jobs will disappear!

The first thing I’d point out is, extrapolation is a crappy guide to the future. In 1890 you could predict that the cities of the future would be buried ten feet deep in horse manure. This didn’t happen.

Second, universal AI is a huge assumption. If you look at sf and pop-sci articles, humanoid robots are ten years away, and have been for a hundred years. The first robot story, Karel Čapek’s R.U.R., appeared in 1920. Basically, intelligence is a pretty hard problem, and researchers always underestimate it. It’s easy to feel (as I did when I was an undergraduate) that a pretty good AI would be just a few semesters of work. Well, it isn’t, or it’d be done by now!

Also, I spent years as a programmer, so I know just how stupid computers are. They are great tools, mind you! But I don’t think we should scare ourselves about their abilities, at least not yet.

The better question is, what sort of jobs can computers or robots do? The general answer: jobs that are

  • repetitive and predictable
  • expensive

Automation is not, er, automatic. It takes analysis, programming, and testing, and someone has to pay for all that. That’s why a repetitive assembly-line task, done by a highly paid union worker, is the first candidate for automation. It’s barely worth it to replace a waiter (especially since they can be hired for far less than minimum wage).

(Driving is a weird case. I think AI driving is far less advanced than it seems. As in much of programming, you can cover 90% of the work of the program and still only be 10% done.  The unexpected or difficult cases take most of the effort.)

Let’s put it the other way. What jobs are probably safe from automation in this century? Some of these, I’d wager:

  • teacher
  • physician
  • nurse
  • CEO
  • programmer
  • athlete
  • writer
  • comics artist
  • prostitute
  • craft brewer
  • video game designer
  • marketing & sales
  • legislator
  • soldier
  • actor
  • day care worker
  • hair stylist
  • product designer
  • scientist
  • thug
  • organic produce farmer
  • architect
  • call center operator
  • plumber
  • robot designer
  • robot mechanic
  • robot debugger
  • cook
  • valet
  • monk/nun
  • preacher
  • personal trainer
  • psychologist
  • web designer
  • lawyer
  • burglar
  • drug dealer
  • cop
  • spammer
  • SEO farm writer
  • AI researcher
  • anti-AI pundit

Many of these jobs, though not all, involve what humans are best at: dealing with humans. I don’t think anyone cares whether their cotton is hand-picked. I think it’ll be a long time before there’s a robot you would entrust your one-year-old to all day.

I have a friend who’s an architect. I’d say his work is at least half talking to clients, and managing building projects— i.e., managing other people (contractors and inspectors). There’s that human thing again. For making the actual plans, he already uses a computer. He can already produce a plan almost as fast as he can come up with an idea.

So the better question is not “Could a robot entirely do this job?” but “What could a computer-assisted person do in this job?” Lawyers, for instance, are often still stuck in the world of paper. Automation would allow them to take on more cases. (For good or for evil.)

I’ve purposely included some “bad jobs” on the list, because the point isn’t that “things will be fine.” But I’ll get back to the grim meathook future below.

I haven’t tried to anticipate what the new jobs of 2100 might be, but we can expect that there will be plenty of entirely new things. Over 200 years, we’ve moved from an agricultural economy, to a manufacturing economy, to a service economy. I’ve suggested before that what’s next is a frivolity economy.

Another point that I think worriers-about-robots miss: Robots and programs cost money. As one datapoint: Bitcoin mining presently consumes as much energy as the entire nation of Tunisia.

Plus, if you’re really pessimistic about the uses of humans— then the cost of hiring a human will plummet. Humans can be raised quite cheaply, without the use of high-cost metals and rare earths, and they’re really pretty versatile.

I’ve written before about why humanoid robots are a dumb idea. I realize that many people really want them, but I’d answer that they only think they want them. You do not actually want a sentient android to be your sex worker, household cleaner, or driver, precisely because a sentient android can do what it wants, not what you want. Maybe you want a robot you can talk to— but speech is a terrible medium for giving technical instructions.

We’re way too influenced here by science fiction. We grew up thinking of the Jetsons’ robot maid, or C3PO. In fact, a bulky robot maid holding 19th century tools in her 21st century manipulators is awfully poor design. Consider all the household automation we already have: dishwashers, microwaves, vacuums, washing machines. Not a single one of them is humanoid, not a single one does its tasks as a human would. Honestly, automation of the house is almost done, compared to the year 1900. But if you want more, a better model would be the room-cleaning bots seen in The Fifth Element.

Here’s another way to think about the whole situation.  Again, 90% of the population used to be engaged in subsistence agriculture. That basically means that the entire population can do what the 10% did before. Or to put it another way, there are 325 million Americans. One way to explain our economy to someone from 1800 is that we’re as rich as a country of 3.25 billion people would have been in their time.
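To spell out the arithmetic behind that comparison (a toy calculation, not serious economics):

```javascript
// If 90% of the 1800 population was tied up in subsistence farming,
// then freeing them multiplies the "effective" workforce by 1 / 0.10 = 10.
const usPopulation = 325e6;    // ~325 million Americans
const nonFarmShare1800 = 0.10; // the 10% who did everything else in 1800
const multiplier = 1 / nonFarmShare1800;

console.log(usPopulation * multiplier); // 3250000000 — i.e. 3.25 billion
```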

If we continue to automate predictable high-repetition tasks, maybe another 90% of current jobs disappear.  But the population will live like today’s 10% do. Their standard of living will be far higher, and their jobs on the whole more interesting than today’s. (Of course we’re writing sf at this point, so you might as well look at my attempt at an sf future.)

That doesn’t mean we won’t have a grim meathook future. Piketty has warned that our future might look like… the 19th century, when most income and wealth went to a tiny class— and not even a class of innovators and entrepreneurs, but a useless rentier elite. And of course right now as I write, a clown car of reactionaries is trying to take away tens of millions of people’s health care, while the clown-in-chief is demonizing trans men and women in uniform.

But that’s the thing: grim meathook future is a political choice. Automation is just a form of productivity increase— and productivity gains do not have to go entirely to the rich. They used to help out everyone.  Around 1980, American voters decided to stop helping out everyone, and help out only the top 10%.

If that continues, the future will be grim, robots or no.  But it’s not the nature of automation that is the threat. It’s whether we manage it under plutocracy or not.



With a few hundred thousand other people, I’ve been mesmerized by Jon Bois’s 17776.  It’s over here.  Take an hour and go through it all.


Avid football fan

Now, I am one of the few American males who does not get football. Never really mastered the rules, and nothing about it makes me want to. But I love Jon Bois. He has a series called Breaking Madden that’s hilarious. He takes a football sim (that would be Madden), forces it to do insane things, and tells the results as a story. Sometimes the game cooperates, sometimes it glitches out, it’s all good.

The elevator pitch for 17776 is “What football will look like in the future.” And he gets there! But 17776 is so much bigger and weirder than that. It’s a science fiction story. It’s a multimedia experience. It’s about sentient space probes.  It’s about human beings.  It’s a utopia— a bittersweet one.  It’s about friendship and God and in a couple of places it’s really moving.

First, the football.  No, wait, that won’t make sense without the basic situation. His method is to insert one wild hypothetical, and draw out its implications with no further magic. The hypothetical is this: in 2026, for no reason ever explained, people stop aging and dying (and being born).  That’s it.  Everyone finds themselves immortal. What do they do?

For one thing, they play football. For 15,000 years.  The rulebook gets really long and strange over that time. Bois invents half a dozen or more weird versions of football. The least weird of these is the first one he gets to: the playing field is the state of Nebraska; the end zones are Iowa and Wyoming. There are thousands of players at any one time, but only one ball, and the game lasts for years.

We’re introduced to this game, by the way, because the protagonists are watching it. They’re space probes— two Pioneer units, and a Jupiter probe that in 2017 hasn’t launched yet. One of the units— Pioneer 9— is woken up at the beginning of the story, which gives us a character who has to learn about all this world just as we do.

The story is mostly text conversations, but it plays with the medium expertly.  There are pictures, found documents real and imagined, GIFs and videos. Many of these use Google Earth to bounce over the globe, zooming effortlessly from outer space down to individual houses or football stadiums. (I’m inclined to say: don’t try this at home. Bois makes it work, but I really don’t want every story to be told this way.)

Bois has an interesting take on utopias / the future.  In his scenario, the people of 17776 are the same people who were alive in 2026. And for the most part, their society is ours, only perfected: nanobots keep people from injury and want; war and capitalism are gone. His take is that people will try the fancier visions of sf writers— flying cars, robots, etc.— but ultimately get rid of them because they don’t like them. People want to have jobs and walk around and cook and hold elections and hang out with their pals, to say nothing of playing and watching football. Plus, they’re 2026ers at heart and they stick with what they know.

Granted, his approach may only make sense in the narrow scenario he’s created. But there’s a lot of wisdom in his take. Other writers have seriously considered what people would do with near-immortality— Julian Barnes and Jorge Luis Borges, for instance. Bois is by far the most optimistic of them. Barnes and Borges concluded that most people would get bored within a thousand years; Bois thinks the human sense of play is enough to keep us going indefinitely.  (My own sf future envisions more change, but also doubts that getting too far from our primate heritage is a great idea.)

17776 is full of novelty and pure fun, but what makes it unforgettable is Bois’s heart. There’s all sorts of grimness and outrage these days; we don’t always get this full blast of benignity. Bois seems to just like people. There are no real villains here— except maybe a few cheap moves in some of the football games. And it’s hard not to surrender to this future of Nice But Not Amazing.

As you may know, they made another Star Wars movie.  It’s called The Force Awakens.


(Helpful links to my rewatch of the original trilogy: one, two, three.)

Like pretty much everyone else, my reaction is “Whew, they made a good Star Wars movie.” SW is supposed to be heroic, spectacular, and just a bit cheesy, and that’s just what they achieved.

I’ve seen a lot of people saying that it’s kind of a remake of the first film. I’d say that’s true of the last half of the film— the whole Starkiller thing. The first half, with the introduction of Finn and Rey, feels more original. And even if it is a reboot, it’s a very sure-footed one. The acting, the fighting, and the spectacle are really better than in the original.

I think Harrison Ford really sells the movie. There’s an art to delivering prime movie cheese. If you don’t accept it, it turns into camp, and if you’re too earnest, it seems laughable in a different way. Ford gets the balance exactly right. He makes his age work for the movie: he’s a tired, tough old rogue, and yet he never upstages the newcomers, but gently welcomes them into the series.

(It’s a narrative danger for a movie or book series or comic to fall in love with itself. You assume that the audience adores your characters, so you start to treat them portentously, have secondary characters do everything for them, and forget to actually keep them likable. Danger averted here: Han earns his hero status all over again in this movie.)

The weakest part of the movie is also the riskiest move: Kylo Ren, the emergent emo Sith Lord. I like that he isn’t Darth Vader; he’s young and a little naive, and makes mistakes. That’s a far more interesting Dark Lord to work with. He does better than (shudder) Hayden Christensen, and yet it’s a little hard to take him seriously when he takes off his helmet. He doesn’t do much to show why the Dark Side attracts him, or maybe the script just doesn’t let him do so.

I like the character of  Finn.  Was this someone’s elevator pitch? “Let’s see a Stormtrooper start to question his role.” Not a bad idea at all. I almost wish the elevator guy had convinced Abrams to make that the whole movie, because learning how the Stormtroopers operate and what the human beings inside the plastic suits are like would have been interesting. (They can’t all be motivated by fear, can they?  What do they do off duty? Are there a bunch of gung-ho Trumpists who just love Stormtrooping? Is Finn the only one with doubts?)

Rey is great, and I think she fits in with my contention that women make better video game protagonists. We feel what we see, so a stoic space marine lessens what we feel— if he doesn’t seem to care about what’s happening, why should we? Rey reacts viscerally to everything she goes through.

Her character arc is small here: basically from “wants to go home” to “wants to help out”. She doesn’t have to learn to be a hero; she’s heroic throughout. That’s a major difference from A New Hope, in fact, though in part it’s just that Finn gets the role of “guy out of his depth who has to step up”.  In a sense the usual transformation is applied to the audience instead: we expect an untrained nobody, we suspect Finn is going to be the New Luke, and we keep getting shown that Rey is the more competent one.

It’s tempting to say that a character needs more flaws and setbacks, but that isn’t always the case. Plenty of popular characters are pretty much always heroic. Besides, they’ll probably throw a lot of bad shit at her in Episode 8.

When I think about the story it feels a little contrived, or to put it another way, it’s a little too convenient that people always end up just where they need to be for the next bit of plot. But I didn’t really care about that while watching, and it probably wouldn’t have added much to paper over the contrivances— it’d just lengthen the movie for no great narrative gain. (Example: the raid on Maz Kanata’s planet, which conveniently takes place just after the plot points have been covered. It wouldn’t have been hard to, say, make it a week later. But it works emotionally to have everything happen almost in real time.)

(A bigger hole, I think: they destroyed the capital of the New Republic, right? Everyone is awfully blasé about that; they react much more to the death of one guy, albeit an important one. The one thing the movie doesn’t sell is the size of the galaxy. Compare the war in Consider Phlebas, which destroyed 90 million ships, 14,000 orbitals, and 53 planets. In many ways Star Wars feels like it has about a hundred planets total.)

OK, onto the traditional notes I took while watching…

  • The opening crawl is less amazeballs in 2015.
  • I’ve never quite got why everyone understands Droid but the audience.
  • That huge ship would make a great video game level. But really, all she could find in it to salvage is a double handful of parts?
  • Is it a good idea to steal that droid from the guy who’s stealing it?
  • Finn sometimes overplays the nebbishness.
  • I’ve played so many Bethesda games that I would’ve kept the armor to sell it.
  • Darth wouldn’t’ve trashed his own terminal, he’d’ve trashed the underling.
  • This whole section of the film doesn’t feel  like A New Hope at all.
  • “We shall see”— you may be Sith, Lord Snopes, but you could be a little more supportive.
  • OK, that’s a cantina scene.
  • No, Rey, never go down into a dungeon alone!
  • It’s taking these dudes a long time to commit to the cause. C’mon, folks, we know you got nothing to go back to.
  • Why is this big galactic laser beam visible from entirely different star systems? The galaxy never really feels like it’s the size of a galaxy.
  • Does anyone in-universe ever wonder why random people have a different accent?
  • Leia’s first appearance gives off a strong Hillary vibe.
  • My sufferance for C3PO hasn’t improved.
  • Hard not to look at Kylo Ren and think of Reaper.
  • Outsmarted, Kylo!  Try not to destroy your terminal again.
  • OK, Bigger Death Star, this is looking like a reboot.
  • The Falcon does pretty well with all this knocking into the scenery.
  • Everything is always so overbuilt in this universe. Wouldn’t plain drywall have been cheaper?
  • Here’s how well I’d expect a soldier trained with a laser rifle to do with a lightsabre: poorly. So from that point of view Finn is doing well.
  • “How fast is the weapon charging?” “At the speed of plot, sir.”
  • How does a planet collapse?  Was it full of air bladders?
  • The Starkiller episode kind of violates Mamet’s tenet of plot. Rather than repeatedly trying something and failing, the Rebels— sorry, the Resistance— come up with a plan and it works as planned on the first try.
  • BB8 didn’t get to go along on the final quest?
  • Pretty long denouement for an action movie.
  • How was that map made?  Did many Gungans die for it?  Also, why did no one recognize a huge frigging section of the Galaxy?  People live in the Galaxy, they will know its shape.  It’s like getting a map of Europe and saying “I have no idea how this fits on the globe!”

Join me in about two years for Episode 8!


The recent wave of terrorist attacks has made me wonder whether technology will ever, perhaps even within this century, advance to the point where regular terrorists are able to destroy the world. Humanity has, so far, survived 71 years in which it was possible to blow up the world if you had the resources of a superpower. But what if technology advances further, to the point where destroying the world gets within the means of your average, run-of-the-mill doomsday cult? Or even a deranged individual like Ted Kaczynski?

Related to this, I think that if we really lived in a world like that of James Bond movies or superhero comics, with supervillains regularly trying to destroy the world, the world wouldn’t survive for long: for the world to survive, the James Bonds and superheroes would have to win every single time, while for it to be destroyed, the villains would only have to win once. And eventually that one time would come— if you keep rolling the dice, sooner or later they will come up six.
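The dice metaphor is just compound probability. A quick sketch of how fast it compounds (the per-year risk here is made up purely for illustration):

```javascript
// Probability that at least one catastrophe occurs over n years,
// assuming an independent chance p of one in each year.
function doomChance(p, years) {
  return 1 - Math.pow(1 - p, years);
}

// Even a tiny annual risk adds up: at a (hypothetical) 1% per year,
console.log(doomChance(0.01, 71));  // ≈ 0.51 over 71 years
console.log(doomChance(0.01, 500)); // ≈ 0.99 over five centuries
```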


Man, with Britain voting to screw itself, Turkey going full dictatorship, and Trump promoting fascism here, to say nothing of humans slowly roasting the ecosphere, you don’t have enough to worry about?

For what it’s worth, if the world gets blown up, it’s still more likely to be a superpower that does it. Or at least a medium-sized state. This isn’t meant as a reassurance; it’s a reminder that we’ve escaped from nuclear holocaust by the skin of our teeth several times.  Here’s a Mefi page on near misses.

For non-state actors, a weak consolation is that though they are careless about human life, they are rarely self-genocidal. That is, there’s a rough rationality to extremism: atrocities are cheap and get attention, but the extremists do not actually want their enemies to destroy them all, because of course then their cause is dead. Of course, like any other politicians, extremists can misjudge likely results. Osama bin Laden probably didn’t plan on getting killed in a raid.

It’s always worthwhile to get some historical perspective. Here’s a chart of terrorist deaths over 40 years:


That is, outside of three countries (two of which are basically in civil war), terrorism is down worldwide. (Also, for comparison, the annual number of road traffic deaths is 1.25 million.)  Nothing to be complacent about, but we can too easily get the impression from the news that everything is terrible and always getting worse.

If you’re thinking of futuristic threats, it’s also worth remembering that people will have a strong motivation to develop futuristic counters. It’s not great worldbuilding (or prediction) to suppose that some agents get doomsday-in-a-box weapons and the motivation to use them, while their enemies have no clue about this, no similar weapons, and no conceivable responses.

Not that doom is impossible! But terrorists generally have their own enemies, they don’t want to destroy the world, and their abilities are limited. But feel free to be terrified of Trump with the nuclear football.

I’ve never read any Christopher Priest before, and The Prestige was recommended.  The library didn’t have it, but they had The Islanders, and I figured what the hell.

First, what is it?  It’s sf, of precisely the sort that explains why I use sf instead of ‘science fiction’.  It’s set on another planet, but it could easily pass as mainstream fiction, or magic realism. It reminds me of Borges, and even more of Georges Perec’s La Vie mode d’emploi, which tells the story of a Paris apartment building, room by room.

The Islanders calls itself a gazetteer, and in form it purports to be a tourist’s guidebook to the Dream Archipelago, a worldwide array of thousands of islands on another planet— though honestly it’s all so British that we might as well call it an alternative Earth.  The planet also has two large polar continents.  One, Nordmaieure, consists of “quarrelsome nation states” engaged in a perpetual war, which in eminently civilized fashion is actually fought in the uninhabited southern continent, Sudmaieure.  The archipelago is neutral, though to get to the battlegrounds troops have to pass through it, so it is hardly unaffected by the war.

The book is arranged alphabetically, from Aay to Yannet, giving descriptions of geography and local attractions, and a listing of what currencies are accepted. It’s soon evident that this is merely a pseudo-pedantic scaffolding for telling stories about the Islands and Islanders: love stories, a murder mystery, meditations on art, some incursions into horror. The gazetteer style is frequently abandoned in favor of news reporting, court reports, memoirs, or third-person stories.

The “Introductory” by an Islander notable, Chaster Kammeston, provides a fair appraisal and fair warning: “It is a typical island enterprise: it is incomplete, a bit muddled, and it wants to be liked.” And in fact I found it extremely readable. I finished it quickly and found none of it boring.

There are standalone stories, such as one of the horror ones, classified under the island Seevl. The manner is Lovecraftian: the story starts out as something of a love story, narrated by a man named Torm, and takes its time to get to the mystery of the ruined towers that cluster on Seevl.

But many of the stories are interconnected, though unreliably. The introducer, Chaster Kammeston, explains that no true map of the archipelago is possible, due to “temporal gradients.”  Later this is explained further: if you circumnavigate an island, you’ll find that landmarks have shifted or disappeared. Getting around can be trying, and people end up in different places than they intend.  My gosh, is there some kind of subtext here?

Mirroring this, the interconnected tales don’t quite cohere. For instance, Kammeston introduces and appraises the book, but a key event in later chapters is his own funeral. One character is said to have lived 250 years ago, and yet she was a lover of the artist Dryd Bathurst, who lived long enough to be interviewed by Kammeston, his biographer. Kammeston’s introduction claims that he has never left his native island, but later chapters contradict this.

A key event, narrated multiple times, is the death of a mime named Commis—  killed by a plate of glass which sliced down vertically from the loft of the theater where he was performing. Later we get an account that explains what that plate of glass was for, who left it too loosely attached, and why.  Later yet we learn that one of the suspects went by another name, one that by now we know.

In some unreliable narrator stories, the idea is to piece together the real truth behind the conflicting claims. Not here, I believe.  An in-world explanation is half-suggested: perhaps the indeterminacy which afflicts the physical world of the Archipelago affects the people too.  You return to a man or woman and they’re not the same person as before.  It’s hard to believe that the introduction from Kammeston refers to the same text we’re reading, and not just because of the funeral.  Or, probably more likely, Priest is just spitting on the notion of objectivity, as is common in mainstream lit. Real life is muddled, though it’s still rare (I think) for sf narratives to be also.

As conworlding, it’s brilliant and slapdash at the same time. There’s very little attempt to make this world different from Earth— in fact he could probably have set it on Earth making no substantial changes.  There are some sf elements, but little that affects the storytelling. The indeterminacy of the world has great thematic resonance but isn’t really taken seriously.  (E.g. it’s said that no map of the Archipelago is possible, and yet people do things like plot worldwide ocean currents, to say nothing of undertaking wars on the other side of the globe.)   And as mentioned, all the islands seem British, with a side order of Scandinavian.

Yet it has a real sense of local color— you do get a sense of these islands as distinct places, so that this is that rare thing, an sf world which feels like it has more than one culture.  Torm at one point has a neat insight about continents vs. islands:

[On the mainland] I felt instead the lure of distance, of places I could travel to and people I could meet without crossing a sea, and an endlessly unfolding world of certainty. Islands lacked that. Islands gave an underlying sense of circularity, of coast, a limit to what you could achieve or where you might go. You knew where you were but there was invariably a sense that there were other islands, other places to be.

It’s hard not to feel that he’s describing both Britain, and various sub-worlds within Britain. What he describes as the mainland attitude I recognize in Americans. We have regions, but they always feel like secondary things that you can ignore if you choose.


Would you like it? If what you really like could be described as “Larry Niven again”, then probably not.  But it doesn’t have the dry cold feeling of much experimental literature; rather, it’s warm, digressive, and passionately human. I liked it (far more than the Perec, in fact).

(FWIW I spent a couple of hours reading Priest’s blog. He’s a bit of a curmudgeon, with some judgments that seem more personal than reasoned. E.g. he likes Terry Pratchett and dislikes Charlie Stross, which doesn’t seem unconnected with being an old friend of Pratchett’s. He can be pretty amusing when he rips into Martin Amis, and spectacularly condescending when he offers advice to China Miéville. Fortunately this strain of Aggrieved Blogger doesn’t get into his book.)



Here’s an interesting essay by Cory Doctorow: “The Internet Will Always Suck”.

His point is that “we always use our vital technologies at the edge of their capabilities.” The Internet sucked in 1995 because it could barely handle images; it sucks today because it can’t reliably deliver high-res movies to moving cellphones in remote areas.

1500 year old Roman comb


It’s an excellent point— as technology gets better and more ubiquitous, it’s stretched, and it’ll be used in non-optimal ways, with attendant errors and frustrations. Doctorow pointedly reminds designers to plan for those error conditions… don’t succumb to the engineer’s perennial optimism that things will work as they’re supposed to, or as they do in optimal conditions at the engineer’s desk.

It’s the always that goes too far, though. We’re living in a time of hair-raisingly fast technological development, but that is almost certainly just dumb luck and won’t continue in the same way. Will the Internet still be advancing in leaps and bounds in fifty years? Maybe. In five hundred years? Almost certainly not.

Technologies do mature, and settle down in usable, predictable forms. There’s probably an example in front of you: the QWERTY keyboard, first produced by Remington in 1878, still going strong a century and a half later despite its original purpose (preventing jams on the typewriter) being entirely moot today. It’s not the best design, but it’s stable, and thus allows people to transfer their knowledge between machines and even between technologies.

Automobiles have changed in all sorts of ways in a hundred years, but the user interface of the automobile is nearly unchanged in the last half century. Your grandfather could drive your car, with maybe 30 seconds’ instruction in how to use the automatic transmission (mainly learning not to use the nonexistent clutch). As Bill McKibben points out, you’d have trouble understanding how to make a meal in the typical kitchen of 1900— but that of 1950 would be no trouble.

Many tools have had roughly the same shape and function for hundreds of years or more. The illustration is exactly what it looks like, a comb— the idea of dragging an array of hard spikes through the hair has never been surpassed.

The obvious objection is that there is improvement in automobiles, guns, hammers, pianos, coffee makers, whatever. We don’t make combs out of antler anymore. Well, of course. But we change the user-facing portions the least, and not every field sees the spectacular rate of change of electronics.

Computers are still in rapid development… though I’d note that programming hasn’t advanced anywhere near as fast as computers. You can write a program in Javascript today that’s remarkably similar to the Pascal of 1970. And even if computers stop changing, business hasn’t finished thinking up all the possibilities for transforming services and production.
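To illustrate the point: here’s a hypothetical little routine, Euclid’s gcd algorithm, written in today’s Javascript. Apart from surface syntax, it’s structurally the same program a 1970 Pascal textbook would show: declared variables, a while loop, assignments, a returned value.

```javascript
// Euclid's algorithm in modern Javascript. The structure —
// a loop, assignments, a return — would look almost the same
// in 1970-era Pascal (the comments show the Pascal equivalents).
function gcd(a, b) {
  while (b !== 0) {     // Pascal: while b <> 0 do
    const t = b;        // Pascal: t := b;
    b = a % b;          // Pascal: b := a mod b;
    a = t;              // Pascal: a := t;
  }
  return a;
}

console.log(gcd(48, 18)); // prints 6
```

Half a century of language development, and the everyday shape of imperative code has barely moved.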

It’s easy enough to imagine the process continuing for another fifty years. But five hundred? Five thousand? Even sf writers can’t make that convincing; they just mutter about “weakly godlike entities” and talk about something else instead.

I’ll venture a prediction: as soon as you can have sex with robots, we’ll be done. Less provocatively phrased: we’re now trying to stream megapixel movies on demand. Imagine a few more iterations of that: moving hi-res holograms; involvement of other senses; responses to the user’s position and movement. When we have a sensorium that mimics real life… where else is there to go? You essentially have Star Trek’s holodeck… or the capabilities of Second Life in real life. Once you can near-perfectly fool the human senses, that’s all you need to do; there’s little point in a fourfold increase in speed beyond that.  All the engineers will move over to genomics, in order to make furries a reality.

(Well, there’s one more requirement: your 3-d printer needs to be able to create a pizza. Then we’re done.)

Charles Stross is my favorite living sf author, so I was happy to finally get a copy of Neptune’s Brood. In fact I foolishly decided to finish it last night, so I ended up with four hours of sleep, countered by buckets of caffeine.


It’s a sequel to Saturn’s Children, but set something like 4000 years later… which means it’s effectively a new creation. The characters are no longer androids but metahumans— the difference between machine and human has greatly eroded. The heroine, Krina, is made of metal and computers, but she breathes and eats and presumably excretes, she certainly has no problem with emotions, she can have sex, and though Stross has fun with the details of non-biological life, little in the plot depends on them (unlike the earlier book).

What the book turns out to be about is debt. It begins with a quote from David Graeber, and this is not accidental. Stross talks about “fast money”, “medium money”, and “slow money”. Fast money is liquid cash and credit as we know it. Medium money is basically land and other long-term stores of value. Slow money is, well, even more long-term. It’s a currency designed for an interstellar civilization that doesn’t have FTL. As transactions have to be confirmed by two interstellar banks, it’s very long-term, non-liquid, stable, and safe. Slow money is essentially an artifact of space colonization: the process is so expensive that a colony starts out in enormous debt, which can generally be paid off only in millennia— or by starting a colony of one’s own.

Krina is a banking historian, with a specialty in fraud. She moves to a system named Dojima (this is done by beaming her brain-state and downloading it into a new body) to do research and find out what happened to her missing sister, and almost immediately gets caught up with a) a stalker trying to kill her; b) the Church of the Fragile, an organization dedicated to preserving biological humans, despite their comical maladaptation to modern life; c) an association of pirate underwriters. The last group is the most interesting… they do things like aggressively investigate insurance fraud, and audit cargos not to steal them but to do market interventions based on them. (Within a system, travel takes months but information travels in hours, so knowing what’s on a ship is valuable information.)

More details would either be spoilery or confusing. The plot is headlong and twisty. It all fits together pretty well, even if Krina is a bit more passive than the usual Stross protagonist.

As world building, it’s fantastic. Stross calls the book a “space opera”, which more or less means that he doesn’t want to be hassled if the science isn’t 100% plausible; but in fact there’s really nothing magical about his tech. He creates one exotic and fascinating environment after another, and Krina has to adapt to each one in turn. (At one point she becomes a mermaid. That might be a bit of a spoiler, but it’s on the damn cover.)

You can see why Stross is Paul Krugman’s favorite sf author: he takes problems of economics, money, and debt seriously. Krina, for instance, is instantiated as a slave— that is, she’s basically a clone of her mother, and forced to work for years to earn her freedom. This isn’t simply a bit of far-futuristic oomph; it’s actually straight Graeber, and relates to the main theme of the book: what debt does to people and societies.

I have a few quibbles, mostly related to narrative. It’s a long-standing convention in first-person novels that no one really explains why they’re writing out their story, but I think Krina is particularly messy here. She explains things that should be obvious in her world; she talks as if she’s researched her own story but never gives any metanarrative on why; the book shifts to third person in a few spots; and a few things are told in a weird order, as if Stross suddenly realized he needed to give some backstory to an event but didn’t feel like rewriting earlier bits.

Except in the Laundry novels, I think Stross has an ongoing problem making his antagonists smart enough. Of course we want the heroine to be smarter and later threats to be larger than earlier ones, but some of the antagonists here end up being just not very clever or dangerous.

It could be argued that Stross underestimates “Fragiles”— biological life— and overestimates how stable and durable metal and electronics are. Once you can play with genes like Javascript, who knows what limits biological transhumans have? But of course this isn’t a prediction of future development; it’s just a given of this universe that civilization has become non-biological, while (as systems do) retaining the traces of its origins.

But these aren’t biggies. It’s a fun book, it goes fast, and I wish there were a Volume Three…
