Yahtzee Croshaw not only does hilarious animated reviews of video games, he writes columns too, did you know?  His latest column is about gender diversity in video games, and it’s an excruciating near-miss.

All games do need a fatness slider

The problems start with the title: “Should every game allow you to choose your gender?” Which is a straw man (and not a straw woman). No one has asked for that.  Many games are telling a story about a particular character– Batman, Chell, Sam & Max, Jade, Corvo, Lara– and it’s OK for particular characters to have a gender.  It’s when the character is Generic Space Marine or Generic Spaceship Captain or Generic Zombie Hunter or Generic Swordsperson that there is no reason to limit the player to one gender.

But it hardly matters if a choice of gender is merely aesthetic and means nothing to the game, because it can still mean something to the audience.

Here’s where Yahtzee almost gets it.  Yes, Skyrim doesn’t care if your adventurer is male or female, but it means something to the player. And you don’t have to have AAA studio resources to handle this; games as simple as Dungeons of Dredmor and Don’t Starve allow it.

…it might not be possible to separate a character from their gender. James Sunderland from Silent Hill 2 springs to mind, as a central theme of that game is frustrated male sexuality.

Ah, the GTAIV excuse– they had to have three male characters because they were “exploring masculinity”. Like just about every other damn game.  It’s not horrible to have one more male fantasy hero– it’s just extremely well trodden ground. And trying to use the game to subvert the standard male fantasy hero does not really work as well as some designers think. Your game is what the player spends 90% of their time doing, not whatever contrary thematic material you add at the end or in cutscenes. If what the player is doing is shooting, you’ve made a shooter, not a clever deconstruction of shooters.

Perhaps this confirms the existence of a lack of diversity, but I’m not sure how to fix that. Game developers do remain predominantly male through no fault of their own, and asking them, from a male perspective, to make games about a female perspective, would probably produce something rather disingenuous.

This is what we might call a Chestertonian objection… Yahtzee is being clever, but it’s still a silly rationalization. For one thing, it’s hardly a weird radical idea for men to write female characters.  They’ve been doing it for three thousand years.  It’s something an artist should be able to do. And many games do it very well!  No one complains that FemShep, or Portal 2’s GLaDOS, or Ragnar Tørnquist’s April Ryan, are grotesquely unbelievable; quite the opposite.

Plus, you don’t know how to fix it? How about hiring female developers? Kim Swift led the team that created the well-beloved Portal; Rhianna Pratchett was the key writer on Mirror’s Edge and Tomb Raider; Roberta Williams created the King’s Quest games.

I know that it’s very easy for me, a white dude, to say that about a white-dude-dominated industry. But I don’t buy the argument that biological similarities like race or gender strongly affect whether or not the player identifies with a character.

I’m a white dude too, which is why I defer to non-whites and non-dudes on whether they identify with white dude characters. And what they report is pretty consistent: if you’re not a white dude, you have to identify with white dude characters, but you’d like to not always have to.

Yahtzee reports that he identifies more with Lara Croft than with Kratos. That’s lovely, but Lara is still a rarity– Yahtzee is not often called upon, as a gamer, to trot out his empathy skills. Non-white or non-male gamers have to do it all the time, and it gets tiring.

Plus, many of us like to see the world from other people’s perspective. I like playing female characters, and I’ve argued that they make better player characters anyway.

I don’t think that hero-damsel enforces misogyny. After all, the protagonist, the male, is the one who has it worst. He’s the one who has to put himself at pain, and even die, over and over again, in an endless cycle of torment, for the benefit of the women.

Another Chestertonian paradox. But Yahtzee seems to have forgotten that he’s talking about games– there is no pain and no death involved, he is not sacrificing himself for the pixels arranged to form a female NPC. If you’re not trying to make cute arguments, it’s obvious that the hero-damsel trope is a male power fantasy. It’s designed to make males happy; females, not so much. And that’s precisely the problem: it’s a trope that alienates half your audience.

And if I object to that, it’s because it’s lazy, and tired [...]. Hero-damsel isn’t trying, it’s too easy.

This is where he almost gets it. Yes, it’s a tired, lazy old trope. But so is, say, red meaning “stop” or “blood”, or tutorial dungeons having giant rats and goblins, or a reversal at the end of Act I. Some tropes are old and good; some are shallow but extremely narratively convenient; some should be shaken up now and then to add variety. But some are past their sell-by date– they’re narrative survivals from a time when attitudes were much more regressive. It’s good to reject them for being hackneyed; it’s also good to reject them because they’re insulting and offensive.

I do think it’s true that games could use more diversity. But when I say that, I mean diversity of ideas, thoughtfulness, and perspectives. And that takes a whole lot more than just numerically equalizing the ham sandwiches to the sausage rolls.

Another almost-gets-it moment, followed by another straw man.

Where do you think diversity of perspectives comes from? From diverse people. Put a bunch of white dudes in a room, and you’ll get some variation, but you’ll get more if you add people of other genders, races, and cultures. It’s strange and frustrating to see Yahtzee take this position, when half his reviews are scathing rants about the sameness of most games. Put it together, man. Put the same white dudes in the same room all the time, and what do you think will come out?

Of course, diversity in the HR sense isn’t the only way to get new ideas. But it’s a pretty good way to start, and if you take it seriously, it’s an excellent corrective to the groupthink and conventionalism that produce cookie-cutter games.

Saw The Congress tonight, a film by Ari Folman, loosely based on one of my favorite books, The Futurological Congress by Stanisław Lem.

It’s an amazing book; the film starts slowly (and at first seems to have nothing to do with the book)– I was dubious during the first hour– but then it goes insane. In a good, Lemian way.

The first part of the movie is set in the near future. A middle-aged actress, Robin Wright, played by Robin Wright, is approached with what she’s told will be her last film opportunity ever. The studio wants to scan her once, to turn her into a digital character… after that, they have no further need of her, and in fact she’ll be forbidden to act. It’s not spelled out what she’ll get in return, but apparently she’s desperate for cash to take care of her son (who’s going deaf and blind), so she signs the contract.

All this is presented slowly and didactically, and even when the sf elements come in– the actual scanning– it’s not satisfying. They basically want her to emote for a few hours while being photographed… weirdly, there are flashing lights making it seem like they’re taking still pictures. It’s well acted, and yet makes little sense: how could even a couple hours of performance suffice for generating decades of movies?  In the scene she goes from laughter to tears… but what about every other emotion?  Fear, disgust, anger, orgasm? It’d frankly have made more sense if they said they were scanning her brainwaves or something.

Then, we skip forward 20 years, and Wright attends a “Futurist Congress” associated with her studio, which requires taking an ampoule of some drug, which alters her and our perception.  The presentation switches to an animated movie, with a style blending ’20s pipecleaner and ’60s psychedelic animation.  The film suddenly becomes visually inventive, half playful, half nightmarish, and the plot starts to get weird as well.

This half of the film is also, surprisingly, a recognizable adaptation of Lem’s novel, with the substitution of Wright for Lem’s astronaut hero Ijon Tichy. The studio, tired of mere digitization, has switched to powerful customized hallucinogens that reshape reality.  But then things go wrong– there is some kind of rebellion– Wright is rescued by the man who was responsible for animating her, and who fell in love with her.  But he can only take her to the underground sewer the hotel managers have escaped to, with their inflatable chairs and their secretaries. And with the chemicals in the air intensifying, it’s increasingly unclear whether she’s experiencing rescues or drug-induced nightmares.

She’s cryogenically frozen and wakes up in an even stranger future– a world entirely composed of fantasy. Everyone seems happy, but she can’t adjust, and misses her son.  She seizes the chance to take one more drug, which erases all the effects of the hallucinogens… revealing a shabby brown world, back to live action. Should she stay there, or head back to the comforting fantasy world?

I appreciate high weirdness in art, but it’s all too easy to let it get out of control. Fortunately Folman keeps the story coherent– it’s not just a head trip. He grafts the whole story of Wright, her career, and her family onto Lem’s furious satire. It’s an attempt to give the story a heart, which admittedly the novel lacks.  Tichy reacts to things like we would, serving as bemused spectator and then expressing horror and outrage as he learns how the world really is– but we don’t exactly care about him.  Wright on the other hand is little but emotion; she doesn’t seem to think about anything that’s presented to her.

The book is a tour de force, a whirlwind of grotesqueries and wordplay and ideas taken to wild but logical extremes.  But perhaps it was a little too cool-headed to make a good movie.  So read the book and watch the movie… just be patient for the first 45 minutes or so.

I approved the proof of In the Land of Babblers a few days ago, created the Kindle version, and, good lord, it’s available right now. The print book is on sale at $12.56.

If you’re not in the US, it may take some days for the appropriate Amazon local minions to serve it up.

The proof for The Book of Cuzei arrived too. That’s 382 pages of superior supplementalness. It will take me a bit to read through it, so it’ll probably be available at the end of the month or soon after. Then the omnibus edition is a matter of stitching the two books together. If you think you want both, it’s worth waiting for that.

I had about a week in between proofing the books, which I could have spent in any number of productive ways, but instead I got a massive cold. Still feel pretty rotten, in fact, but it’s getting better.

Events in the Ukraine had me wondering about fascism, and I remembered your essay on Bush. I went back to it, and read through nearly all of Neiwert’s essay you linked. It’s interesting to read material a decade on from those events leading up to the invasion of Iraq. It was a different time.

Both your piece and Neiwert’s extended essay haven’t aged very well, no offense. We know Bush fizzled out as Iraq devolved into genocidal bloodletting and he lost serious alignment with the far right wing before the end of his second term. Frankly, coming from the extreme right background I have, I was never very worried about a Bush dictatorship. I guess my reasoning was something like, all my childhood I’d heard horror stories of how Clinton was going to hold onto office, and now as a much more liberal adult, I couldn’t put any weight to the liberal fears of a similar thing happening with Bush.

Anyway, we know Bush let his hubris and idiocy destroy the right tidal wave. But how do you view the danger of right-wing American fascism today?

It’s been 6 years of increasingly bizarre conservative extremism. It’s been easy to pass off as racists and old people, but Neiwert’s essay in particular has me wondering if this is mistaken. Certainly, it’s hard to downplay the real-world successes of the extreme fringe, and the rightward track of the Republican Party is hard to ignore. And the fact that Tea Partiers and militia men are still around suggests to me that this isn’t as fringe as we on the liberal side of things would like to believe.

The question of marginalization is seemingly obvious, but by 2016 there will have been 8 years of contentious Democratic rule, an economy that hasn’t fully recovered, potentially unpopular wars in Syria and Iraq, a devolving (and increasingly fascist) climate in Europe, and the potential for a Clinton candidacy. These are all things which could push the electorate towards the GOP.

Is that GOP more likely to move further along Paxton’s five steps of fascist movements than with Bush? Violence was the missing piece for both you and Neiwert, and that hasn’t exactly changed, but a lot else seemingly has.

—Matthew

I think you’ve put your finger on a paradox: the right has become crazier and crazier, and yet the threat to democracy seems less. There were some worrying moments in 2009— wingnuts fantasizing about military coups or assassinations.  But they turned their attention to winning the House in 2010.

So, the basic answer to why they haven’t turned to violence is that they haven’t needed to. They have the House and a good chance at picking up the Senate this year. They have 29 Republican state governors, 28 state legislatures, and 5 of 9 Supreme Court justices. They can’t get everything they want, but they can bust unions, shut down abortion clinics, punish the electorate with austerity measures, stop gun laws, restrict voting rights, and obstruct a liberal agenda in Congress.

As you say, times change— ten years ago they not only held the whole federal government, but it seemed (to them and to their enemies) as if they might be settling in for a long haul of governing.  But the demographics, and their own zealotry, make that seem less and less likely. Their message is out of date and they’ve systematically outraged every constituency but straight, old, white, Christian males.  And yet there’s nothing pushing them to change in the short term. I don’t see how they can continue on this path for twenty more years… but they can easily keep going as they are for five or ten years.

I’m not too worried about the other things you mention. I don’t think Obama has any intention of restarting the war in Iraq. European politics never affects the US.  And though a Clinton presidency looked in 2008 like it would be horribly contentious, well, that’s business as usual today. 

But 2016 will be interesting. The GOP generally nominates the most centrist guy they can find— though they hate themselves for doing it. But do they have any non-crazies left?  

Edit: Forgot to add that though the vehemence of the extreme right always seems surprising, actually it’s been that way forever. You can easily recognize the Tea Party and the birthers in Richard Hofstadter’s 1964 essay “The Paranoid Style in American Politics”.

Since the enemy is thought of as being totally evil and totally unappeasable, he must be totally eliminated—if not from the world, at least from the theatre of operations to which the paranoid directs his attention. This demand for total triumph leads to the formulation of hopelessly unrealistic goals, and since these goals are not even remotely attainable, failure constantly heightens the paranoid’s sense of frustration. Even partial success leaves him with the same feeling of powerlessness with which he began, and this in turn only strengthens his awareness of the vast and terrifying quality of the enemy he opposes.

The enemy list changes over time, but the style remains. 

I read an article in New Scientist [link requires free registration] a couple of days ago about programming languages. The writer thinks most of them were poorly designed, that is, hard to learn, hard to use, and difficult to debug. He said that there were about 15 to 50 errors per thousand lines of code, and huge systems like Windows accumulated masses of them. “As more and more of the world is digitised”, the problem will get worse, with the potential for fatal accidents in areas such as aviation, medicine, or traffic. One solution is “user-friendly languages” which let the programmer see what they do “in real time as they tinker with the code”. Another is to design programs that write themselves, based on Google searches.

So I’ve got a couple of questions for the guru.

One, what is your opinion of the article, the problem, and the suggested solutions, as an expert?

And for my second question, what qualities make a programming language good or bad? Can you rank the languages you’ve used in order of usefulness? Or is that a pointless endeavor? Are there any which are stellar, or any which you wouldn’t advise anyone to mess with?

—Mornche

The article is a bit misleading, I think. Brooks gives some examples of geeks trashing computer languages, but it should be understood that the geek statement “X IS TOTAL SHIT IT SHOULD DIE IN A FIRE” just means “X was about 95% what I like, but the rest disappointed me.” 

Like any geek, I love the opportunity to trot out my opinions on languages. When I was a lad, Basic was supposed to be easy, so everyone learned it.  It’s total shit and should die in a fire.  That is, it was disappointing. The early versions gave people some very bad habits, such as not naming variables, using numeric labels, and GOTOing all over the place— most of these things are fixed now.  Fortran is honestly little better; Cobol adds tedium for no good reason.  A lot of modern languages— C, C++, C#, Java, Javascript— are surprisingly similar, and inherit their basic conventions, like Pascal, from Algol.  I liked Pascal a lot (haven’t seen a compiler for it in twenty years), and I like C# almost as much. I haven’t used Ruby or Python, but looking briefly at code snippets, they look a lot like (cleaned-up modern) Basic. An experienced programmer can always learn a new language, and how crappy their programs are depends on them, not the language.

There are, of course, lots of little stupidities that have caused me a lot of problems. To take one at random, C uses = and == with different meanings, and it perversely uses == for simple equality. Pascal got this right. There are also amazingly clever bits in languages that I’d hate to do without (data hiding, for instance).   

One thing the article misses is that what’s really a pain to learn is not the mechanics of the language, but the libraries and UI frameworks.  The C family and Java are very similar, but the libraries aren’t, and that’s what will take you months to pick up.  (Unless you use .NET, which is designed to use the same library across multiple languages, so the languages themselves become a set of syntactic skins you can pick by personal preference.)

Programmers have realized before how tedious and error-prone their work is, and there have been many attempts to help, including:

  • Smarter development environments, like Visual Studio. These take care of indenting for you, check for mismatched braces and such, and highlight keywords. You can rename a variable program-wide, or break out a section of code as a separate routine, or insert commonly used code fragments, with one command. This not only saves time, but keeps you from making common errors.
  • New paradigms— as when we switched from procedural to object-oriented programming about twenty years ago, or to Agile about ten years ago. When you’re in your 20s you get really excited about these revolutions. Crusty middle-aged people like me are a little more jaded— these methodological changes never quite live up to the hype, especially as they don’t address the management problems identified fifty years ago by Frederick Brooks: too much pressure to make impossible deadlines with inadequate tools.  (Which isn’t to say change is bad.  Object-oriented programming was an improvement, partly because it allowed much better code re-use, and partly because if it’s done right, routines are much shorter, and shorter code is more likely to work. But good lord, I’ve seen some horrifying OO code.)
  • Higher levels of abstraction. This is largely what the New Scientist article is talking about.  Earlier forms of the idea include specialized string processing languages (Snobol), simulation languages (Simula), and database specs (SQL). When I was doing insurance rating, I created an insurance rating language. Someone always has a vision of programming by moving colored blocks around or something.

A lot of programming is incredibly repetitive; all programmers recognize this. The bad programmer addresses it by copying and pasting code, so his programs consist of endless swaths of similar-but-confusingly-different routines. The good programmer addresses it by abstraction: ruthlessly isolating the common elements, handling common problems the same way (ideally with the same code), making UI elements consistent, moving as much detailed behavior as possible out of the code itself into high-level specifications. All the libraries I mentioned are just earlier programmers’ prepackaged solutions to common problems.

Often the idea is to come up with something so powerful and easy to use that it can be given to the business analyst (that is, the non-programmer who’s telling the programmer how the real-world thing works) to do.  This usually doesn’t work, because

  • the programmer’s idea of “easy” is not that of ordinary people, so the business analyst can’t really use the tools.
  • most people don’t have the programmer’s most important learned skill: understanding that computers have to be told everything. Ordinary humans think contextually: you remember special case Y when Y comes up.  Programs can’t work like that– someone has to remember Y, and code for it, long before Y happens.

The reason that programming takes so long, and is so error-prone, is that no one can work everything out all at once, in advance. The business analyst suddenly remembers something that only happens every two years on a full moon, the salesman rushes in with a new must-have feature, the other guy’s system doesn’t work like his API says, field XYZ has to work subtly differently from field WXZ, we suddenly discover that what you just wrote to the database isn’t in the database, no one ever ran the program with real data.  Abstraction in itself will not solve these problems, and often it introduces new problems of its own— e.g. the standardized solution provided by your abstraction vendor doesn’t quite work, so you need a way to nudge it in a different direction…

Again, I don’t mean to be too cynical. When they’re done well, code generators are things of beauty— and they also don’t look much like code generators, because they’re designed for the people who want to solve a particular problem, not for coders. An example is the lovely map editing tool Valve created for Portal 2.  It allows gamers who know nothing about code or 3-d modeling to create complicated custom maps for the game. Many games have modding tools, but few are so beautifully done and so genuinely easy.

But I’m skeptical that a general-purpose code generation tool is possible.  One guy wants something Excel-like… well, he’s right that Excel is a very nice and very powerful code generator for messing with numbers. If you try using it for anything more complicated, it’s a thing of horror.  (I’ve seen Excel files that attempt to be an entire program. Once it’s on multiple worksheets, it’s no longer possible to follow the logic, and fixing or modifying it is nearly impossible.)

The other guy wants to “allow a coder to work as if everything they need is in front of them on a desk”.  I’m sure you could do some simple programs that way, but you’re not going to be able to make the sort of programs described earlier— an aircraft software suite, or Microsoft Word.  You cannot put all the elements you need on one desktop. Big programs are, as the author notes, several million lines of code.  If it’s well written, that probably means about 40,000 separate functions.  No one can even understand the purposes of those 40,000 functions— it takes a team of dozens of programmers.  Ideally there’s a pretty diagram of the architecture that does fit on a page, but it’s a teaching abstraction, far less useful— and less accurate— than an architect’s plan of a house. (Also the diagram is about two years out of date, because if there’s anything programmers hate more than other people’s programming languages, it’s documentation.)

So, in short, programmers are always building tools to abstract up from mere code, but I expect the most useful ones to be very domain-specific. Also, lots of them will be almost as awful as code, because most programmers are allergic to usability.

Plus, read this. It may not be enlightening but it should be entertaining.

All the fuss about the Dota 2 tournament finally got my curiosity up, and I decided to reinstall it. Steam tells me I’ve played it for 45 hours, pretty much every one of which was full of confusion and dread.

Lina does not need your petty 'armor'!  Ouch (dies)

If you know TF2, you know it takes some time to learn to play the nine classes, and many players never bother with some of them. In Dota 2 and LoL there are over a hundred. They do break down into overall roles (pusher, support, jungler, assassin…), but their abilities vary and each has to be learned separately. Worse yet, you have to learn how to play against each one, and then you have to worry about which ones combine together well. Oh well, there’s only ten thousand possible combinations. No wonder there’s enough strategic depth to support professional competition.

So anyway, I tried some Dota 2 and never felt like I was getting it. So I decided to try out League of Legends, not least because my friend Ash works for them.

Lux sux when I play her; devastating on enemy team

For what it’s worth, I think LoL is a little easier to pick up. You don’t have to worry about denies (killing your own creeps so the enemy can’t get the last hit), or couriers. Plus it feels like you can use your spells a little more generously, which is more fun. But they’re really very similar games.

(Dota 2 tries to characterize the opposing teams more– they’re the Dire and the Radiant, and the art direction makes it seem like good vs evil. But any hero can play for any team, and none of it leads anywhere, so this effort seems misplaced. LoL just has Blue and Purple.)

The basics of the game are simple enough. Most of the fighting is done by hordes of NPC minions, who advance to the enemy, fight them, and destroy protective turrets. If you destroy the enemy’s farthest building, the Nexus, you win. You play a Champion, who can attack enemy minions and turrets and, more importantly, harass or kill enemy Champions.

You pretty much have to put aside your FPS reflexes. You don’t just whale on minions– you only get gold and XP if you actually kill them (getting the “last hit”). In the early game you’re weak, and it’s best to wait till you can be sure of getting that hit. You use the XP to advance in level, and the gold to buy items to enhance your skills.  You generally reserve your abilities (which have a cooldown and so must be doled out) for enemy Champions.  It takes a delicate balance to wear them down without taking too much damage yourself.  Most champions have an “ult”, a skill with high damage and long cooldown, which you want to save for a killing blow.

If you want to try it, there are some brief tutorials, and then you can try games against bots, at three difficulty levels.  Just dive in; you’ll be matched with people of your level, so people rarely expect you to have skills you don’t know.  In bot games, in fact, people tend to be pretty quiet.  There’s no voice chat, which makes strategy a little harder but does avoid toxicity.

I’ve only played two games against humans, because then you need more skills– e.g. recognizing when enemies are missing, ‘cos then they’re probably hiding and waiting to gank you.  I won one and lost one.  The login server is down right now, or I’d be playing rather than blogging.

You can play any champion in Dota 2, but in LoL you must use a small set of free ones, or unlock them with in-game experience or actual cash dollars.  This sounds restrictive but is probably a better introduction, since it focuses your attention on learning a few champs at a time.

So, is it fun?  So far, yes.  I’m intimidated by the learning curve, but the matchmaking system means that (unlike, say, my other fave team game, Gotham City Impostors), you won’t get into a noobs-vs-gurus rout. Like any team game, it’s most fun when you play with friends, so bring a few along. 

(Don’t take any of this as a tutorial, though… it’s definitely a good idea to read a few intros and spectate some games.  Advanced guides will be incomprehensible, so alternate reading with playing against the bots to put what you know into practice.)

I ordered the proof copy of In the Land of Babblers today. So it’s on the way!

Once the book arrives, I’ll read the hell out of it. I always find more mistakes reading a physical copy than I do reading it in Word. Then I make corrections, and generally order another proof. So it should be ready sometime in September.

Plus there’s a companion volume– all sorts of material on Cuzei, published and not. That’s mostly done, but I may add something else to it, so it may take just a bit longer.
