I read an article on New Scientist [link requires free registration] a couple of days ago about programming languages. The writer thinks most of them were poorly designed, that is, hard to learn, hard to use, and difficult to debug. He said that there were about 15 to 50 errors per thousand lines of code, and huge systems like Windows accumulated masses of them. “As more and more of the world is digitised”, the problem will get worse, with the potential for fatal accidents in areas such as aviation, medicine, or traffic. One solution is “user-friendly languages” which let the programmer see what they do “in real time as they tinker with the code”. Another is to design programs that write themselves, based on Google searches.


So I’ve got a couple of questions for the guru.


One, what is your opinion of the article, the problem, and the suggested solutions, as an expert?

And for my second question, what qualities make a programming language good or bad? Can you rank the languages you’ve used in order of usefulness? Or is that a pointless endeavor? Are there any which are stellar, or any which you wouldn’t advise anyone to mess with?

—Mornche

The article is a bit misleading, I think. Brooks gives some examples of geeks trashing computer languages, but it should be understood that the geek statement “X IS TOTAL SHIT IT SHOULD DIE IN A FIRE” just means “X was about 95% what I like, but the rest disappointed me.” 

Like any geek, I love the opportunity to trot out my opinions on languages. When I was a lad, Basic was supposed to be easy, so everyone learned it.  It’s total shit and should die in a fire.  That is, it was disappointing. The early versions gave people some very bad habits, such as not declaring variables, using numeric labels, and GOTOing all over the place— most of these things are fixed now.  Fortran is honestly little better; Cobol adds tedium for no good reason.  A lot of modern languages— C, C++, C#, Java, JavaScript— are surprisingly similar, and inherit their basic conventions from the same Algol tradition that produced Pascal.  I liked Pascal a lot (haven’t seen a compiler for it in twenty years), and I like C# almost as much. I haven’t used Ruby or Python, but looking briefly at code snippets, they look a lot like (cleaned-up modern) Basic. An experienced programmer can always learn a new language, and how crappy their programs are depends on them, not the language.

There are, of course, lots of little stupidities that have caused me no end of problems. To take one at random, C uses = and == with different meanings, and it perversely uses == for simple equality. Pascal got this right. There are also amazingly clever bits in languages that I’d hate to do without (data hiding, for instance).
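
If you’ve somehow never been bitten by this, here’s a minimal sketch of the classic bug (modern C compilers will at least warn about it):

    #include <stdio.h>

    int main(void) {
        int x = 0;

        /* Meant to ask "is x equal to 5?", but typed = instead of ==.
           This assigns 5 to x, and the condition is then the value 5,
           which C treats as true: the branch runs every time. */
        if (x = 5)
            printf("oops: x is now %d\n", x);

        x = 0;
        if (x == 5)   /* what was intended: == compares, = assigns */
            printf("this never prints\n");

        return 0;
    }

Pascal’s scheme rules this out: = means comparison, := means assignment, and confusing the two is a compile-time error rather than a quietly wrong program.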

One thing the article misses is that what’s really a pain to learn is not the mechanics of the language, but the libraries and UI frameworks.  The C family and Java are very similar, but the libraries aren’t, and that’s what will take you months to pick up.  (Unless you use .NET, which is designed to use the same library across multiple languages, so the languages themselves become a set of syntactic skins you can pick by personal preference.)

Programmers have realized before how tedious and error-prone their work is, and there have been many attempts to help, including:

  • Smarter development environments, like Visual Studio. These take care of indenting for you, check for mismatched braces and such, and highlight keywords. You can rename a variable program-wide, or break out a section of code as a separate routine, or insert commonly used code fragments, each with one command. This not only saves time, but keeps you from making common errors.
  • New paradigms— as when we switched from procedural to object-oriented programming about twenty years ago, or to Agile about ten years ago. When you’re in your 20s you get really excited about these revolutions. Crusty middle-aged people like me are a little more jaded— these methodological changes never quite live up to the hype, especially as they don’t address the management problems identified fifty years ago by Frederick Brooks: too much pressure to make impossible deadlines with inadequate tools.  (Which isn’t to say change is bad.  Object-oriented programming was an improvement, partly because it allowed much better code re-use, and partly because if it’s done right, routines are much shorter, and shorter code is more likely to work. But good lord, I’ve seen some horrifying OO code.)
  • Higher levels of abstraction. This is largely what the New Scientist article is talking about.  Earlier forms of the idea include specialized string processing languages (Snobol), simulation languages (Simula), and database specs (SQL). When I was doing insurance rating, I created an insurance rating language. Someone always has a vision of programming by moving colored blocks around or something.

A lot of programming is incredibly repetitive; all programmers recognize this. The bad programmer addresses it by copying and pasting code, so his programs consist of endless swaths of similar-but-confusingly-different routines. The good programmer addresses it by abstraction: ruthlessly isolating the common elements, handling common problems the same way (ideally with the same code), making UI elements consistent, moving as much detailed behavior as possible out of the code itself into high-level specifications. All the libraries I mentioned are just earlier programmers’ prepackaged solutions to common problems.
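
A toy illustration of the difference, with insurance numbers I’ve invented for this post (it’s not from any real rating system):

    #include <stdio.h>

    /* Copy-and-paste style: near-duplicate routines.  When the home
       surcharge changed from 7% to 5%, someone had to remember to fix
       the second copy-- and didn't. */
    void print_auto_premium(double base) {
        printf("Auto premium: %.2f\n", base * 1.07);
    }
    void print_home_premium(double base) {
        printf("Home premium: %.2f\n", base * 1.07);  /* stale: should be 1.05 */
    }

    /* Abstracted style: the common logic lives in one place, and the
       variation is reduced to data.  One fix fixes everything. */
    void print_premium(const char *line, double base, double surcharge) {
        printf("%s premium: %.2f\n", line, base * (1.0 + surcharge));
    }

    int main(void) {
        print_premium("Auto", 1000.0, 0.07);
        print_premium("Home", 1000.0, 0.05);
        return 0;
    }

Scale that up to a few hundred routines and you can see why pasted code rots: every copy is one more place for a business rule to silently go stale.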

Often the idea is to come up with something so powerful and easy to use that it can be given to the business analyst (that is, the non-programmer who’s telling the programmer how the real-world thing works).  This usually doesn’t work, because

  • the programmer’s idea of “easy” is not that of ordinary people, so the business analyst can’t really use the tools.
  • most people don’t have the programmer’s most important learned skill: understanding that computers have to be told everything. Ordinary humans think contextually: you remember special case Y when Y comes up.  Programs can’t work like that– someone has to remember Y, and code for it, long before Y happens.

The reason that programming takes so long, and is so error-prone, is that no one can work out everything all at once, in advance. The business analyst suddenly remembers something that only happens every two years on a full moon, the salesman rushes in with a new must-have feature, the other guy’s system doesn’t work like his API says, field XYZ has to work subtly differently from field WXZ, we suddenly discover that what you just wrote to the database isn’t in the database, no one ever ran the program with real data.  Abstraction in itself will not solve these problems, and often it introduces new problems of its own— e.g. the standardized solution provided by your abstraction vendor doesn’t quite work, so you need a way to nudge it in a different direction…

Again, I don’t mean to be too cynical. When they’re done well, code generators are things of beauty— and they also don’t look much like code generators, because they’re designed for the people who want to solve a particular problem, not for coders. An example is the lovely map editing tool Valve created for Portal 2.  It allows gamers who know nothing about code or 3-D modeling to create complicated custom maps for the game. Many games have modding tools, but few are so beautifully done and so genuinely easy.

But I’m skeptical that a general-purpose code generation tool is possible.  One guy wants something Excel-like… well, he’s right that Excel is a very nice and very powerful code generator for messing with numbers. Use it for anything more complicated, though, and it’s a thing of horror.  (I’ve seen Excel files that attempt to be an entire program. Once the logic sprawls across multiple worksheets, it’s impossible to follow, and fixing or modifying it is a nightmare.)

The other guy wants to “allow a coder to work as if everything they need is in front of them on a desk”.  I’m sure you could do some simple programs that way, but you’re not going to be able to make the sort of programs described earlier— an aircraft software suite, or Microsoft Word.  You cannot put all the elements you need on one desktop. Big programs are, as the author notes, several million lines of code.  If they’re well written, that probably means about 40,000 separate functions (a few million lines at a healthy 50 to 100 lines per function).  No one can hold the purposes of those 40,000 functions in their head— it takes a team of dozens of programmers.  Ideally there’s a pretty diagram of the architecture that does fit on a page, but it’s a teaching abstraction, far less useful— and less accurate— than an architect’s plan of a house. (Also the diagram is about two years out of date, because if there’s anything programmers hate more than other people’s programming languages, it’s documentation.)

So, in short, programmers are always building tools to abstract up from mere code, but I expect the most useful ones to be very domain-specific. Also, lots of them will be almost as awful as code, because most programmers are allergic to usability.

Plus, read this. It may not be enlightening but it should be entertaining.


All the fuss about the Dota 2 tournament finally got my curiosity up, and I decided to reinstall it. Steam tells me I’ve played it for 45 hours, pretty much every one of which was full of confusion and dread.

Lina does not need your petty ‘armor’! Ouch (dies)

If you know TF2, you know it takes some time to learn to play the nine classes, and many players never bother with some of them. In Dota 2 and LoL there are over a hundred. They do break down into overall roles (pusher, support, jungler, assassin…), but their abilities vary and each has to be learned separately. Worse yet, you have to learn how to play against each one, and then you have to worry about which ones work well together. Oh well, there’s only ten thousand possible combinations. No wonder there’s enough strategic depth to support professional competition.

So anyway, I tried some Dota 2 and never felt like I was getting it. So I decided to try out League of Legends, not least because my friend Ash works for them.

Lux sux when I play her; she’s devastating on the enemy team

For what it’s worth, I think LoL is a little easier to pick up. You don’t have to worry about denies (killing your own creeps so the enemy can’t get the gold for them), or couriers. Plus it feels like you can use your spells a little more generously, which is more fun. But they’re really very similar games.

(Dota 2 tries to characterize the opposing teams more– they’re the Dire and the Radiant, and the art direction makes it seem like good vs evil. But any hero can play for any team, and none of it leads anywhere, so this effort seems misplaced. LoL just has Blue and Purple.)

The basics of the game are simple enough. Most of the fighting is done by hordes of NPC minions, who advance to the enemy, fight them, and destroy protective turrets. If you destroy the enemy’s farthest building, the Nexus, you win. You play a Champion, who can attack enemy minions and turrets and, more importantly, harass or kill enemy Champions.

You pretty much have to put aside your FPS reflexes. You don’t just whale on minions– you only get gold and XP if you actually kill them (getting the “last hit”). In the early game you’re weak, and it’s best to wait till you can be sure of getting that hit. You use the XP to advance in level, and the gold to buy items to enhance your skills.  You generally reserve your abilities (which have a cooldown and so must be doled out) for enemy Champions.  It takes a delicate balance to wear them down without taking too much damage yourself.  Most champions have an “ult”, a skill with high damage and long cooldown, which you want to save for a killing blow.

If you want to try it, there are some brief tutorials, and then you can try games against bots, at three difficulty levels.  Just dive in; you’ll be matched with people of your level, so no one expects skills you don’t yet have.  In bot games, in fact, people tend to be pretty quiet.  There’s no voice chat, which makes strategy a little harder but does avoid toxicity.

I’ve only played two games against humans, because then you need more skills– e.g. recognizing when enemies are missing, ‘cos then they’re probably hiding and waiting to gank you.  I won one and lost one.  The login server is down right now, or I’d be playing rather than blogging.

You can play any champion in Dota 2, but in LoL you must use a small set of free ones, or unlock them with currency earned in-game or actual cash dollars.  This sounds restrictive but is probably a better introduction, since it focuses your attention on learning a few champs at a time.

So, is it fun?  So far, yes.  I’m intimidated by the learning curve, but the matchmaking system means that (unlike, say, my other fave team game, Gotham City Impostors) you won’t get into a noobs-vs-gurus rout. Like any team game, it’s most fun when you play with friends, so bring a few along.

(Don’t take any of this as a tutorial, though… it’s definitely a good idea to read a few intros and spectate some games.  Advanced guides will be incomprehensible, so alternate reading with playing against the bots to put what you know into practice.)

I ordered the proof copy of In the Land of Babblers today. So it’s on the way!

[cover image: In the Land of Babblers]

Once the book arrives, I’ll read the hell out of it. I always find more errors reading a physical copy than I do reading it in Word. Then I make corrections, and generally order another proof. So it should be ready sometime in September.

Plus there’s a companion volume– all sorts of material on Cuzei, published and not. That’s mostly done, but I may add something else to it, so it may take just a bit longer.

The novelist Charles Cumming laments that modern technology has made the traditional spy novel impossible:

If, for example, an MI6 officer goes to Moscow and tries to pass himself off as an advertising executive, he’d better make sure that his online banking and telephone records look authentic; that his Facebook page and Twitter feeds are up to date; and that colleagues from earlier periods in his phantom career can remember him when they are contacted out of the blue by an FSB analyst who has tracked them down via LinkedIn.

And that’s before you consider the smartphone, which maintains a frightening amount of data about its user, but also makes it hard for the novelist to keep that user isolated and out of reach of help.

I considered the problem myself, for the Incatena. It occurred to me today that the counter to all this is to spam the databases with fake data. Right now this would be tedious and probably not convincing… you can create a fake Facebook account, but you can hardly create two hundred fake friends for it.

But a fake-data industry could. There are fake social media accounts now, of course, but imagine a mature technology. Basically it would create social media AIs which do almost everything humans do.

That seems like a lot of effort to make a few spies happy. But I think almost everyone would see the advantage of having multiple, realistic net avatars. You might want to keep certain activities from your parents, or your boss. Or you just don’t like everyone knowing everything about you. In the Incatena, it’s not only Agents who would sometimes wish to adopt another identity, and it’d be much easier if that identity already had a history, a credit account, and friends.

The corollary would be that the virtual world (the Vee, in the Incatena) would have a population several times that of meatspace– it’s a mixture of multiple avatars, AIs, and spam.  That sounds like a drawback, but I’m not sure it is– even today you don’t necessarily expect your other gamers, debate partners, fellow geeks, clients, and romantic possibilities to intersect.  Plus, whatever oddball game you play, you can find a full server for it!

Could advanced data mining see through the fakery and find the real individuals?  In part, yes, though it’d also face more advanced data fakery. But in part no, because any given avatar is ‘real’ at least part of the time. Besides, if your data mining gets too good, you invite retaliation against your own spies.

To put it another way… I don’t think the future fifty years down the road, much less 500, is the no-privacy panopticon some people fear, or seek. Very few people want that, and there will be a lot of effort to make sure it doesn’t happen.  Even today some governments are doing work offline or in person that would once have been written down or e-mailed, and others are demanding separate rules for their own nationals, whether to keep them from hearing of the existence of non-government points of view, or to keep them from being watched by the NSA. Maybe we’ll go back to the cyberpunk notion that all data is protected by vigilant daemons with beautiful graphics…


I’ve got another lukewarm recommendation for you!  I just finished Steven Pinker’s How the Mind Works. Pinker, like Daniel Dennett, doesn’t lack for ambition. He really wants to tell you how to design a functioning mind, or to be precise, how evolution has put ours together.

[image: a hand apparently holding a brain-patterned Rubik’s cube]

His focus throughout is on evolution, so a basic constraint is that the components of the mind should have increased reproductive success. Not absolutely– we obviously use our brains in many ways that couldn’t be adaptations.  But it’s a good constraint to have, as it keeps him from accepting simplistic ideas that “intelligence is good” or that evolution is aiming at creating humanoids. (There’s a major caveat here, though: adaptation is only one process in evolution, and you have to make a case that it produced any particular feature. More on this later.)

Does he succeed?  In parts, brilliantly.  The chapter on vision is excellent. He explains exactly why vision is such a hard problem, and how the eyes and brain could put together a view of the world. Cognitive research is frustratingly indirect– we can’t really see how the software runs, so to speak. But we can put a whole bunch of clues together: how the eye works, what goes wrong when the brain is damaged, what constraints are suggested by optical illusions and glitches, how people respond to cleverly designed experiments.

As just one example, it seems that people can rotate visual images, as if they have a cheap, slow 3-D modeling program in their heads– and this rotation takes time: certain tasks (like identifying whether two pictures depict the same object) take longer depending on the amount of rotation required. Even stranger, it turns out that people don’t just store one view of an object.  They can store several views, and solve rotation problems by rotating the nearest view. This is fascinating precisely because it’s not a solution that most programmers would think of. It makes sense for brains, which offer huge data storage but limited computational power.
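
Here’s a toy model of that strategy (my own sketch, not anything from the book; I’ve collapsed orientation to a single angle) just to show why the extra stored views pay off:

    #include <math.h>
    #include <stdio.h>

    #define N_VIEWS 3

    /* Stored views of an object, as orientations in degrees.  The "cost"
       of recognizing a target is the leftover rotation from the nearest
       stored view, modeling the finding that response time grows with
       the amount of rotation required. */
    static const double stored_views[N_VIEWS] = { 0.0, 120.0, 240.0 };

    double rotation_needed(double target) {
        double best = 360.0;
        for (int i = 0; i < N_VIEWS; i++) {
            double d = fabs(target - stored_views[i]);
            if (d > 180.0)
                d = 360.0 - d;             /* take the shorter way around */
            if (d < best)
                best = d;
        }
        return best;   /* at most 60 here; up to 180 with a single view */
    }

    int main(void) {
        printf("rotation needed to match 150 degrees: %.0f\n",
               rotation_needed(150.0));    /* prints 30 */
        return 0;
    }

Memory is cheap for a brain and computation is slow, so throwing stored views at the problem cuts the worst-case rotation: precisely the trade a programmer raised on fast CPUs and scarce memory wouldn’t reach for.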

He points out that vision is not only a difficult problem, it’s impossible.  If you see two lines at a diagonal in front of you, there is no way to determine for sure whether they’re really part of a triangle, or parallel lines moving away from you, or a random juxtaposition of two unrelated lines, and so on. The brain solves the impossible problem by making assumptions about the world– e.g. observed patches that move together belong to the same object; surfaces tend to have a uniform color; sudden transitions are probably object boundaries, and so on. It works pretty well out in nature, which is not trying to mislead us, but it’s easy to fool.  (E.g., it sure looks like there’s a hand holding a brain-patterned Rubik’s cube up there, doesn’t it? Surprise, it’s a flat computer screen!)

I also like his chapters on folk logic and emotions, largely because he defends both.  It’s easy to show that people aren’t good at book logic, but that’s in part because logicians insist on arguing in a way that’s far removed from primate life. A classic example involves the following query:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. What is the probability that Linda is a bank-teller? What is the probability that Linda is a bank-teller and is active in the feminist movement?

People often estimate that it’s more likely that Linda is a feminist bank teller than that she’s simply a bank teller. This is wrong, by traditional logic: A ∧ B cannot be more probable than B. But all that really tells us is that our minds resist the nature of Boolean logic, which considers only the form of arguments, not their content. We love content. People’s judgments make narrative sense.  From the description of Linda, it’s clear that she’s a feminist, so a description that incorporates her feminism is more satisfying. In normal life it’s anomalous to include a bunch of information that’s irrelevant to your question.
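
To spell out the arithmetic (with numbers I’ve invented purely for illustration): suppose P(teller) = 0.05, and the description makes feminism near-certain, say P(feminist | teller) = 0.9. Then

    P(teller ∧ feminist) = P(teller) × P(feminist | teller) = 0.05 × 0.9 = 0.045

which is still less than P(teller) = 0.05. Adding a condition can only shave probability away, never add to it, no matter how plausible the condition sounds.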

As for emotions, it’s widely assumed that they’re atavistic, unnecessary, and positively dangerous– an AI is supposed to be emotionless, like Data. Pinker makes a good design case for emotions. In brief, a cognitive being needs something to make it care about doing A rather than B… or doing anything at all. All the better if that something helps it avoid dangers, reproduce itself, form alliances, and detect cheaters.

So, why do I have mixed feelings about the book? A minor problem is breeziness– for instance, Pinker addresses George Lakoff’s category theory in one paragraph, and pretty spectacularly misses Lakoff’s argument. He talks about categories being “idealized”, as if Lakoff had overlooked this little point, rather than discussing it extensively. And he argues that the concept of “mother” is well defined in biology, which completely misses the distinction between ordinary language and technical terms. He similarly takes sides in the debate between Napoleon Chagnon and Marvin Harris, with a mere assertion that Chagnon is right. He rarely pauses to acknowledge that any of his examples are controversial or could be interpreted a different way.

More seriously, he doesn’t give a principled way to tell evolutionary from cultural explanations. This becomes a big problem in his chapter on sex, where he goes all in on evolutionary psychology. EP is fascinating stuff, no doubt about it, and I think a lot of it is true about animals in general. But which parts apply to humans is a much more contentious question.  (For a primer on problems with EP, see Amanda Schaffer’s article here, or P.Z. Myers’s takedown, or his direct attack on Pinker, or Kate Clancy’s methodological critique.) Our nearest relatives are all over the map, sexually: gorillas have harems; chimpanzees have male dominance hierarchies with huge inter-chimp competition for mates (and multiple strategies); bonobos are notorious for female dominance and casual sex.  With such a menu of models, it’s all too easy to project sexist fantasies into the past. Plus, we know far less about our ancestors than we’d like, they lived in incredibly diverse environments, and evolution didn’t stop in 10,000 BC.

Plus there’s immense variety in human societies, which Pinker tends to paper over.  He often mentions a few favorite low-tech societies, but all too often he generalizes from 20C Americans.  E.g. he mentions perky breasts as a signal of female beauty… um, has he ever looked at an Asian girl, or a flapper, or a medieval painting?  Relatedly, Pinker convinces himself that men should be most attracted to a girl who is post-puberty but has never been pregnant, because she’s able to bear more children than an older woman. Kate Clancy’s takedown of hebephilia is relevant here: she points out that girls who bear children too early are likely to have fewer children overall, and that male chimps actually prefer females who have borne a child.

Finally, the last chapter, on art and religion, is just terrible. He actually begins the discussion of art with an attack on the supposed elitism of modern art. It’s like he’s totally forgotten that he’s supposed to be writing about evolution; what the hell does the supposed gulf between “Andrew Lloyd Webber [and] Mozart”, two Western artists separated by an evolutionary eyeblink, have to do with art in general? Couldn’t he at least have told us some anecdotes about Yanomamo art?

As for religion, he describes it as a “desperate measure… inventing ghosts and bribing them for good weather”. Srsly? Ironically, earlier in the book, he emphasizes several times that the facts about the ancestral environment are not normative, that we can criticize them ethically.  Then when it comes to religion he forgets that there’s such a thing as ethics; he just wants to make fun of the savages for their “inventions”. (I could go on all day about this blindness, but as just one point, the vast majority of believers, in any religion, invent nothing.  They accept what they’re told about the world, a strategy that is not exactly foreign to Pinker– why else does he supply a bibliography?)

On the whole, the book is probably a warning to be careful when you’re attempting a synthesis that wanders far outside your field. It might be best to skip the last two chapters, or just resign yourself to a lot of eye-rolling.

I picked this up during the Steam sale, and it’s a charmer, as well as a worthy addition to the list of great games that have popped out of the indie bubble.  It’s kind of like the original Sam & Max mated with The Naked Gun.

Polygons working together

Let’s start with the look, which is highly stylized– the layouts are all classy retro with simple textures, and the characters look like toddlers’ toys.  It works– I don’t think realistic human figures would have improved the game– and it probably saved a load of development time.  I should emphasize though that it’s a fully 3-D game, not some kind of point-and-click thing.

I also feel like I can’t say too much about it– it’s like The Stanley Parable: you should go in cold.  The plot is simple enough: you’re a private eye, or maybe you work for a cop, and you go on missions. These are not particularly challenging as missions… which is fine by me; I like never having to seek out a walkthrough.  You can save at any time, but I’ll tell you right now that it never proved necessary to reload.

The plot and the puzzles are really secondary; the real game is poking around to see what you can do.  Wait, did I mention yet that this is a comedy?  A lot of the humor comes when you put off the quest directions and wander around interacting with people and things.

Not all of the jokes are boffo, but it’s definitely in the Airplane!/Naked Gun mold, where if one joke doesn’t grab you, it’s OK ‘cos another will be by in a few seconds.  There’s a lot of fourth-wall breaking, a lot of surrealism, and a bunch of electronics jokes.  (Though there are robot characters, the idea I think is that the characters kind of know they’re in a computer simulation.)

The one downside is that it’s very short.  Though you’ll probably want to play it again to find all the stuff you didn’t notice the first time.


In reference to your recent post about microtransactions, I was wondering what’s your take on the supposed Indie Game Bubble that’s said to be about to bust. Are masses of people paying $1 for bundles of five games the reality of microtransactions in action, and, if so, is it heading for a fall?

—Tricky

I assume you’re referring to this article by developer Jeff Vogel.  Sample quotes:

Then even more developers, sincere and hard-working, looked at this frenzy and said, “I’m sick of working for [insert huge corporation name here]. I would prefer to do what I want and also get rich.” And they quit their jobs and joined the gold rush. Many of them. Many, many. Too many.

With so much product, supply and demand kicks in. Indies now do a huge chunk (if not most) of their business through sales and bundles, elbowing each other out of the way for the chance to sell their game for a dollar or less.

Now, I’m not in the business.  If Vogel’s message is “Don’t expect to make a fortune making indie games,” I’m sure he’s right, and anyway, didn’t we know that?  Most new businesses fail, and 90% of everything is crap.

Still, his article reminded me of the old Yogi Berra quote: “Nobody goes there anymore— it’s too crowded.”

As a gamer, I think the current market is fantastic.  Before Steam, you may recall, you had to go to your local Best Buy or GameStop or whatever, and you had your choice of the current AAA titles.  Now you have publishers’ entire catalogs available, plus a slew of midlist titles, plus a pulsating scrum of tiny indie games.  And if you’re willing to wait for the next Steam sale, you can get just about any of them at a bargain.

Plus, the barrier to entry has plummeted.  You can make a mighty fine game with Unity, and an astonishing game with Unreal Engine 4.  Which means that even a one- or two-man team can produce something graphics snobs like me will buy.

It’s also good news for diversity— new kinds of games, a more varied palette of developers.

Again, 90% of indie games will be crap.  But there will be treasures, too, like Gunpoint and SpyParty.  Whether people listen to Vogel or not, whether or not there’s a bust, some people will continue to make small, neat games and some of them will even make money.

Most creative endeavors have this glut of creators— look at books or music.  A huge percentage of my friends and family have written a book, been in a band, drawn a comic, or made a game.  Being able to quit your day job is still going to be rare.

As for microtransactions, I dunno. The Wikipedia article on the Humble Bundles is interesting reading; many of these sales have netted over $1 million. The maker of Dustforce reported that before their game was included in a bundle, Steam sales were about 10 a day; during the bundle it reached 50,000 a day, and afterward it remained at a higher level— 50 a day.  Seems like a win.

(My understanding is that the mobile game market is pretty much ruined for small players; I’m only talking about PC games here.  I think Steam has thrown its enormous weight behind the idea of making it easy to make, mod, and buy games, and that inhibits predatory behavior.)

The sheer number of games does raise a question: how do you know which ones are any good?  I rely a lot on a few game sites, but I’m sure I miss a lot of games.  Steam has reviews and recommendations, but neither has been very helpful. If you’re looking for an idea for a killer social media site, I’d suggest creating guides for navigating the Long Tail.

