August 2014


I read an article on New Scientist [link requires free registration] a couple of days ago about programming languages. The writer thinks most of them were poorly designed, that is, hard to learn, hard to use, and difficult to debug. He said that there were about 15 to 50 errors per thousand lines of code, and huge systems like Windows accumulated masses of them. “As more and more of the world is digitised”, the problem will get worse, with the potential for fatal accidents in areas such as aviation, medicine, or traffic. One solution is “user-friendly languages” which let the programmer see what they do “in real time as they tinker with the code”. Another is to design programs that write themselves, based on Google searches.

 

So I’ve got a couple of questions for the guru.

 

One, what is your opinion of the article, the problem, and the suggested solutions, as an expert?

And for my second question, what qualities make a programming language good or bad? Can you rank the languages you’ve used in order of usefulness? Or is that a pointless endeavor? Are there any which are stellar, or any which you wouldn’t advise anyone to mess with?

—Mornche

The article is a bit misleading, I think. Its author, Brooks, gives some examples of geeks trashing computer languages, but it should be understood that the geek statement “X IS TOTAL SHIT IT SHOULD DIE IN A FIRE” just means “X was about 95% what I like, but the rest disappointed me.”

Like any geek, I love the opportunity to trot out my opinions on languages. When I was a lad, Basic was supposed to be easy, so everyone learned it.  It’s total shit and should die in a fire.  That is, it was disappointing. The early versions gave people some very bad habits, such as not giving variables meaningful names, using numeric labels, and GOTOing all over the place— most of these things are fixed now.  Fortran is honestly little better; Cobol adds tedium for no good reason.  A lot of modern languages— C, C++, C#, Java, JavaScript— are surprisingly similar, and inherit their basic conventions from Algol.  I liked Pascal a lot (haven’t seen a compiler for it in twenty years), and I like C# almost as much. I haven’t used Ruby or Python, but looking briefly at code snippets, they look a lot like (cleaned-up modern) Basic. An experienced programmer can always learn a new language, and how crappy their programs are depends on them, not the language.

There are, of course, lots of little stupidities that have caused me a lot of problems. To take one at random, C uses = and == with different meanings, and it perversely reserves == for simple equality (plain = is assignment). Pascal got this right: = is equality, := is assignment. There are also amazingly clever bits in languages that I’d hate to do without (data hiding, for instance).
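Here’s a made-up C snippet (not from any real program) showing why that choice bites people:

    #include <stdio.h>

    int main(void) {
        int x = 0;

        /* The programmer meant "if x equals 5".  What this actually does is
           assign 5 to x and then test the result (5, which counts as true),
           so the branch always runs.  The compiler accepts it, at most with
           a warning. */
        if (x = 5) {
            printf("oops: x is now %d\n", x);
        }

        /* The test that was actually intended: */
        if (x == 5) {
            printf("x really is 5\n");
        }
        return 0;
    }

In Pascal the same slip is simply a syntax error, since assignment is spelled := and can’t appear where a boolean is expected.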

One thing the article misses is that what’s really a pain to learn is not the mechanics of the language, but the libraries and UI frameworks.  The C family and Java are very similar, but the libraries aren’t, and that’s what will take you months to pick up.  (Unless you use .NET, which is designed to use the same library across multiple languages, so the languages themselves become a set of syntactic skins you can pick by personal preference.)

Programmers have long realized how tedious and error-prone their work is, and there have been many attempts to help, including:

  • Smarter development environments, like Visual Studio. These take care of indenting for you, check for mismatched braces and the like, and highlight keywords. You can rename a variable program-wide, break out a section of code as a separate routine, or insert commonly used code fragments, with a single command. This not only saves time, but keeps you from making common errors.
  • New paradigms— as when we switched from procedural to object-oriented programming about twenty years ago, or to Agile about ten years ago. When you’re in your 20s you get really excited about these revolutions. Crusty middle-aged people like me are a little more jaded— these methodological changes never quite live up to the hype, especially as they don’t address the management problems identified fifty years ago by Frederick Brooks: too much pressure to make impossible deadlines with inadequate tools.  (Which isn’t to say change is bad.  Object-oriented programming was an improvement, partly because it allowed much better code re-use, and partly because if it’s done right, routines are much shorter, and shorter code is more likely to work. But good lord, I’ve seen some horrifying OO code.)
  • Higher levels of abstraction. This is largely what the New Scientist article is talking about.  Earlier forms of the idea include specialized string processing languages (Snobol), simulation languages (Simula), and database query languages (SQL). When I was doing insurance rating, I created an insurance rating language. Someone always has a vision of programming by moving colored blocks around or something.

A lot of programming is incredibly repetitive; all programmers recognize this. The bad programmer addresses it by copying and pasting code, so his programs consist of endless swaths of similar-but-confusingly-different routines. The good programmer addresses it by abstraction: ruthlessly isolating the common elements, handling common problems the same way (ideally with the same code), making UI elements consistent, moving as much detailed behavior as possible out of the code itself into high-level specifications. All the libraries I mentioned are just earlier programmers’ prepackaged solutions to common problems.
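A toy example of the difference, with invented function names (nothing from a real system):

    #include <stdio.h>

    /* Copy-and-paste style: the same check written out twice, each copy
       slightly (and confusingly) different. */
    void report_name(const char *name) {
        if (name == NULL || name[0] == '\0')
            printf("name: <missing>\n");
        else
            printf("name: %s\n", name);
    }

    void report_city(const char *city) {
        if (!city || city[0] == 0)        /* same idea, written differently */
            printf("city: <missing>\n");
        else
            printf("city: %s\n", city);
    }

    /* Abstracted style: the common element isolated once and reused. */
    void report_field(const char *label, const char *value) {
        if (value == NULL || value[0] == '\0')
            printf("%s: <missing>\n", label);
        else
            printf("%s: %s\n", label, value);
    }

    int main(void) {
        report_field("name", "Mornche");
        report_field("city", NULL);
        return 0;
    }

Multiply that by a few hundred fields and a few dozen programs and you can see why the second approach wins.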

Often the idea is to come up with something so powerful and easy to use that it can be handed over to the business analyst (that is, the non-programmer who’s telling the programmer how the real-world thing works).  This usually doesn’t work, because

  • the programmer’s idea of “easy” is not that of ordinary people, so the business analyst can’t really use the tools.
  • most people don’t have the programmer’s most important learned skill: understanding that computers have to be told everything. Ordinary humans think contextually: you remember special case Y when Y comes up.  Programs can’t work like that– someone has to remember Y, and code for it, long before Y happens (see the toy example below).
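To make that concrete, here’s an invented example (the function and the surcharge rule are made up, not from any real system): the rare case has to be written into the code long before anyone hits it.

    #include <stdio.h>

    /* Shipping cost with one rare special case.  A human clerk would just
       remember the remote-region rule on the day such an order finally
       showed up; the program has to have it spelled out in advance. */
    double shipping_cost(double weight_kg, int is_remote_region) {
        double cost = 5.0 + 2.0 * weight_kg;   /* the everyday case */
        if (is_remote_region)                  /* the once-in-a-blue-moon case */
            cost += 25.0;
        return cost;
    }

    int main(void) {
        printf("normal order: %.2f\n", shipping_cost(3.0, 0));
        printf("remote order: %.2f\n", shipping_cost(3.0, 1));
        return 0;
    }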

The reason that programming takes so long, and is so error-prone, is that no one can work everything out all at once, in advance. The business analyst suddenly remembers something that only happens every two years on a full moon, the salesman rushes in with a new must-have feature, the other guy’s system doesn’t work like his API says, field XYZ has to work subtly differently from field WXZ, we suddenly discover that what you just wrote to the database isn’t in the database, no one ever ran the program with real data.  Abstraction in itself will not solve these problems, and often it introduces new problems of its own— e.g. the standardized solution provided by your abstraction vendor doesn’t quite work, so you need a way to nudge it in a different direction…

Again, I don’t mean to be too cynical. When they’re done well, code generators are things of beauty— and they also don’t look much like code generators, because they’re designed for the people who want to solve a particular problem, not for coders. An example is the lovely map editing tool Valve created for Portal 2.  It allows gamers who know nothing about code or 3-D modeling to create complicated custom maps for the game. Many games have modding tools, but few are so beautifully done and so genuinely easy.

But I’m skeptical that a general-purpose code generation tool is possible.  One guy wants something Excel-like… well, he’s right that Excel is a very nice and very powerful code generator for messing with numbers. If you try using it for anything more complicated, it’s a thing of horror.  (I’ve seen Excel files that attempt to be an entire program. Once the logic is spread over multiple worksheets it’s no longer possible to follow, and fixing or modifying it is hopeless.)

The other guy wants to “allow a coder to work as if everything they need is in front of them on a desk”.  I’m sure you could do some simple programs that way, but you’re not going to be able to make the sort of programs described earlier— an aircraft software suite, or Microsoft Word.  You cannot put all the elements you need on one desktop. Big programs are, as the author notes, several million lines of code.  If it’s well written, at a rough 50 to 100 lines per function, that means about 40,000 separate functions.  No one person can even keep track of the purposes of those 40,000 functions— it takes a team of dozens of programmers.  Ideally there’s a pretty diagram of the architecture that does fit on a page, but it’s a teaching abstraction, far less useful— and less accurate— than an architect’s plan of a house. (Also the diagram is about two years out of date, because if there’s anything programmers hate more than other people’s programming languages, it’s documentation.)

So, in short, programmers are always building tools to abstract up from mere code, but I expect the most useful ones to be very domain-specific. Also, lots of them will be almost as awful as code, because most programmers are allergic to usability.

Plus, read this. It may not be enlightening but it should be entertaining.

 

All the fuss about the Dota 2 tournament finally got my curiosity up, and I decided to reinstall it. Steam tells me I’ve played it for 45 hours, pretty much every one of which was full of confusion and dread.

Lina does not need your petty ‘armor’! Ouch (dies)

If you know TF2, you know it takes some time to learn to play the nine classes, and many players never bother with some of them. In Dota 2 and LoL there are over a hundred heroes. They do break down into overall roles (pusher, support, jungler, assassin…), but their abilities vary and each has to be learned separately. Worse yet, you have to learn how to play against each one, and then you have to worry about which ones combine well. Oh well, there are only ten thousand possible combinations. No wonder there’s enough strategic depth to support professional competition.

So anyway, I tried some Dota 2 and never felt like I was getting it. So I decided to try out League of Legends, not least because my friend Ash works for them.

Lux sux when I play her; she’s devastating on the enemy team

For what it’s worth, I think LoL is a little easier to pick up. You don’t have to worry about denies (killing your own creeps so the enemy can’t), or couriers (the critters that ferry items out to you from the shop). Plus it feels like you can use your spells a little more generously, which is more fun. But they’re really very similar games.

(Dota 2 tries to characterize the opposing teams more– they’re the Dire and the Radiant, and the art direction makes it seem like good vs evil. But any hero can play for any team, and none of it leads anywhere, so this effort seems misplaced. LoL just has Blue and Purple.)

The basics of the game are simple enough. Most of the fighting is done by hordes of NPC minions, who advance to the enemy, fight them, and destroy protective turrets. If you destroy the enemy’s farthest building, the Nexus, you win. You play a Champion, who can attack enemy minions and turrets and, more importantly, harass or kill enemy Champions.

You pretty much have to put aside your FPS reflexes. You don’t just whale on minions– you get XP for being nearby when they die, but gold only if you land the killing blow (the “last hit”). In the early game you’re weak, and it’s best to wait till you can be sure of getting that hit. You use the XP to advance in level, and the gold to buy items to enhance your skills.  You generally reserve your abilities (which have a cooldown and so must be doled out) for enemy Champions.  It takes a delicate balance to wear them down without taking too much damage yourself.  Most champions have an “ult”, a skill with high damage and long cooldown, which you want to save for a killing blow.

If you want to try it, there are some brief tutorials, and then you can try games against bots, at three difficulty levels.  Just dive in; you’ll be matched with people of your level, so people rarely expect skills you don’t yet have.  In bot games, in fact, people tend to be pretty quiet.  There’s no voice chat, which makes strategy a little harder but does avoid toxicity.

I’ve only played two games against humans, because then you need more skills– e.g. recognizing when enemies are missing, ‘cos then they’re probably hiding and waiting to gank you.  I won one and lost one.  The login server is down right now, or I’d be playing rather than blogging.

You can play any hero in Dota 2, but in LoL you must use a small rotating set of free champions, or unlock others with points earned by playing or with actual cash dollars.  This sounds restrictive but is probably a better introduction, since it focuses your attention on learning a few champs at a time.

So, is it fun?  So far, yes.  I’m intimidated by the learning curve, but the matchmaking system means that (unlike, say, my other fave team game, Gotham City Impostors) you won’t get into a noobs-vs-gurus rout. Like any team game, it’s most fun when you play with friends, so bring a few along.

(Don’t take any of this as a tutorial, though… it’s definitely a good idea to read a few intros and spectate some games.  Advanced guides will be incomprehensible, so alternate reading with playing against the bots to put what you know into practice.)

I ordered the proof copy of In the Land of Babblers today. So it’s on the way!

[Cover image: In the Land of Babblers, front cover]

Once the book arrives, I’ll read the hell out of it. I always find more errors reading a physical copy than I do reading it in Word. Then I make corrections, and generally order another proof. So it should be ready sometime in September.

Plus there’s a companion volume– all sorts of material on Cuzei, published and not. That’s mostly done, but I may add something else to it, so it may take just a bit longer.