Twitter redesigned by twits

Have you seen the new Twitter design yet?  If so, God help you.  Rarely does a site make itself completely unusable like this.

Here’s what the site looks like for me now:


Note, this is a 1920×1080 screen, and I can see two tweets. You can change the font size, but it barely helps. Why does over half the screen width have to be given over to blank space, their stupid menu, and their stupid trends?

The real killer, though, is that they’ve removed the option to list tweets in order received. You can click the little stars to the right of “Home” to get an option to display them that way… only it will reset it every time you visit.  Twitter, why is this so hard to understand?  I want to see all the tweets since I was last on. That’s it, that’s what I look at Twitter for.  When you show them to me out of order, then I can’t do this.  I get to a tweet I’ve read and it’s anyone’s guess if I’ve got to the old tweets or not.

And then, on the Mac version, there’s some bug where it’s extremely slow to type a tweet.  Again, jeez, the whole point of Twitter is writing tweets; how did they manage to make that harder?

I’d honestly give up on it, but an alert reader pointed me to Tweetdeck, which provides an alternative and more workable view.


Now they’re only using 1/4 of the screen.  But oh golly, at least I can see four tweets.  There’s no way to make the columns wider, but it’s– barely– usable.  And it doesn’t rearrange the timeline.

(But good lord is it ugly.  And it still requires excessive scrolling.)

(OK, one plus side: I just realized I could create a list and display it in another column. Though that’s redundant with the first column, it makes it more likely that I’ll see the people in that list.)

I know people always hate redesigns, but I don’t think that’s what’s going on.  I can’t imagine a world where, in two months, I’m happy with being able to see less, type with more difficulty, and not be able to see the latest tweets.  They’re game-killers.  At least there’s the slightly less stupid Tweetdeck for now.



Waterfall projects

There’s an interesting discussion here on waterfall project management– answering the question “Why did it fall out of style?”


Good thing I didn’t have to find a picture of an agile

A couple of people make the point that “waterfall” was invented as a straw man by Winston Royce. He claimed that the main problem with the supposed method was that testing only occurred at the end of the process, so that redesign was prohibitively expensive.  This is absurd; it’s on the level of saying that Agile projects never think in advance about what they’re going to deliver.

The first answer on that page is basically party line nonsense. Dude complains that “5 years, tens of millions of dollars, hundreds of people, not a single user, the code never ran once”, and somehow we don’t do that anymore.

The best response to this is to go re-read Frederick Brooks’s The Mythical Man-Month, published back in 1975, a set of ruminations on why big projects go wrong which everyone has read and no one ever takes to heart. Note that Brooks’s book is an autopsy of several huge 1960s IBM projects which finished, but didn’t go well. Nowhere does he say the problem was that testing was put off till the end; Brooks advocated a thoroughness of testing and scaffolding that few developers could match.

The basic idea of waterfall programming is that you write a requirements document, then a full system specification, then an internal architecture, then code. I don’t think anyone ever has believed that these processes are hermetically sealed.

And when it’s done well, it works! We used to actually do all this when I worked at SPSS. There was a design department responsible for the specifications. They worked out all the functionality, every statistical function, every dialog box, and wrote it all down. Because they were a dedicated department, they could be experts in the problem domain and in user interface, and they could work pretty fast, and get going before the developers were fully staffed. The design document wasn’t just for the coders; it was for sales, management, documentation, and QA. All these people knew what was in the system, and could get going on their jobs, before the program was written. There were big meetings to review the design document, so everyone had a chance to offer input (and no one could say they were surprised at the design).  More on the SPSS process here.

So, waterfall works just fine… if you do it. The problems people perceived with the process are not due to something wrong with the model of design/code/test.  They’re generally due to not following the waterfall method, i.e.

  • not actually doing an external design
  • not actually doing an internal design
  • not even knowing the features when coding starts
  • not actually planning for testing
  • coders generally being pretty bad UI designers (see: Alan Cooper)

Now, maybe most workplaces and devs are hopeless, and can’t do the waterfall process, so they should do something else.  But it’s not really the case that the process “didn’t work”.  It works if you let it work.

As a cranky middle-aged old guy, I’d also answer the original question (“why did waterfall go out of style”) with “Fashion.”  Every ten years, someone comes around with a Totally New Programming Style. In the ’90s it was object-oriented programming; in the ’00s it was Agile. If you’re in your 20s or 30s, you can get very excited over the latest revolution and you’re eager to make it work.  If you’re in your 40s or 50s, you’ve seen several of these revolutions, and they never live up to their original hype.

This isn’t to say that there aren’t advances! I still think OOP was a pretty good advance; it made people think about design and re-use a lot more; plus, if done well, it had the huge advantage that any given routine was small, and thus more likely to be correct. (On the other hand, done badly, it produces huge swaths of nearly-useless layers and increases complexity.)

I haven’t actually worked on an Agile project (I write books now), so I can’t say if it works out well in practice.  From what I’ve heard, it has at least two fantastic ideas:

  • Keep the code buildable.  As developers are constantly breaking things, this seems like a great discipline to me.
  • Keep close (daily) contact with everyone. I’ve had too much experience with developers who were assigned a task and it was only discovered 2 or 6 months later that they couldn’t get it done, so the early warning also sounds great.

But the insistence on short sprints of work sparks my skepticism. There really are things that take a year to program. Yes, you can and should break them down into pieces; but those pieces will probably not be something deliverable.

I’ll give an example: one company I worked for did insurance rating software. This involved an incredible number of tiny variations. You’d have general rules for a line of insurance; then state variations; then variations over time; then variations between companies.  Our original method was to write a basic set of code, then write snippets of code that changed for each state/version/company. Writing all this took a lot of time.

Eventually I decided that what we needed was a specialized rating language– basically something that business analysts could handle directly, rather than coders.  It worked very well; 15 years after I left, the company was still using a version of my system.
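Stripped of everything that made it take real time, the layering idea behind such a system (a general rule, plus increasingly specific state and company overrides) might be sketched like this; the names, rates, and structure here are my invention, not the actual system:

```python
# Hypothetical sketch of layered rating rules: a base rule set,
# with state/company overrides consulted from most to least specific.

BASE_RULES = {
    "base_rate": lambda p: p["coverage"] * 0.002,
}

OVERRIDES = {
    # (state, company) -> partial rule set; None acts as a wildcard
    ("TX", None): {"base_rate": lambda p: p["coverage"] * 0.0025},
    ("TX", "Acme"): {"base_rate": lambda p: p["coverage"] * 0.0025 + 50},
}

def lookup(rule, state, company):
    """Find the most specific version of a rule that applies."""
    for key in [(state, company), (state, None), (None, None)]:
        rules = OVERRIDES.get(key, BASE_RULES if key == (None, None) else {})
        if rule in rules:
            return rules[rule]
    raise KeyError(rule)

policy = {"coverage": 100_000}
print(lookup("base_rate", "TX", "Acme")(policy))   # 300.0
print(lookup("base_rate", "CA", "Acme")(policy))   # 200.0 (falls back to base)
```

The point of a DSL like this is that the business analyst only writes the override that differs, never the whole rule again.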

The point, though: writing the language interpreter, and the proof of concept rules for one line of insurance, took time. Half a year, at least.  There was no way to divide it into two-week sprints with customer deliverables, and daily meetings would have just got in my way.

I can understand that doing things this way is a risk– you have to trust the programmer. On the other hand, I’d also point out that I took that project on after about six years at the company, and it was my third architecture for the system. (Brooks rightly points out how second projects go astray, as programmers do everything they ever dreamed of doing, and the project gets out of hand.)



Why NLP is so hard

Recently I wrote about a commercial NLP project that bit off more than it could chew. In response an alert reader sent me a fascinating paper by Ami Kronfeld, “Why You Still Can’t Talk to Your Computer”.  It’s unfortunately not online, and Kronfeld is sadly no longer with us, but it was presented publicly at the International Computer Science Institute, so I figure it’s fair game.

Kronfeld worked for NLI (Natural Language Incorporated), which produced a natlang interface to relational databases. The project was eventually made part of Microsoft SQL Server (apparently under the name English Query), but it was allowed to die away.

It worked pretty well— Kronfeld gives the sample exchange:

Does every department head in Center number 1135 have an office in Berkeley?
[Answer: “No. All heads that work for center number 1135 are not located in an office in Berkeley”]

Who isn’t?
[Answer: Paul Rochester is the head not located in an office in Berkeley that works in center number 1135]

He points out that language and relational databases share an abstract structure: they have things (nouns, entities) which have properties (adjectives, values) and relate to one another (verbs, cross-references). This sort of matchup doesn’t always occur nicely.  (E.g. your word processor understands characters and paragraphs, but it hasn’t the slightest idea what any of your words mean.)

But the interesting bit is Kronfeld’s analysis of why NLI failed. One aspect was amusing, but also insightful: we humans don’t have a known register for talking to computers. For instance, one executive sat down at the NLI interface and typed:

How can we make more money?

The IT guys reading this are groaning, but the joke’s on us. If you advertise that a program can understand English, why be surprised that people expect that it can understand English?

Curiously, people attempting to be “computery” were no easier to understand:

Select rows where age is less than 30 but experience is more than 5

This seems to be an attempt to create an on-the-fly pidgin between SQL and English, and of course the NLI program could make nothing of it.

Of course there were thousands of questions that could be properly interpreted. But the pattern was not obvious. E.g. an agricultural database had a table of countries and a table of crops.  The syntactic template S grow O could be mapped to this— look for S in the country table, O in the crops— allowing questions like these to be answered:

  • Does Italy grow rice?
  • What crops does each country grow?
  • Is rice grown by Japan?
  • Which countries grow rice?

But then this simple question doesn’t work:

  • Does rice grow in India?

Before I say why, take a moment to guess.  We have no trouble with this question, so why does the interface?

The answer: it’s a different syntactic template.  S grows in O is actually the reverse of our earlier template— before, the country was growing things; here the rice is growing, all by itself, and a location is given in a prepositional phrase. As I said before, language is fractally complicated: you handle the most common cases, and what remains is more complicated than all the rules you’ve found so far.

Now, you can of course add a new rule to handle this case.  And then another new rule, for the next case that doesn’t fit.  And then another.  Kronfeld tells us that there were 700 separate rules that mapped between English and the database structure.  And that’s one database.
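To make the rule explosion concrete, here’s a toy sketch of my own (not NLI’s actual machinery): each rule pairs a surface pattern with the database roles its slots fill, and “Does rice grow in India?” needs its own rule because the crop and the country swap syntactic positions.

```python
import re

# Hypothetical template-to-role mapping rules for the crops database.
RULES = [
    # "Does Italy grow rice?" -> the subject is the country
    (re.compile(r"does (\w+) grow (\w+)\?", re.I),
     lambda m: ("country", m.group(1), "crop", m.group(2))),
    # "Is rice grown by Japan?" -> passive voice, roles reversed
    (re.compile(r"is (\w+) grown by (\w+)\?", re.I),
     lambda m: ("country", m.group(2), "crop", m.group(1))),
    # "Does rice grow in India?" -> intransitive verb + location PP
    (re.compile(r"does (\w+) grow in (\w+)\?", re.I),
     lambda m: ("country", m.group(2), "crop", m.group(1))),
]

def parse(question):
    for pattern, roles in RULES:
        m = pattern.match(question)
        if m:
            return roles(m)
    return None  # the 701st construction we haven't written a rule for

print(parse("Does Italy grow rice?"))     # ('country', 'Italy', 'crop', 'rice')
print(parse("Does rice grow in India?"))  # ('country', 'India', 'crop', 'rice')
```

Each new construction a user tries needs a new entry in that list, which is how you end up at 700 rules for one database.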

So, the surprising bit from Kronfeld’s paper is not “natural language is hard”, but that the difficulty lives in a very particular area: specifying the semantics of the relational database. As he puts it:

I realized from the very start that what was required for this application to work was nothing short of the creation of a new profession: the profession of connecting natural language systems to relational databases.

So, that’s a way forward if you insist on having a natlang interface for your database!  NLP isn’t just a black box you can tack on to your program. That is, parsing the English query, which is something you could reasonably assign to third-party software, is only part of the job.  The rest is a detailed matchup between the syntactic/semantic structures found, and your particular database, and that’s going to be a lot more work than it sounds like.



No, you don’t need a natural language interface

At Mefi, someone linked to this very interesting account of failure at a programming startup. There’s a lot to say about Lawrence’s story, starting with the fact that, now 40 years on, dev shops still don’t understand The Mythical Man-Month. Also, that Agile does not give you the magical ability to skip the step where you work out the architecture and internal APIs of your app.

But I want to focus on this: “The idea is brilliant: Natural Language Processing as an interface to interact with big Customer Relationship Management tools such as SAP.”

I’m’a let you finish, but no, it’s not brilliant.


Instead of the user clicking a button to edit old Contacts, they would type, “I want to edit the info about my contact Jenny Hei,” and we would then find the info about Jenny Hei and offer it to the user so they could edit it. That was the new plan.

This was a brilliant idea. Salespeople hate software. They are good dealing with people, but they hate buttons and forms and all the other junk that is part of dealing with software.

People always think that the ideal program would understand English, so that all you’d have to do is talk to it about your problem, and it would go and do it.

At least one early example is Asimov’s “Second Foundation”, from 1949. A teenage girl, Arkady, is using a word processor to type a paper (which happens to give us the exposition for the story).  But she’s interrupted by a real world conversation, the conversation gets recorded in the paper, and she apparently has no way to edit or delete the extra comments– the paper is ruined and she has to start over. What a horrible UI!

Let’s go back to Lawrence’s example.  Posit, for a moment, that the UI works as intended: you can type

I want to edit the info about my contact Jenny Hei.

and the app gets ready to do just that. Awesome, right?

Yes, the first time. When the alternative is looking over an unfamiliar program to find the Contacts button… let’s not even talk about having to watch Youtube tutorials to learn how to use the program, as I’ve had to do with Blender… then just speaking an English sentence sounds very attractive.

The second time, that’s OK too.  The tenth time, especially ten times in a row… you’re going to wonder if there’s a better way.  The thousandth time, you’re going to curse the programmer and his offspring to the third generation.

If you’re a programmer… is this how you want to program?  Do you normally write in COBOL?


Admit it, you thought it was pretty neat when C let you say

a++
rather than

a = a + 1

If you actually had to use Lawrence’s interface, you’d breathe an enormous sigh of relief if someone installed a mod that let you type

edit contact Jenny Hei
And you’d be even happier if the mod allowed you to hit the Contacts button, type J in the search box which is automatically enabled, and hit enter.

It’s not that interfaces can’t get too arcane!  You can get a lot done if you know EMACS really well… but for most people it’s about as easy to master as quantum mechanics.  A WYSIWYG word processor is much nicer.  But notice that we don’t edit by saying

Move the insertion point to the second paragraph, after the word “interface”.

Select the next three words and delete them.

Now type in the corrected phrase.
Who has time to type all that?  Or say it, for that matter?

Would you want to drive your car that way?  No, for the same reason you wouldn’t want to drive it with the WASD keys.  Spoken language is just not very precise.  (Do you think you could direct a robot on how to change lanes?  First, how do you communicate exactly how far to turn the wheel?  Second, you probably don’t know how yourself— only your cerebellum knows.)

And this is all assuming that you can program a computer to understand spoken commands. Lawrence’s team evidently didn’t realize that what they were asked to do was implement an AI, at a level that has never been done.

What you probably can do is a more forgiving COBOL. That is, you create a toy world, not unlike Terry Winograd’s SHRDLU, and work out an English-like code to manage it which is rigid in its own way, but happens to recognize a lot of keywords in different orders.  For instance, maybe it can handle all of

I want to edit the info about my contact Jenny Hei

Edit the record under Contacts for Jenny Hei

Search for Jenny Hei in Contacts.

Find Jenny Hei using the Contacts file and let me edit.
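A matcher of that forgiving sort might be sketched like so; everything here, keyword lists and all, is my own invention:

```python
import re

# Hypothetical sketch of the "forgiving COBOL" approach: ignore word
# order, scan for a known command keyword, a table word, and a known
# contact name, and hope for the best.

COMMANDS = ["edit", "modify", "change", "search", "find"]  # priority order
TABLE_WORDS = {"contact", "contacts"}
KNOWN_NAMES = ["jenny hei"]

def interpret(utterance):
    text = utterance.lower()
    words = set(re.findall(r"[a-z]+", text))
    command = next((c for c in COMMANDS if c in words), None)
    has_table = bool(words & TABLE_WORDS)
    name = next((n for n in KNOWN_NAMES if n in text), None)
    if command and has_table and name:
        return (command, "Contacts", name)
    return None  # fall through to "Sorry, I didn't understand that"

print(interpret("I want to edit the info about my contact Jenny Hei"))
# -> ('edit', 'Contacts', 'jenny hei')
print(interpret("That record I added yesterday.  Let me change it."))
# -> None: no table word, no name
```

Note how rigid this really is: anything that doesn’t name both the table and the contact falls straight through.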

Good work!  Now are you quite sure you also allowed these?

I should like to modify the particulars about Jenny Hei, a contact.

Get me Contacts; I’m’a edit Jenny Hei’s record.

Change Jenny Hei’s name to Mei.  She’s under Contacts.

That record I added yesterday.  Let me change it.

Lemme see if I… I mean, I need to check the spelling… just let me see the H’s, OK?  Oh I’m talking about the ‘people I know’ feature.

Is there a Hei in the Contacts thingy?  It might be Hai.  First name is Jennifer.  Did I record it as Jenny?

Natural language is hard. It’s fractally hard. You may be able to interpret simple sentences– like in all those Infocom games of the ’80s– but actual language just throws on construction after construction.  Linguists have been writing about English syntax for more than sixty years and they’re not done yet.

And that’s before we even get into incomplete or ambiguous queries! The user leaves off the key word “Contacts”, or isn’t clear if they’re adding or editing, or gives the name wrong, or gives the name right only it’s recorded wrong in the database, or gives all the edits before saying what they apply to, or the name in question sounds like a command, or the user is malicious and insists that the contact’s name is Robert’); DROP TABLE students; -- …
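For the record, the standard defense against that last user is a parameterized query, which binds the name as data rather than pasting it into the SQL. A minimal sqlite3 sketch, with an invented schema:

```python
import sqlite3

# The injection only works if the app pastes user text straight into
# the SQL string. A parameterized query binds the name as data, so no
# second statement can sneak in.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (name TEXT)")
db.execute("INSERT INTO contacts VALUES ('Jenny Hei')")

malicious = "Robert'); DROP TABLE students; --"
rows = db.execute("SELECT name FROM contacts WHERE name = ?",
                  (malicious,)).fetchall()
print(rows)  # [] -- no such contact, and nothing was dropped
```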

The more you produce the illusion that your app is intelligent, the more users will assume it’s way more intelligent than it is. And when that fails, they will be just as annoyed and frustrated as if they had to learn to push the Contacts button in the first place.

I know a bunch of people are jumping up and down and saying But Siri! Well, first, Google “siri fails”… this could eat up your whole morning. Siri is quite impressive (and has a megacorporation behind its extensive programming), but it has a relatively limited domain (the basic apps on your phone), and also– so far as I know, I don’t have an iPhone– it can’t get deeply into trouble, so its errors are funny rather than devastating.

One of programmers’ oldest dreams, or snares, is to write an interface that’s so simple to use that the business analyst can write most of the app.  I’ve fallen for this one myself, more than once!  The sad truth is even if you do this task pretty well, non-programmers aren’t going to be able to use it.  To program, you have to think like a programmer, and this doesn’t change just because you make the code look like English sentences. I’ve addressed this before; the basic point is, it doesn’t come easily to non-programmers to think in small steps, to remember all the exceptions and hard cases before they come up, or to understand the data structure implied by a process.

Again, this isn’t to say that most app UIs are OK.  Nah, they’re mostly horrible.  But a) people will learn them anyway if they have to, and b) improving them is almost never a matter of making them more like natural language.

Lawrence notes that “salespeople hate software”, and I’m sure he’s right. However, he focuses on the “forms and buttons”, as if these were the sticking point.  They’re not.  Salespeople like making money– Joel Spolsky joked that they’re like The Far Side‘s Ginger


except that instead of only hearing “Ginger” they only hear “money”. Which is great! Software companies need people to go out and make sales. But salespeople are not jazzed about doing paperwork, or database work, which includes editing the contact page in SAP for Jenny.

The irony is that Lawrence, later in the article, runs into exactly the same situation with other developers, but doesn’t make the connection. What devs hate doing is documentation. Lawrence wants his fellow devs to keep a couple pages in the wiki up to date with their APIs, and they just won’t do it, unless he nags them to death.  Is this a UI problem, as Lawrence thinks SAP has?  No, it’s a motivation problem, or a mental skillset problem, or something… and whatever it is, it’s even harder than natural language programming.

(All this doesn’t mean a natural language interface would never be a good idea.  Though come to think of it, large companies that only handle voice recognition for their customer service phone number… that sucks.  They’re too slow and they fail at recognition half the time.  So if you’re a programmer writing an app used by the executives of those companies, that’s when you write a program that requires spoken natural language input.)

Edit: One more thought. Talking about editing the contact list presupposes that the user understands “editing” and “the contact list”. In this context, this is supplied by SAP itself: the customers can be presumed to understand that application’s processes and categories. Right?  It’d be interesting to know how close the customers’ user model is to the actual workings of the product. (Hint: don’t assume it’s very close.)

If a user agent was really smart enough to understand English, I’d expect it to be smart enough to fill in gaps. Do you need to specify “Contacts” if there is only one Jenny Hei in any name field?  If there’s only one Jenny under Contacts, can I leave out “Hei”? Can I define my own categories and tags and use those?  If I were talking to a human, I could say “I’m meeting Jenny on the 18th” and they’d break that down into steps: find the table Jenny is in, add a note about the meeting, find the calendar, add the appointment, set up an alarm for the 17th.  If your app can’t do all this, you don’t have “natural language processing”, you have a verbose but limited special command language.
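The sort of gap-filling I mean could at least be approximated for names: only demand a qualifier when the fragment is actually ambiguous. A toy sketch, with invented data:

```python
# Hypothetical gap-filling for contact lookup: a partial name is fine
# as long as it picks out exactly one contact.

CONTACTS = [
    {"first": "Jenny", "last": "Hei"},
    {"first": "Jenny", "last": "Park"},
    {"first": "Robert", "last": "Tables"},
]

def resolve(name_fragment):
    frag = name_fragment.lower()
    matches = [c for c in CONTACTS
               if frag in (c["first"] + " " + c["last"]).lower()]
    if len(matches) == 1:
        return matches[0]   # unambiguous: no need to say "Hei"
    if not matches:
        return None         # maybe it's "Hai"?  ask the user
    return matches          # ambiguous: ask which one they meant

print(resolve("robert"))    # {'first': 'Robert', 'last': 'Tables'}
print(resolve("jenny"))     # two matches: have to ask
```

That’s the easy one percent of the problem; the meeting-on-the-18th example still needs the agent to plan across tables, calendars, and alarms.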


Computers for 94-year-olds

We finally got a new computer for my Dad.  His old one, which was more than 10 years old, was slow and generally horrible, and I decided it was finally a quality of life issue: it was just too painful to check e-mail; plus it would randomly turn itself off…

The new one is an “All-in-one”, which turns out to mean the computer itself is part of the monitor. Technology, it’s amazing.  It was about $310… there was a $330 one too, and the sales guy won some honesty points for admitting that there was no difference between them.

What he needs, basically

Dad is struggling manfully to adapt to the new system.  You don’t fully recognize, till you set up a system for a 94-year-old, how much computers do to confuse 94-year-olds.  Such as:

  • Changing everything for no apparent reason. You can drive a car from 2004 with no difficulty, but developers think that everything must be done differently. I got him a copy of Word, and it’s like Martian Alien Word all of a sudden. The File menu goes to a completely different screen… whose idea was that? The basic needs for editing a document haven’t changed, but they’ve messed around horribly with the interface.
  • Pop-ups from virus checkers and the computer manufacturer; required updates from Windows. All things that make the computer do unexpected things he doesn’t know how to respond to. Some of them show (shudder) the Metro screen.
  • Windows is better at keeping programs working than the Mac, but still, his old photo software doesn’t work on the new machine. Fortunately Windows itself is able to get pictures off his camera. (I will probably have to do this for him, but that’s OK– I wasn’t sure it’d work at all.)
  • Not enough options for large type. I switched to a larger Windows font, but it’s still pretty small for him.

This probably makes him sound worse off than he is. He’s a smart guy; in his 80s, when he got the old computer, he read up on it and figured it out. But it takes him extra time to learn new things, even seemingly simple things like “the favorites menu now lives on the right side of the window.”  We recently got him watching DVDs on the computer, and he’s figured it out except that he never remembers that the space bar will start/stop the show.

I’m aware that there are “old-people computers” that supposedly simplify the main tasks old folks want to do. But even those would be basically a new operating system he’d have to learn, plus I don’t know if he could open his old Word documents. Plus they’re kind of expensive.

Anyway, my point is, if you’re young enough, this amount of learning new things is not bad, and can even be fun and exciting. When you’re my Dad’s age, novelty for the sake of novelty is just baffling; it’d be better if things just worked as they always did, only faster.

Ask Zompist: Programming languages

I read an article on New Scientist [link requires free registration] a couple of days ago about programming languages. The writer thinks most of them were poorly designed, that is, hard to learn, hard to use, and difficult to debug. He said that there were about 15 to 50 errors per thousand lines of code, and huge systems like Windows accumulated masses of them. “As more and more of the world is digitised”, the problem will get worse, with the potential for fatal accidents in areas such as aviation, medicine, or traffic. One solution is “user-friendly languages” which let the programmer see what they do “in real time as they tinker with the code”. Another is to design programs that write themselves, based on google searches.


So I’ve got a couple of questions for the guru.


One, what is your opinion of the article, the problem, and the suggested solutions, as an expert?

And for my second question, what qualities make a programming language good or bad? Can you rank the languages you’ve used in order of usefulness? Or is that a pointless endeavor? Are there any which are stellar, or any which you wouldn’t advise anyone to mess with?


The article is a bit misleading, I think. Brooks gives some examples of geeks trashing computer languages, but it should be understood that the geek statement “X IS TOTAL SHIT IT SHOULD DIE IN A FIRE” just means “X was about 95% what I like, but the rest disappointed me.” 

Like any geek, I love the opportunity to trot out my opinions on languages. When I was a lad, Basic was supposed to be easy, so everyone learned it.  It’s total shit and should die in a fire.  That is, it was disappointing. The early versions gave people some very bad habits, such as not naming variables, using numeric labels, and GOTOing all over the place— most of these things are fixed now.  Fortran is honestly little better; Cobol adds tedium for no good reason.  A lot of modern languages— C, C++, C#, Java, Javascript— are surprisingly similar, and inherit their basic conventions from C, and ultimately from Algol.  I liked Pascal a lot (haven’t seen a compiler for it in twenty years), and I like C# almost as much. I haven’t used Ruby or Python, but looking briefly at code snippets, they look a lot like (cleaned-up modern) Basic. An experienced programmer can always learn a new language, and how crappy their programs are depends on them, not the language.

There are, of course, lots of little stupidities that have caused me a lot of problems. To take one at random, C uses = and == with different meanings, and it perversely uses == for simple equality. Pascal got this right. There are also amazingly clever bits in languages that I’d hate to do without (data hiding, for instance).   

One thing the article misses is that what’s really a pain to learn is not the mechanics of the language, but the libraries and UI frameworks.  The C family and Java are very similar, but the libraries aren’t, and that’s what will take you months to pick up.  (Unless you use .NET, which is designed to use the same library across multiple languages, so the languages themselves become a set of syntactic skins you can pick by personal preference.)

Programmers have realized before how tedious and error-prone their work is, and there have been many attempts to help, including:

  • Smarter development environments, like Visual Studio. These take care of indenting for you, they’ll check for mismatched braces and such, keywords are highlighted. You can rename a variable program-wide, or break out a section of code as a separate routine, or insert commonly used code fragments, with one command. This not only saves time, but keeps you from making common errors.
  • New paradigms— as when we switched from procedural to object-oriented programming about twenty years ago, or to Agile about ten years ago. When you’re in your 20s you get really excited about these revolutions. Crusty middle-aged people like me are a little more jaded— these methodological changes never quite live up to the hype, especially as they don’t address the management problems identified fifty years ago by Frederick Brooks: too much pressure to make impossible deadlines with inadequate tools.  (Which isn’t to say change is bad.  Object-oriented programming was an improvement, partly because it allowed much better code re-use, and partly because if it’s done right, routines are much shorter, and shorter code is more likely to work. But good lord, I’ve seen some horrifying OO code.)
  • Higher levels of abstraction. This is largely what the New Scientist article is talking about.  Earlier forms of the idea include specialized string processing languages (Snobol), simulation languages (Simula), and database specs (SQL). When I was doing insurance rating, I created an insurance rating language. Someone always has a vision of programming by moving colored blocks around or something.

A lot of programming is incredibly repetitive; all programmers recognize this. The bad programmer addresses it by copying and pasting code, so his programs consist of endless swaths of similar-but-confusingly-different routines. The good programmer addresses it by abstraction: ruthlessly isolating the common elements, handling common problems the same way (ideally with the same code), making UI elements consistent, moving as much detailed behavior as possible out of the code itself into high-level specifications. All the libraries I mentioned are just earlier programmers’ prepackaged solutions to common problems.
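As a tiny illustration of my own: the copy-paster writes one near-identical routine per report, while the abstractor isolates the common element once.

```python
# Copy-and-paste style: one of what would become many
# similar-but-confusingly-different routines.
def report_us_sales(rows):
    total = 0
    for r in rows:
        if r["country"] == "US":
            total += r["amount"]
    return f"US sales: {total}"

# Abstracted style: the common element, isolated once.
def total_where(rows, **criteria):
    """Sum 'amount' over rows matching every keyword criterion."""
    return sum(r["amount"] for r in rows
               if all(r.get(k) == v for k, v in criteria.items()))

rows = [{"country": "US", "amount": 10}, {"country": "FR", "amount": 7}]
print(report_us_sales(rows))             # US sales: 10
print(total_where(rows, country="US"))   # 10
print(total_where(rows))                 # 17 -- no criteria, sum everything
```

Every new report is now a one-line call instead of another pasted loop, which is the whole game.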

Often the idea is to come up with something so powerful and easy to use that it can be given to the business analyst (that is, the non-programmer who’s telling the programmer how the real-world thing works) to do.  This usually doesn’t work, because

  • the programmer’s idea of “easy” is not that of ordinary people, so the business analyst can’t really use the tools.
  • most people don’t have the programmer’s most important learned skill: understanding that computers have to be told everything. Ordinary humans think contextually: you remember special case Y when Y comes up.  Programs can’t work like that– someone has to remember Y, and code for it, long before Y happens.

The reason that programming takes so long, and is so error-prone, is that no one can work out everything all at once, in advance. The business analyst suddenly remembers something that only happens every two years on a full moon, the salesman rushes in with a new must-have feature, the other guy’s system doesn’t work like his API says, field XYZ has to work subtly differently from field WXZ, we suddenly discover that what you just wrote to the database isn’t in the database, no one ever ran the program with real data.  Abstraction in itself will not solve these problems, and often it introduces new problems of its own— e.g. the standardized solution provided by your abstraction vendor doesn’t quite work, so you need a way to nudge it in a different direction…

Again, I don’t mean to be too cynical. When they’re done well, code generators are things of beauty— and they also don’t look much like code generators, because they’re designed for the people who want to solve a particular problem, not for coders. An example is the lovely map editing tool Valve created for Portal 2.  It allows gamers who know nothing about code or 3-d modeling to create complicated custom maps for the game. Many games have modding tools, but few are so beautifully done and so genuinely easy.

But I’m skeptical that a general-purpose code generation tool is possible.  One guy wants something Excel-like… well, he’s right that Excel is a very nice and very powerful code generator for messing with numbers. If you try using it for anything more complicated, it’s a thing of horror.  (I’ve seen Excel files that attempt to be an entire program. Once it’s on multiple worksheets, it’s no longer possible to follow the logic, and fixing or modifying it is nearly impossible.)

The other guy wants to “allow a coder to work as if everything they need is in front of them on a desk”.  I’m sure you could do some simple programs that way, but you’re not going to be able to make the sort of programs described earlier— an aircraft software suite, or Microsoft Word.  You cannot put all the elements you need on one desktop. Big programs are, as the author notes, several million lines of code.  If it’s well written, that probably means about 40,000 separate functions.  No one can even understand the purposes of those 40,000 functions— it takes a team of dozens of programmers.  Ideally there’s a pretty diagram of the architecture that does fit on a page, but it’s a teaching abstraction, far less useful— and less accurate— than an architect’s plan of a house. (Also the diagram is about two years out of date, because if there’s anything programmers hate more than other people’s programming languages, it’s documentation.)

So, in short, programmers are always building tools to abstract up from mere code, but I expect the most useful ones to be very domain-specific. Also, lots of them will be almost as awful as code, because most programmers are allergic to usability.

Plus, read this. It may not be enlightening but it should be entertaining.


So you want to live on a seastead

When I hear about libertarians who want to seastead, the jokes, like the sea-waters bursting a wall built by sub-minimum-wage contractors, just flow.  It’s impossible not to think about Rapture.

Leading the world in waterproof neon tubing

Nonetheless it’s interesting to read Charlie Loyd’s take on the idea. Loyd lived for years on a geographic anomaly– Waldron Island, in the Salish Sea between Washington State and Vancouver. The island has no stores, no public transport to the mainland, and about a hundred residents. So he groks the appeal of isolation (and islands).

At the same time, having actually done it, he’s aware, unlike the Randian isolationists, of just how much he depends on a vast interconnected human community. When you’re the last link in the supply chain– when you have to physically haul your water and groceries and gas out of the boat– you become more aware of what a complex monster it is. Randites don’t realize that they already live in Galt’s Gulch– that they live in a highly artificial island where the people who build and maintain it have been airbrushed out of the picture.  Moving to a physical island would actually decrease their isolation; they’d be confronted by their dependence on a billion other people.

He talks a fair bit about Silicon Valley dudebros, and it makes me wonder if anyone has attempted to correlate political views with code quality. Of course, you can despise people and write good code… indeed, development is an excellent field for people who hate people!  But can you despise community and write good code? I’d suspect that a Randite can only thrive as a lone hacker, or as undisputed tech god. It’s hard to see how a person who doesn’t respect the community can re-use code, or write good check-in comments (or comments designed to help other people at all), or worry about maintainability, or create a user-friendly UI, or write a really flexible API, or even fix bugs filed from outside Dev.  To do all those things well requires empathy– the ability to see things from another point of view, to value other people’s work and time, to realize that not all users of your product are fellow devs.


What devs should learn from healthcare.gov

An alert reader asked what, as a software developer, I thought devs should learn from the healthcare.gov fiasco.  It’s a good question!  I don’t know what systems are involved and of course have no insider information, but I think some general lessons are obvious anyway.

1. Program defensively. According to early reporting, the front end and back end were developed by different firms which barely talked to each other.  This is a recipe for a clusterfuck, but it could have been mitigated by error handling.

The thing is, programmers are optimists.  We think things should work, and our natural level of testing is “I ran it one time and it worked.”  Adding error checking uglifies the code, and makes it several times more complicated.  (And the error checks themselves have to be tested!)

But it’s all the more necessary in a complex situation where you’re relying on far-flung components.  You must assume they will fail, and take reasonable steps to recover.
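A minimal sketch of what that defensiveness looks like, with all names invented for illustration (I have no idea what the actual login code looks like): the front end wraps its call to the back end, assumes failure is possible, and always comes back with an accurate, useful message.

```python
# Hedged sketch: wrap a call to a far-flung back-end component so that
# failure produces an honest message instead of a blank page or a 404.
# "backend_login" stands in for whatever remote call the front end makes.
def attempt_login(backend_login, username, password):
    """Return (ok, message).  Never let a back-end failure surface raw."""
    try:
        result = backend_login(username, password)
    except TimeoutError:
        # Be accurate: a timed-out external call is not a failed login.
        return (False, "The system is too busy right now; please try again.")
    except Exception as exc:
        # The user doesn't care about the internal error, but a reference
        # lets customer service pass something useful back to the devs.
        return (False, "Something went wrong on our end "
                       f"(error ref: {type(exc).__name__}).")
    if not result:
        return (False, "Login failed: check your username and password.")
    return (True, "Welcome!")
```

The point isn’t these particular strings; it’s that every failure path was thought about in advance and tells the user something true.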

An example: I managed– with difficulty– to create an account on healthcare.gov in early October.  And for three weeks I was unable to log in.  Either nothing would happen, or I’d get a 404 page.  The front end just didn’t know what to do if the back end refused a login.

Oops, I’m anthropomorphizing again, as devs do.  More accurately: some idiot dev wrote the login code, which presumably involved  some back end code, and assumed it would work.  He failed to provide a useful error message when it didn’t.

Now, it’s a tricky problem to even know what to do in case something fails!  The user doesn’t care about your internal error.  Still, there are things to do:

  • Tell the user in general terms that something went wrong– even if it’s as simple as “The system is too busy right now”.  Be accurate: if the problem is that an external call timed out, don’t say that the login failed.
  • Tell them what to do: nothing?  try again?  call someone?
  • If you’re relying on an external call, can you retry it in another thread, and provide feedback to the user that you’re retrying it?
  • Consider providing that internal error number, so it can be given to customer service and ultimately to the developers.  Devs hate to hear “It doesn’t work”, because there’s nothing they can do with that.  But it’s their own fault if they didn’t provide an error message that pinpoints exactly what is failing.
  • In a high-volume system like healthcare.gov, there will be too many errors to look at individually.  So they should be logged, so general patterns emerge.
  • Modern databases have safeguards against bad data, but it usually takes extra work on someone’s part to set them up.  E.g. a failed transaction may have to be rolled back, or tools need to be provided to find bad data.
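Two of those points– retrying an external call while keeping the user informed, and logging failures so patterns emerge– can be combined in one small helper. This is a hedged sketch, not any real system’s code; all names are invented:

```python
import logging
import time

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("frontend")

def call_with_retry(call, attempts=3, delay=0.5, on_retry=None):
    """Retry a flaky external call.

    Each failure is logged (so patterns emerge across thousands of users),
    and on_retry is a hook for giving the user feedback, e.g. updating a
    "still trying..." indicator.  The last failure is re-raised so the
    caller can show an accurate error message.
    """
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if on_retry:
                on_retry(attempt)
            if attempt == attempts:
                raise
            time.sleep(delay)
```

In a real front end the retry would run in another thread so the page stays responsive; the control flow is the same.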

2. Stress-test.  Programmers hate these too, partly because we assume that once the code is written we’re done, and partly because they’re just a huge hassle to set up.  Plus they generate gnarly errors that are hard to reproduce and thus hard to fix.

But healthcare.gov pretty obviously wasn’t ready for large-scale usage on October 1… which means that the devs didn’t plan for that usage.
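A stress test doesn’t have to be elaborate to be revealing. Here’s a crude sketch (nothing like a production load-testing rig, and every name is invented): hammer a request handler from many threads at once and report how many calls failed and how slow the worst one was.

```python
import concurrent.futures
import time

def stress_test(handler, users=100, requests_each=5, workers=20):
    """Crude load test: call handler from many threads concurrently.

    Returns (failure_count, worst_latency_seconds).  Each simulated user
    issues requests_each calls; exceptions count as failures.
    """
    def one_user(_):
        failures, worst = 0, 0.0
        for _ in range(requests_each):
            start = time.perf_counter()
            try:
                handler()
            except Exception:
                failures += 1
            worst = max(worst, time.perf_counter() - start)
        return failures, worst

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(one_user, range(users)))
    return (sum(f for f, _ in results), max(w for _, w in results))
```

Even something this simple– run before October 1 with realistic numbers– would have surfaced the gnarly concurrency errors while there was still time to fix them.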

3. Provide tools to fix errors.  I called a human being later on– apparently human beings are cheap, I didn’t even have to wait long.  I explained the problem, and the rep (a very nice woman) said that they were using the same tools as the public– the same tools that weren’t working– so she couldn’t fix the problem.  D’oh.

Frederick Brooks, 38 years ago in The Mythical Man-Month, answered the common question of why huge enterprise systems take so long and so many people to write when a couple of hackers in a garage can whip up something in a week.  Partly it’s scale– the garage project is much smaller.  But there are two factors that each add (at least) a tripling of complexity:

  • Generalizing, testing, and documenting– what turns a program that runs more or less reliably for the hackers who made it into a product that millions of people can use.
  • A system composed of multiple parts requires a huge effort to coordinate and error-check the interactions.

Part of that additional work is to make tools to facilitate smooth operation and fix errors.  It’s pretty sad if healthcare.gov really has no way of fixing all the database cruft that’s been generated by a month of failed operations.  There need to be tools apart from the main program to go in and clean things up.
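What such a tool looks like is nothing fancy: a standalone script that goes straight at the database and repairs what the main program left behind. This is purely illustrative– the schema (an accounts table and an applications table) is invented, and I have no idea what the real system’s data looks like:

```python
import sqlite3

def purge_orphans(conn):
    """Delete application records whose account no longer exists.

    A standalone cleanup tool, separate from the main program: a month of
    half-completed signups leaves orphaned child records behind, and the
    main application has no code path for removing them.  Returns the
    number of rows deleted.
    """
    cur = conn.execute(
        "DELETE FROM applications WHERE account_id NOT IN "
        "(SELECT id FROM accounts)")
    conn.commit()
    return cur.rowcount
```

The specifics don’t matter; the point is that someone has to budget for writing these unglamorous side tools, because the main program won’t fix its own cruft.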

(I was actually able to log in today!  Amazing!  So they are fixing things, albeit agonizingly slowly.  We’ll see if I can actually do anything once logged in…)

4. I hope this next lesson isn’t needed, but reports that huge teams are being shipped in from Google worry me: Adding people to a late project makes it later. This is Brooks again.  Managers are always tempted to add in the Mongolian hordes to finish a system that’s late and breaking.  It’s almost always a disaster.

The problem is that the hordes need time to understand what they’re looking at, and that steals time from the only people who presently understand it well enough to fix it.  With a complex system, it would be generous to assume that an outsider can come up to speed within a month, and in that time they’ll be pestering people with stupid questions and generally wasting the devs’ time.

As a qualifier, I’ve participated in successful rescue operations, on a much smaller scale.  Sometimes you do need an outside set of eyes.  On the other hand, those rescues involved finding that the existing code base, or part of it, was an unrecoverable disaster and rewriting it.  And that takes time, not something that the media or a congressional committee is going to understand.

Second Life’s job

Slate has an annoying story on “Why Second Life failed,” which presupposes that SL failed.  I don’t think it did— except compared to the hype.  The authors compare it to Facebook— well, jeez, almost every Internet app is a failure compared to Facebook.

They also annoyingly illustrate the story with a video that showcases butt-ugly SL graphics from about 2006.  SL avatars have improved:

Slate's example (left); today's avatars (right; from an ad)

What’s undeniable is that SL hasn’t developed as Linden Labs expected it to.  Right, because LL is pretty clueless.  They clearly expected it to either be a social network, or a venue for business meetings.  But it was never any good as a social network, and that was clear years ago when all we had was texting.  All you need to network is text chat.  The SL interface— avatars, virtual spaces, logging into a dedicated app— just gets in the way.   You can get completely up to date on Facebook in the time it takes to log in to SL.  As for teleconferencing, this is a pretty niche market, but it’s perfectly well served by a webcam.  Honestly a phone call will do— I don’t need to see the pasty faces of my coworkers in another state or country.  If I do want to see them I want to see them, not an avatar that doesn’t show their gestures or facial expressions.

Also, their basic world model is broken.  I’ve spent many hours building maps for video games, and map designers are strongly encouraged to optimize rendering— mostly by limiting textures and line-of-sight visibility.  SL has to render a huge sprawling world, and as a result it’s slow and you can’t fit more than forty people in any one region.

LL’s main way to make money is to rent virtual land.  But the result is that, well, there’s way too much land in SL.  People build things and never use them, so people new to SL are confronted with a huge but desolate vista composed mostly of crap.

The article is trying to make a point that an app has to fulfill a “job”— i.e. meet some need, even if it’s a new kind of need.  But it fails to actually apply this test to SL.  SL does meet needs, just not those that LL thought it would.

  • It’s a great 3-D modelling program— the easiest one I’m aware of, which is why I recommend it in the PCK.  It’s fun to build things and you can show them off easily.  Of course this is going to be a niche market.
  • I know a lot of people who like to shop and decorate, and they support a surprisingly large marketplace for people who like to create things.  A better comparison is not to Facebook but to The Sims.
  • It’s great for roleplaying— with minimal work  you can create your own RPG populated by actual people rather than NPCs.  (LL has never quite known how to build this market as it includes a heavy dose of sex.)

What these things have in common is that they’re less like social networking and more like games.  From that perspective, what LL should do, I think, is allow modding the engine, in the way that Valve or Bethesda do.  They’ve already open-sourced the viewer; they should open-source the engine too.  Then people could create engines that support NPCs, or combat, or puzzle games, or really expansive explorable environments, or which improve avatar modelling and control.

Or to put it another way: people love 3-d virtual environments!  It’s a multi-billion-dollar industry!  But the LL (or Snow Crash) vision of fitting them all into one big multiverse doesn’t make much sense.

Word: Software development/devolution

So, the Uyseʔ grammar is up.  While doing this, I tried to be all 21st century and produce the HTML directly from a Word docx file.

It was awfully pretty… also awfully bloated… 1.1M.  I redid it the creaky old 20th century way; it’s about 200K.  Word’s HTML is full of crap, things like references to every font in your system, plus lots of information that is presumably there if you want to re-import it as a Word file.  Why, after all these years, isn’t there a “slim export” feature?

I still do most of my writing in Mac Word 5.1, dated 1992.  Partly this is because I’m so used to it I don’t have to think about most functions, but also because it’s blindingly fast.  Some of the newer Words were unusably slow.

I got Mac Word 2008 so I could write the LCK using Unicode.  On the whole it’s really good– it does such good PDF outputting that I didn’t need Acrobat; its indexing and cross-reference functions were a great time-saver; it can read Illustrator files; zooming in is very valuable.

At the same time, it’s a few steps forward, a few steps back.  The fact that it crashes occasionally is worrying.  It’s also slow, especially once you start using a lot of those advanced features.  And it just has a number of perverse features:

  • an “element gallery” bar you can’t turn off (my eyes aren’t what they used to be, I want to see a whole page as big as I can get it)
  • no overstrike mode that I could find (Word 5.1 had this)
  • Unusual Unicode characters appear in some random font; there seems to be no way to say “always use Gentium”, much less “if I insert a Gentium character ‘cos the default font doesn’t support it, that doesn’t mean I’m switching to Gentium throughout”
  • No options to nudge a picture; in general picture handling is clumsy (e.g. just getting one centered is tricky since when the cursor is in a picture it replaces the entire formatting pane, including the “center” control)
  • Word can’t do incremental saves any more, so saving a large document is slow

There’s a lesson here for software developers, but it’s probably so narrow that it doesn’t apply to much beyond Word.  Where do you go with a word processor?  Maybe every few years someone comes up with a killer feature you really want to add, but that’s not enough to get the masses to shell out $125 every three years.  So they have to keep redoing it anyway, changing the interface, adding stuff most people don’t need.  And along the way stuff that used to work just fine gets lost or broken.

(This probably doesn’t apply to most systems because there are always too many real features to add.  Though the point is similar to Joel Spolsky’s advice not to rewrite all your code as you’re dying to.)