Recently I wrote about a commercial NLP project that bit off more than it could chew. In response an alert reader sent me a fascinating paper by Ami Kronfeld, “Why You Still Can’t Talk to Your Computer”.  It’s unfortunately not online, and Kronfeld is sadly no longer with us, but it was presented publicly at the International Computer Science Institute, so I figure it’s fair game.

Kronfeld worked for NLI (Natural Language Incorporated), which produced a natlang interface to relational databases. The project was eventually made part of Microsoft SQL Server (apparently under the name English Query), but it was allowed to die away.

It worked pretty well— Kronfeld gives the sample exchange:

Does every department head in Center number 1135 have an office in Berkeley?
[Answer: “No. All heads that work for center number 1135 are not located in an office in Berkeley”]

Who isn’t?
[Answer: Paul Rochester is the head not located in an office in Berkeley that works in center number 1135]

He points out that language and relational databases share an abstract structure: they have things (nouns, entities) which have properties (adjectives, values) and relate to one another (verbs, cross-references). This sort of matchup doesn’t always occur nicely.  (E.g. your word processor understands characters and paragraphs, but it hasn’t the slightest idea what any of your words mean.)

But the interesting bit is Kronfeld’s analysis of why NLI failed. One aspect was amusing, but also insightful: we humans don’t have a known register for talking to computers. For instance, one executive sat down at the NLI interface and typed:

How can we make more money?

The IT guys reading this are groaning, but the joke’s on us. If you advertise that a program can understand English, why be surprised that people expect that it can understand English?

Curiously, people attempting to be “computery” were no easier to understand:

Select rows where age is less than 30 but experience is more than 5

This seems to be an attempt to create an on-the-fly pidgin between SQL and English, and of course the NLI program could make nothing of it.

Of course there were thousands of questions that could be properly interpreted. But the pattern was not obvious. E.g. an agricultural database had a table of countries and a table of crops.  The syntactic template S grow O could be mapped to this— look for S in the country table, O in the crops— allowing questions like these to be answered:

  • Does Italy grow rice?
  • What crops does each country grow?
  • Is rice grown by Japan?
  • Which countries grow rice?

But then this simple question doesn’t work:

  • Does rice grow in India?

Before I say why, take a moment to guess.  We have no trouble with this question, so why does the interface?

The answer: it’s a different syntactic template.  S grows in O is actually the reverse of our earlier template— before, the country was growing things; here the rice is growing, all by itself, and the location is given in a prepositional phrase. As I said before, language is fractally complicated: you handle the most common cases, and what remains is more complicated than all the rules you’ve found so far.

Now, you can of course add a new rule to handle this case.  And then another new rule, for the next case that doesn’t fit.  And then another.  Kronfeld tells us that there were 700 separate rules that mapped between English and the database structure.  And that’s one database.
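To see why the rules multiply, here is a minimal sketch of this kind of template-to-query mapping. Everything in it is invented (the table name, the queries, and of course NLI did far more than regex matching), but it shows why the two "grow" templates need separate rules, and why rule order already matters:

```python
import re

# Hypothetical schema: a grows(country, crop) table.
# Each rule pairs a syntactic template with a query builder.
# The "S grows in O" template must come first, and must swap its
# captures: there the *crop* is the subject and the country is a location.
RULES = [
    (re.compile(r"does (?P<crop>\w+) grow in (?P<country>\w+)", re.I),
     lambda m: ("SELECT 1 FROM grows WHERE country=? AND crop=?",
                (m["country"], m["crop"]))),
    (re.compile(r"does (?P<country>\w+) grow (?P<crop>\w+)", re.I),
     lambda m: ("SELECT 1 FROM grows WHERE country=? AND crop=?",
                (m["country"], m["crop"]))),
]

def interpret(question):
    """Return (sql, params) for the first template that matches, else None."""
    for pattern, build in RULES:
        m = pattern.search(question)
        if m:
            return build(m)
    return None   # fell through every rule; NLI needed some 700 of them
```

Put the general template first and it swallows "Does rice grow in India?" with country=rice; add passives, plurals, and pronouns and the rules start interacting in earnest.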

So, the surprising bit from Kronfeld’s paper is not “natural language is hard”, but that the difficulty lives in a very particular area: specifying the semantics of the relational database. As he puts it:

I realized from the very start that what was required for this application to work was nothing short of the creation of a new profession: the profession of connecting natural language systems to relational databases.

So, that’s a way forward if you insist on having a natlang interface for your database!  NLP isn’t just a black box you can tack on to your program. That is, parsing the English query, which is something you could reasonably assign to third-party software, is only part of the job.  The rest is a detailed matchup between the syntactic/semantic structures found, and your particular database, and that’s going to be a lot more work than it sounds like.



At Mefi, someone linked to this very interesting account of failure at a programming startup. There’s a lot to say about Lawrence’s story, starting with the fact that, now 40 years on, dev shops still don’t understand The Mythical Man-Month. Also, that Agile does not give you the magical ability to skip the step where you work out the architecture and internal APIs of your app.

But I want to focus on this: “The idea is brilliant: Natural Language Processing as an interface to interact with big Customer Relationship Management tools such as SAP.”

I’m’a let you finish, but no, it’s not brilliant.


Instead of the user clicking a button to edit old Contacts, they would type, “I want to edit the info about my contact Jenny Hei,” and we would then find the info about Jenny Hei and offer it to the user so they could edit it. That was the new plan.

This was a brilliant idea. Salespeople hate software. They are good dealing with people, but they hate buttons and forms and all the other junk that is part of dealing with software.

People always think that the ideal program would understand English, so that all you have to do is talk to it about your problem, and it goes and does it.

At least one early example is Asimov’s “Second Foundation”, from 1949. A teenage girl, Arkady, is using a word processor to type a paper (which happens to give us the exposition for the story).  But she’s interrupted by a real world conversation, the conversation gets recorded in the paper, and she apparently has no way to edit or delete the extra comments– the paper is ruined and she has to start over. What a horrible UI!

Let’s go back to Lawrence’s example.  Posit, for a moment, that the UI works as intended: you can type

I want to edit the info about my contact Jenny Hei.

and the app gets ready to do just that. Awesome, right?

Yes, the first time. When the alternative is looking over an unfamiliar program to find the Contacts button… let’s not even talk about having to watch Youtube tutorials to learn how to use the program, as I’ve had to do with Blender… then just speaking an English sentence sounds very attractive.

The second time, that’s OK too.  The tenth time, especially ten times in a row… you’re going to wonder if there’s a better way.  The thousandth time, you’re going to curse the programmer and his offspring to the third generation.

If you’re a programmer… is this how you want to program?  Do you normally write in COBOL?


Admit it, you thought it was pretty neat when C let you say

a++

rather than

a = a + 1

If you actually had to use Lawrence’s interface, you’d breathe an enormous sigh of relief if someone installed a mod that let you type

edit contact Jenny Hei
And you’d be even happier if the mod allowed you to hit the Contacts button, type J in the search box which is automatically enabled, and hit enter.

It’s not that interfaces can’t get too arcane!  You can get a lot done if you know EMACS really well… but for most people it’s about as easy to master as quantum mechanics.  A WYSIWYG word processor is much nicer.  But notice that we don’t edit by saying

Go to the second paragraph. Select the third sentence. No, the one after that. Change “pretty good” to “excellent”. Now move that sentence to the end of the paragraph. The whole sentence, not just the word.
Who has time to type all that?  Or say it, for that matter?

Would you want to drive your car that way?  No, for the same reason you wouldn’t want to drive it with the WASD keys.  Spoken language is just not very precise.  (Do you think you could direct a robot on how to change lanes?  First, how do you communicate exactly how far to turn the wheel?  Second, you probably don’t know how yourself— only your cerebellum knows.)

And this is all assuming that you can program a computer to understand spoken commands. Lawrence’s team evidently didn’t realize that what they were asked to do was implement an AI, at a level that has never been done.

What you probably can do is a more forgiving COBOL. That is, you create a toy world, not unlike Terry Winograd’s SHRDLU, and work out an English-like code to manage it which is rigid in its own way, but happens to recognize a lot of keywords in different orders.  For instance, maybe it can handle all of

I want to edit the info about my contact Jenny Hei

Edit the record under Contacts for Jenny Hei

Search for Jenny Hei in Contacts.

Find Jenny Hei using the Contacts file and let me edit.
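A "forgiving" recognizer of this sort might be sketched like so. Everything here is invented for illustration: the keyword sets, the table mapping, and the crude capitalized-name heuristic.

```python
# Toy "forgiving" recognizer: look for a verb keyword, a table keyword,
# and a capitalized name, in any order.
EDIT_VERBS = {"edit", "change", "modify", "update"}
FIND_VERBS = {"find", "search", "show", "see"}
TABLES = {"contact": "Contacts", "contacts": "Contacts"}

def parse_command(text):
    words = text.replace(",", " ").replace(".", " ").split()
    lower = [w.lower() for w in words]
    action = ("edit" if EDIT_VERBS & set(lower)
              else "find" if FIND_VERBS & set(lower) else None)
    table = next((TABLES[w] for w in lower if w in TABLES), None)
    # Guess the name: capitalized words (skipping the sentence-initial one)
    # that aren't table keywords.
    name = " ".join(w for w in words[1:]
                    if w[:1].isupper() and w.lower() not in TABLES)
    if action and table and name:
        return {"action": action, "table": table, "name": name}
    return None   # rigid in its own way
```

It happens to handle the four commands above; each new phrasing means another patch to the heuristics, and the patches soon start fighting each other.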

Good work!  Now are you quite sure you also allowed these?

I should like to modify the particulars about Jenny Hei, a contact.

Get me Contacts; I’m’a edit Jenny Hei’s record.

Change Jenny Hei’s name to Mei.  She’s under Contacts.

That record I added yesterday.  Let me change it.

Lemme see if I… I mean, I need to check the spelling… just let me see the H’s, OK?  Oh I’m talking about the ‘people I know’ feature.

Is there a Hei in the Contacts thingy?  It might be Hai.  First name is Jennifer.  Did I record it as Jenny?

Natural language is hard. It’s fractally hard. You may be able to interpret simple sentences– like in all those Infocom games of the ’80s– but actual language just throws on construction after construction.  Linguists have been writing about English syntax for more than sixty years and they’re not done yet.

And that’s before we even get into incomplete or ambiguous queries! The user leaves off the key word “Contacts”, or isn’t clear if they’re adding or editing, or gives the name wrong, or gives the name right only it’s recorded wrong in the database, or gives all the edits before saying what they apply to, or the name in question sounds like a command, or the user is malicious and insists that the contact’s name is Robert'); DROP TABLE students; -- …
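The malicious case, at least, has a standard defense: parameter binding, which keeps user input as data rather than SQL text. A minimal sketch with SQLite (the table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT)")

def add_contact(name):
    # Parameter binding: the name travels as bound data, never as SQL text,
    # so a "name" full of SQL is just a weird name.
    conn.execute("INSERT INTO contacts (name) VALUES (?)", (name,))

add_contact("Jenny Hei")
add_contact("Robert'); DROP TABLE contacts; --")   # stored, not executed
```

That handles the saboteur; the merely vague and ambiguous users are the hard part.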

The more you produce the illusion that your app is intelligent, the more users will assume it’s way more intelligent than it is. And when that fails, they will be just as annoyed and frustrated as if they had to learn to push the Contacts button in the first place.

I know a bunch of people are jumping up and down and saying But Siri! Well, first, Google “siri fails”… this could eat up your whole morning. Siri is quite impressive (and has a megacorporation behind its extensive programming), but it has a relatively limited domain (the basic apps on your phone), and also– so far as I know, I don’t have an iPhone– it can’t get deeply into trouble, so its errors are funny rather than devastating.

One of programmers’ oldest dreams, or snares, is to write an interface that’s so simple to use that the business analyst can write most of the app.  I’ve fallen for this one myself, more than once!  The sad truth is even if you do this task pretty well, non-programmers aren’t going to be able to use it.  To program, you have to think like a programmer, and this doesn’t change just because you make the code look like English sentences. I’ve addressed this before; the basic point is, it doesn’t come easily to non-programmers to think in small steps, to remember all the exceptions and hard cases before they come up, or to understand the data structure implied by a process.

Again, this isn’t to say that most app UIs are OK.  Nah, they’re mostly horrible.  But a) people will learn them anyway if they have to, and b) improving them is almost never a matter of making them more like natural language.

Lawrence notes that “salespeople hate software”, and I’m sure he’s right. However, he focuses on the “forms and buttons”, as if these were the sticking point.  They’re not.  Salespeople like making money– Joel Spolsky joked that they’re like The Far Side‘s Ginger


except that instead of only hearing “Ginger” they only hear “money”. Which is great! Software companies need people to go out and make sales. But salespeople are not jazzed about doing paperwork, or database work, which includes editing the contact page in SAP for Jenny.

The irony is that Lawrence, later in the article, runs into exactly the same situation with other developers, but doesn’t make the connection. What devs hate doing is documentation. Lawrence wants his fellow devs to keep a couple pages in the wiki up to date with their APIs, and they just won’t do it, unless he nags them to death.  Is this a UI problem, as Lawrence thinks SAP has?  No, it’s a motivation problem, or a mental skillset problem, or something… and whatever it is, it’s even harder than natural language programming.

(All this doesn’t mean a natural language interface would never be a good idea.  Though come to think of it, large companies whose customer service numbers only offer voice recognition… that sucks.  They’re too slow and they fail at recognition half the time.  So if you’re a programmer writing an app used by the executives of those companies, that’s when you write a program that requires spoken natural language input.)

Edit: One more thought. Talking about editing the contact list presupposes that the user understands “editing” and “the contact list”. In this context, this is supplied by SAP itself: the customers can be presumed to understand that application’s processes and categories. Right?  It’d be interesting to know how close the customers’ user model is to the actual workings of the product. (Hint: don’t assume it’s very close.)

If a user agent was really smart enough to understand English, I’d expect it to be smart enough to fill in gaps. Do you need to specify “Contacts” if there is only one Jenny Hei in any name field?  If there’s only one Jenny under Contacts, can I leave out “Hei”? Can I define my own categories and tags and use those?  If I were talking to a human, I could say “I’m meeting Jenny on the 18th” and they’d break that down into steps: find the table Jenny is in, add a note about the meeting, find the calendar, add the appointment, set up an alarm for the 17th.  If your app can’t do all this, you don’t have “natural language processing”, you have a verbose but limited special command language.
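For illustration only, here is roughly the decomposition a genuinely smart agent would have to perform for that one casual sentence. Every step name, table, and date rule below is invented; the point is just how many distinct actions hide inside it.

```python
from datetime import date, timedelta

def plan_meeting(first_name, day, today):
    # "the 18th" means the next 18th on or after today
    month, year = today.month, today.year
    if today.day > day:
        month += 1
        if month > 12:
            month, year = 1, year + 1
    when = date(year, month, day)
    return [
        ("find_contact", first_name),               # find the table Jenny is in
        ("add_note", first_name, f"meeting {when}"),
        ("add_appointment", when),
        ("set_alarm", when - timedelta(days=1)),    # remind the day before
    ]
```

And this sketch already ducks the hard parts: which Jenny, what if the 18th is past for this month and the user meant next year, what if there’s no calendar entry type for meetings.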


We finally got a new computer for my Dad.  His old one, which was more than 10 years old, was slow and generally horrible, and I decided it was finally a quality of life issue: it was just too painful to check e-mail; plus it would randomly turn itself off…

The new one is an “All-in-one”, which turns out to mean the computer itself is part of the monitor. Technology, it’s amazing.  It was about $310… there was a $330 one too, and the sales guy won some honesty points for admitting that there was no difference between them.


What he needs, basically

Dad is struggling manfully to adapt to the new system.  You don’t fully recognize, till you set up a system for a 94-year-old, how much computers do to confuse 94-year-olds.  Such as:

  • Changing everything for no apparent reason. You can drive a car from 2004 with no difficulty, but developers think that everything must be done differently. I got him a copy of Word, and it’s like Martian Alien Word all of a sudden. The File menu goes to a completely different screen… whose idea was that? The basic needs for editing a document haven’t changed, but they’ve messed around horribly with the interface.
  • Pop-ups from virus checkers and the computer manufacturer; required updates from Windows. All things that make the computer do unexpected things he doesn’t know how to respond to. Some of them show (shudder) the Metro screen.
  • Windows is better at keeping programs working than the Mac, but still, his old photo software doesn’t work on the new machine. Fortunately Windows itself is able to get pictures off his camera. (I will probably have to do this for him, but that’s OK– I wasn’t sure it’d work at all.)
  • Not enough options for large type. I switched to a larger Windows font, but it’s still pretty small for him.

This probably makes him sound worse off than he is. He’s a smart guy; in his 80s, when he got the old computer, he read up on it and figured it out. But it takes him extra time to learn new things, even seemingly simple things like “the favorites menu now lives on the right side of the window.”  We recently got him watching DVDs on the computer, and he’s figured it out except that he never remembers that the space bar will start/stop the show.

I’m aware that there are “old-people computers” that supposedly simplify the main tasks old folks want to do. But even those would be basically a new operating system he’d have to learn, plus I don’t know if he could open his old Word documents. Plus they’re kind of expensive.

Anyway, my point is, if you’re young enough, this amount of learning new things is not bad, and can even be fun and exciting. When you’re my Dad’s age, novelty for the sake of novelty is just baffling; it’d be better if things just worked as they always did, only faster.

I read an article in New Scientist [link requires free registration] a couple of days ago about programming languages. The writer thinks most of them were poorly designed, that is, hard to learn, hard to use, and difficult to debug. He said that there were about 15 to 50 errors per thousand lines of code, and huge systems like Windows accumulated masses of them. “As more and more of the world is digitised”, the problem will get worse, with the potential for fatal accidents in areas such as aviation, medicine, or traffic. One solution is “user-friendly languages” which let the programmer see what they do “in real time as they tinker with the code”. Another is to design programs that write themselves, based on Google searches.


So I’ve got a couple of questions for the guru.


One, what is your opinion of the article, the problem, and the suggested solutions, as an expert?

And for my second question, what qualities make a programming language good or bad? Can you rank the languages you’ve used in order of usefulness? Or is that a pointless endeavor? Are there any which are stellar, or any which you wouldn’t advise anyone to mess with?


The article is a bit misleading, I think. Brooks gives some examples of geeks trashing computer languages, but it should be understood that the geek statement “X IS TOTAL SHIT IT SHOULD DIE IN A FIRE” just means “X was about 95% what I like, but the rest disappointed me.” 

Like any geek, I love the opportunity to trot out my opinions on languages. When I was a lad, Basic was supposed to be easy, so everyone learned it.  It’s total shit and should die in a fire.  That is, it was disappointing. The early versions gave people some very bad habits, such as not naming variables, using numeric labels, and GOTOing all over the place— most of these things are fixed now.  Fortran is honestly little better; Cobol adds tedium for no good reason.  A lot of modern languages— C, C++, C#, Java, Javascript— are surprisingly similar, and inherit their basic conventions from the Algol line, as Pascal did.  I liked Pascal a lot (haven’t seen a compiler for it in twenty years), and I like C# almost as much. I haven’t used Ruby or Python, but looking briefly at code snippets, they look a lot like (cleaned-up modern) Basic. An experienced programmer can always learn a new language, and how crappy their programs are depends on them, not the language.

There are, of course, lots of little stupidities that have caused me a lot of problems. To take one at random, C uses = and == with different meanings, and it perversely uses == for simple equality. Pascal got this right. There are also amazingly clever bits in languages that I’d hate to do without (data hiding, for instance).   

One thing the article misses is that what’s really a pain to learn is not the mechanics of the language, but the libraries and UI frameworks.  The C family and Java are very similar, but the libraries aren’t, and that’s what will take you months to pick up.  (Unless you use .NET, which is designed to use the same library across multiple languages, so the languages themselves become a set of syntactic skins you can pick by personal preference.)

Programmers have realized before how tedious and error-prone their work is, and there have been many attempts to help, including:

  • Smarter development environments, like Visual Studio. These take care of indenting for you, they’ll check for mismatched braces and such, keywords are highlighted. You can rename a variable program-wide, or break out a section of code as a separate routine, or insert commonly used code fragments, with one command. This not only saves time, but keeps you from making common errors.
  • New paradigms— as when we switched from procedural to object-oriented programming about twenty years ago, or to Agile about ten years ago. When you’re in your 20s you get really excited about these revolutions. Crusty middle-aged people like me are a little more jaded— these methodological changes never quite live up to the hype, especially as they don’t address the management problems identified fifty years ago by Frederick Brooks: too much pressure to make impossible deadlines with inadequate tools.  (Which isn’t to say change is bad.  Object-oriented programming was an improvement, partly because it allowed much better code re-use, and partly because if it’s done right, routines are much shorter, and shorter code is more likely to work. But good lord, I’ve seen some horrifying OO code.)
  • Higher levels of abstraction. This is largely what the New Scientist article is talking about.  Earlier forms of the idea include specialized string processing languages (Snobol), simulation languages (Simula), and database specs (SQL). When I was doing insurance rating, I created an insurance rating language. Someone always has a vision of programming by moving colored blocks around or something.

A lot of programming is incredibly repetitive; all programmers recognize this. The bad programmer addresses it by copying and pasting code, so his programs consist of endless swaths of similar-but-confusingly-different routines. The good programmer addresses it by abstraction: ruthlessly isolating the common elements, handling common problems the same way (ideally with the same code), making UI elements consistent, moving as much detailed behavior as possible out of the code itself into high-level specifications. All the libraries I mentioned are just earlier programmers’ prepackaged solutions to common problems.
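As a tiny illustration of that last point, here is validation driven by a spec table rather than by copy-pasted per-field routines. The field names and limits are invented:

```python
# Behavior lives in a spec table, not in per-field code.
FIELD_SPECS = {
    "age":        {"type": int, "min": 0, "max": 130},
    "experience": {"type": int, "min": 0, "max": 80},
    "name":       {"type": str, "required": True},
}

def validate(record):
    """One routine handles every field the same way; adding a field
    means adding a line of data, not a new block of code."""
    errors = []
    for field, spec in FIELD_SPECS.items():
        value = record.get(field)
        if value is None:
            if spec.get("required"):
                errors.append(f"{field} is required")
        elif not isinstance(value, spec["type"]):
            errors.append(f"{field} has the wrong type")
        elif spec["type"] is int and not spec["min"] <= value <= spec["max"]:
            errors.append(f"{field} is out of range")
    return errors
```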

Often the idea is to come up with something so powerful and easy to use that it can be given to the business analyst (that is, the non-programmer who’s telling the programmer how the real-world thing works) to do.  This usually doesn’t work, because

  • the programmer’s idea of “easy” is not that of ordinary people, so the business analyst can’t really use the tools.
  • most people don’t have the programmer’s most important learned skill: understanding that computers have to be told everything. Ordinary humans think contextually: you remember special case Y when Y comes up.  Programs can’t work like that– someone has to remember Y, and code for it, long before Y happens.

The reason that programming takes so long, and is so error-prone, is that no one can work out everything all at once, in advance. The business analyst suddenly remembers something that only happens every two years on a full moon, the salesman rushes in with a new must-have feature, the other guy’s system doesn’t work like his API says, field XYZ has to work subtly differently from field WXZ, we suddenly discover that what you just wrote to the database isn’t in the database, no one ever ran the program with real data.  Abstraction in itself will not solve these problems, and often it introduces new problems of its own— e.g. the standardized solution provided by your abstraction vendor doesn’t quite work, so you need a way to nudge it in a different direction…

Again, I don’t mean to be too cynical. When they’re done well, code generators are things of beauty— and they also don’t look much like code generators, because they’re designed for the people who want to solve a particular problem, not for coders. An example is the lovely map editing tool Valve created for Portal 2.  It allows gamers who know nothing about code or 3-d modeling to create complicated custom maps for the game. Many games have modding tools, but few are so beautifully done and so genuinely easy.

But I’m skeptical that a general-purpose code generation tool is possible.  One guy wants something Excel-like… well, he’s right that Excel is a very nice and very powerful code generator for messing with numbers. If you try using it for anything more complicated, it’s a thing of horror.  (I’ve seen Excel files that attempt to be an entire program. Once it’s on multiple worksheets, it’s no longer possible to follow the logic, and fixing or modifying it is nearly impossible.)

The other guy wants to “allow a coder to work as if everything they need is in front of them on a desk”.  I’m sure you could do some simple programs that way, but you’re not going to be able to make the sort of programs described earlier— an aircraft software suite, or Microsoft Word.  You cannot put all the elements you need on one desktop. Big programs are, as the author notes, several million lines of code.  If it’s well written, that probably means about 40,000 separate functions.  No one person can even understand the purposes of those 40,000 functions— it takes a team of dozens of programmers.  Ideally there’s a pretty diagram of the architecture that does fit on a page, but it’s a teaching abstraction, far less useful— and less accurate— than an architect’s plan of a house. (Also the diagram is about two years out of date, because if there’s anything programmers hate more than other people’s programming languages, it’s documentation.)

So, in short, programmers are always building tools to abstract up from mere code, but I expect the most useful ones to be very domain-specific. Also, lots of them will be almost as awful as code, because most programmers are allergic to usability.

Plus, read this. It may not be enlightening but it should be entertaining.


When I hear about libertarians who want to seastead, the jokes, like the sea-waters bursting a wall built by sub-minimum-wage contractors, just flow.  It’s impossible not to think about Rapture.

Leading the world in waterproof neon tubing


Nonetheless it’s interesting to read Charlie Loyd’s take on the idea. Loyd lived for years on a geographic anomaly– Waldron Island, in the Salish Sea between Washington State and Vancouver. The island has no stores, no public transport to the mainland, and about a hundred residents. So he groks the appeal of isolation (and islands).

At the same time, having actually done it, he’s aware, unlike the Randian isolationists, of just how much he depends on a vast interconnected human community. When you’re the last link in the supply chain– when you have to physically haul your water and groceries and gas out of the boat– you become more aware of what a complex monster it is. Randites don’t realize that they already live in Galt’s Gulch– that they live in a highly artificial island where the people who build and maintain it have been airbrushed out of the picture.  Moving to a physical island would actually decrease their isolation; they’d be confronted by their dependence on a billion other people.

He talks a fair bit about Silicon Valley dudebros, and it makes me wonder if anyone has attempted to correlate political views with code quality. Of course, you can despise people and write good code… indeed, development is an excellent field for people who hate people!  But can you despise community and write good code? I’d suspect that a Randite can only thrive as a lone hacker, or as undisputed tech god. It’s hard to see how a person who doesn’t respect the community can re-use code, or write good check-in comments (or comments designed to help other people at all), or worry about maintainability, or create a user-friendly UI, or write a really flexible API, or even fix bugs filed from outside Dev.  To do all those things well requires empathy– the ability to see things from another point of view, to value other people’s work and time, to realize that not all users of your product are fellow devs.


An alert reader asked what, as a software developer, I thought devs should learn from the healthcare.gov fiasco.  It’s a good question!  I don’t know what systems are involved and of course have no insider information, but I think some general lessons are obvious anyway.

1. Program defensively. According to early reporting, the front end and back end were developed by different firms which barely talked to each other.  This is a recipe for a clusterfuck, but it could have been mitigated by error handling.

The thing is, programmers are optimists.  We think things should work, and our natural level of testing is “I ran it one time and it worked.”  Adding error checking uglifies the code, and makes it several times more complicated.  (And the error checks themselves have to be tested!)

But it’s all the more necessary in a complex situation where you’re relying on far-flung components.  You must assume they will fail, and take reasonable steps to recover.

An example: I managed– with difficulty– to create an account on healthcare.gov in early October.  And for three weeks I was unable to log in.  Either nothing would happen, or I’d get a 404 page.  The front end just didn’t know what to do if the back end refused a login.

Oops, I’m anthropomorphizing again, as devs do.  More accurately: some idiot dev wrote the login code, which presumably involved  some back end code, and assumed it would work.  He failed to provide a useful error message when it didn’t.

Now, it’s a tricky problem to even know what to do in case something fails!  The user doesn’t care about your internal error.  Still, there are things to do:

  • Tell the user in general terms that something went wrong– even if it’s as simple as “The system is too busy right now”.  Be accurate: if the problem is that an external call timed out, don’t say that the login failed.
  • Tell them what to do: nothing?  try again?  call someone?
  • If you’re relying on an external call, can you retry it in another thread, and provide feedback to the user that you’re retrying it?
  • Consider providing that internal error number, so it can be given to customer service and ultimately to the developers.  Devs hate to hear “It doesn’t work”, because there’s nothing they can do with that.  But it’s their own fault if they didn’t provide an error message that pinpoints exactly what is failing.
  • In a high-volume system like healthcare.gov, there will be too many errors to look at individually.  So they should be logged, so that general patterns emerge.
  • Modern databases have safeguards against bad data, but it usually takes extra work on someone’s part to set them up.  E.g. a failed transaction may have to be rolled back, or tools need to be provided to find bad data.
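A sketch of what defensive login handling along these lines might look like. The back-end call, the messages, and the error code are all hypothetical:

```python
import logging
import time

log = logging.getLogger("login")

def login(username, backend_call, retries=3, delay=1.0):
    """Wrap a (hypothetical) back-end call, retrying timeouts and
    translating failures into messages a user can act on, not a bare 404."""
    for attempt in range(1, retries + 1):
        try:
            return {"ok": True, "session": backend_call(username)}
        except PermissionError:
            # A refusal is not a timeout: be accurate about which one failed.
            return {"ok": False,
                    "message": "Login refused. Check your password."}
        except TimeoutError:
            log.warning("login timeout for %s, attempt %d", username, attempt)
            time.sleep(delay)
    return {"ok": False,
            "message": "The system is too busy right now. Please try again later.",
            "error_code": "LOGIN-TIMEOUT"}   # something customer service can quote
```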

2. Stress-test.  Programmers hate these too, partly because we assume that once the code is written we’re done, and partly because they’re just a huge hassle to set up.  Plus they generate gnarly errors that are hard to reproduce and thus hard to fix.

But healthcare.gov pretty obviously wasn’t ready for large-scale usage on October 1… which means that the devs didn’t plan for that usage.
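Even a crude load test would have surfaced this. A toy harness (the handler here is a stand-in for a real endpoint):

```python
from concurrent.futures import ThreadPoolExecutor

def hammer(handler, n_requests=1000, workers=50):
    """Fire many concurrent requests at a handler and count the failures."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handler, range(n_requests)))
    return sum(1 for ok in results if not ok)

def flaky_handler(i):
    return i % 100 != 0   # deterministically fails 1% of the time
```

A real harness would also measure latency under load and run against the full stack, back end included; that is the part that is a huge hassle to set up.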

3. Provide tools to fix errors.  I called a human being later on– apparently human beings are cheap, I didn’t even have to wait long.  I explained the problem, and the rep (a very nice woman) said that they were using the same tools as the public– the same tools that weren’t working– so she couldn’t fix the problem.  D’oh.

Frederick Brooks, 38 years ago in The Mythical Man-Month, answered the common question of why huge enterprise systems take so long and so many people to write when a couple of hackers in a garage can whip up something in a week.  Partly it’s scale– the garage project is much smaller.  But there are two factors that each add (at least) a tripling of complexity:

  • Generalizing, testing, and documenting– what turns a program that runs more or less reliably for the hackers who made it into a product that millions of people can use.
  • A system composed of multiple parts requires a huge effort to coordinate and error-check the interactions.

Part of that additional work is to make tools to facilitate smooth operation and fix errors.  It’s pretty sad if the site really has no way of fixing all the database cruft that’s been generated by a month of failed operations.  There need to be tools apart from the main program to go in and clean things up.

(I was actually able to log in today!  Amazing!  So they are fixing things, albeit agonizingly slowly.  We’ll see if I can actually do anything once logged in…)

4. I hope this next lesson isn’t needed, but reports that huge teams are being shipped in from Google worry me: Adding people to a late project makes it later. This is Brooks again.  Managers are always tempted to add in the Mongolian hordes to finish a system that’s late and breaking.  It’s almost always a disaster.

The problem is that the hordes need time to understand what they’re looking at, and that steals time from the only people who presently understand it well enough to fix it.  With a complex system, it would be generous to assume that an outsider can come up to speed within a month, and in that time they’ll be pestering people with stupid questions and generally wasting the devs’ time.

As a qualifier, I’ve participated in successful rescue operations, on a much smaller scale.  Sometimes you do need an outside set of eyes.  On the other hand, those rescues involved finding that the existing code base, or part of it, was an unrecoverable disaster and rewriting it.  And that takes time, not something that the media or a congressional committee is going to understand.

Slate has an annoying story on “Why Second Life failed,” which presupposes that SL failed.  I don’t think it did— except compared to the hype.  The authors compare it to Facebook— well, jeez, almost every Internet app is a failure compared to Facebook.

They also annoyingly illustrate the story with a video that showcases butt-ugly SL graphics from about 2006.  SL avatars have improved:

Slate's example (left); today's avatars (right; from an ad)

What’s undeniable is that SL hasn’t developed as Linden Lab expected it to.  Right, because LL is pretty clueless.  They clearly expected it to either be a social network, or a venue for business meetings.  But it was never any good as a social network, and that was clear years ago when all we had was texting.  All you need to network is text chat.  The SL interface— avatars, virtual spaces, logging into a dedicated app— just gets in the way.  You can get completely up to date on Facebook in the time it takes to log in to SL.  As for teleconferencing, this is a pretty niche market, but it’s perfectly well served by a webcam.  Honestly a phone call will do— I don’t need to see the pasty faces of my coworkers in another state or country.  If I do want to see them I want to see them, not an avatar that doesn’t show their gestures or facial expressions.

Also, their basic world model is broken.  I’ve spent many hours building maps for video games, and map designers are strongly encouraged to optimize rendering— mostly by limiting textures and line-of-sight visibility.  SL has to render a huge sprawling world, and as a result it’s slow and you can’t fit more than forty people in any one region.
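One back-of-the-envelope way to see why crowded regions choke (this is an illustration, not Linden Lab’s actual protocol or numbers): every client has to receive position updates about every other avatar, so the work grows with the square of the crowd, not linearly.

```python
def region_update_cost(n_avatars, updates_per_sec=10):
    # Each of n clients must hear about the other n-1 avatars,
    # at some update rate (10/sec is an assumed figure).
    return n_avatars * (n_avatars - 1) * updates_per_sec


print(region_update_cost(10))  # 900 updates/sec
print(region_update_cost(40))  # 15600 updates/sec
```

Quadrupling the crowd from 10 to 40 multiplies the update traffic by more than seventeen, which is the kind of scaling that produces hard caps like “forty people per region.”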

LL’s main way to make money is to rent virtual land.  But the result is that, well, there’s way too much land in SL.  People build things and never use them, so people new to SL are confronted with a huge but desolate vista composed mostly of crap.

The article is trying to make a point that an app has to fulfill a “job”— i.e. meet some need, even if it’s a new kind of need.  But it fails to actually apply this test to SL.  SL does meet needs, just not those that LL thought it would.

  • It’s a great 3-D modelling program— the easiest one I’m aware of, which is why I recommend it in the PCK.  It’s fun to build things and you can show them off easily.  Of course this is going to be a niche market.
  • I know a lot of people who like to shop and decorate, and they support a surprisingly large marketplace for people who like to create things.  A better comparison is not to Facebook but to The Sims.
  • It’s great for roleplaying— with minimal work you can create your own RPG populated by actual people rather than NPCs.  (LL has never quite known how to build this market as it includes a heavy dose of sex.)

What these things have in common is that they’re less like social networking and more like games.  From that perspective, what LL should do, I think, is allow modding the engine, in the way that Valve or Bethesda do.  They’ve already open-sourced the viewer; they should open-source the engine too.  Then people could create engines that support NPCs, or combat, or puzzle games, or really expansive explorable environments, or which improve avatar modelling and control.

Or to put it another way: people love 3-D virtual environments!  It’s a multi-billion-dollar industry!  But the LL (or Snow Crash) vision of fitting them all into one big multiverse doesn’t make much sense.
