software


Slate has an annoying story on “Why Second Life failed,” which presupposes that SL failed.  I don’t think it did— except compared to the hype.  The authors compare it to Facebook— well, jeez, almost every Internet app is a failure compared to Facebook.

They also annoyingly illustrate the story with a video that showcases butt-ugly SL graphics from about 2006.  SL avatars have improved:

Slate's example (left); today's avatars (right; from an ad)

What’s undeniable is that SL hasn’t developed as Linden Labs expected it to.  Right, because LL is pretty clueless.  They clearly expected it to either be a social network, or a venue for business meetings.  But it was never any good as a social network, and that was clear years ago when all we had was texting.  All you need to network is text chat.  The SL interface— avatars, virtual spaces, logging into a dedicated app— just gets in the way.   You can get completely up to date on Facebook in the time it takes to log in to SL.  As for teleconferencing, this is a pretty niche market, but it’s perfectly well served by a webcam.  Honestly a phone call will do— I don’t need to see the pasty faces of my coworkers in another state or country.  If I do want to see them I want to see them, not an avatar that doesn’t show their gestures or facial expressions.

Also, their basic world model is broken.  I’ve spent many hours building maps for video games, and map designers are strongly encouraged to optimize rendering— mostly by limiting textures and line-of-sight visibility.  SL has to render a huge sprawling world, and as a result it’s slow and you can’t fit more than forty people in any one region.

LL’s main way to make money is to rent virtual land.  But the result is that, well, there’s way too much land in SL.  People build things and never use them, so people new to SL are confronted with a huge but desolate vista composed mostly of crap.

The article is trying to make a point that an app has to fulfill a “job”— i.e. meet some need, even if it’s a new kind of need.  But it fails to actually apply this test to SL.  SL does meet needs, just not those that LL thought it would.

  • It’s a great 3-D modelling program— the easiest one I’m aware of, which is why I recommend it in the PCK.  It’s fun to build things and you can show them off easily.  Of course this is going to be a niche market.
  • I know a lot of people who like to shop and decorate, and they support a surprisingly large marketplace for people who like to create things.  A better comparison is not to Facebook but to The Sims.
  • It’s great for roleplaying— with minimal work you can create your own RPG populated by actual people rather than NPCs.  (LL has never quite known how to build this market as it includes a heavy dose of sex.)

What these things have in common is that they’re less like social networking and more like games.  From that perspective, what LL should do, I think, is allow modding the engine, in the way that Valve or Bethesda do.  They’ve already open-sourced the viewer; they should open-source the engine too.  Then people could create engines that support NPCs, or combat, or puzzle games, or really expansive explorable environments, or which improve avatar modelling and control.

Or to put it another way: people love 3-D virtual environments!  It’s a multi-billion-dollar industry!  But the LL (or Snow Crash) vision of fitting them all into one big metaverse doesn’t make much sense.


So, the Uyseʔ grammar is up.  While doing this, I tried to be all 21st century and produce the HTML directly from a Word docx file.

It was awfully pretty… also awfully bloated, at 1.1 MB.  I redid it the creaky old 20th-century way; it’s about 200 KB.  Word’s HTML is full of crap: references to every font on your system, plus lots of information that’s presumably there in case you want to re-import it as a Word file.  Why, after all these years, isn’t there a “slim export” feature?

I still do most of my writing in Mac Word 5.1, dated 1992.  Partly this is because I’m so used to it I don’t have to think about most functions, but also because it’s blindingly fast.  Some of the newer Words were unusably slow.

I got Mac Word 2008 so I could write the LCK using Unicode.  On the whole it’s really good– its PDF output is good enough that I didn’t need Acrobat; its indexing and cross-reference functions were a great time-saver; it can read Illustrator files; and zooming in is very valuable.

At the same time, it’s a few steps forward, a few steps back.  The fact that it crashes occasionally is worrying.  It’s also slow, especially once you start using a lot of those advanced features.  And it just has a number of perverse features:

  • an “element gallery” bar you can’t turn off (my eyes aren’t what they used to be, I want to see a whole page as big as I can get it)
  • no overstrike mode that I could find (Word 5.1 had this)
  • Unusual Unicode characters appear in some random font; there seems to be no way to say “always use Gentium”, much less “if I insert a Gentium character ‘cos the default font doesn’t support it, that doesn’t mean I’m switching to Gentium throughout”
  • No options to nudge a picture; in general picture handling is clumsy (e.g. just getting one centered is tricky since when the cursor is in a picture it replaces the entire formatting pane, including the “center” control)
  • Word can’t do incremental saves any more, so saving a large document is slow

There’s a lesson here for software developers, but it’s probably so narrow that it doesn’t apply to much beyond Word.  Where do you go with a word processor?  Maybe every few years someone comes up with a killer feature you really want to add, but that’s not enough to get the masses to shell out $125 every three years.  So they have to keep redoing it anyway, changing the interface, adding stuff most people don’t need.  And along the way stuff that used to work just fine gets lost or broken.

(This probably doesn’t apply to most systems because there are always too many real features to add.  Though the point is similar to Joel Spolsky’s advice not to rewrite all your code as you’re dying to.)

Since ideas like cloud computing are taking center stage, are arguments against open source losing ground?

Also is the current move toward the cloud a good thing for software or not?

—Joe Baker

My last job was in a SaaS company, so I’m familiar with some of the advantages.  It’s great for the seller— you get ongoing revenue instead of single sales; you can easily update all your customers— and it has advantages for enterprise customers: easily deployable, centrally manageable, presumably more reliable.

I think it makes the most sense for side apps— things like source control or survey software that you want to be widely available, but aren’t where most people spend most of their working hours.  For main apps, local teams, not the head office, should be able to choose the best tools.  If I’m spending most of my day using a tool, my team will make a better choice than some clueless IT autocrat.

I’m dubious about cloud computing in general, because there’s all this power in the desktop computer— why avoid it?  It mostly seems like an end run around Microsoft.  But if it works, it won’t produce the Open Source Utopia; it’ll produce a software world dominated by Google rather than Microsoft.

Also see Joel Spolsky’s delicious takedown of the architecture astronauts, particularly Microsoft’s version of cloud computing.

You mentioned Steam, which is an interesting model… it has cloud computing elements, in that your game permissions are stored externally (which makes it easy to change computers— a great boon as I’ve done it twice in the last year), yet the apps it manages are local desktop apps (which makes a lot more sense for games).  That’s a good balance, taking the advantages of cloud computing but not forcing it to do what it’s not good at.

I thought I’d write about the design process at SPSS, since it seems to be somewhat unusual in software shops.

Generally, as Alan Cooper has lamented, programmers design the products.  At SPSS there was a separate department responsible for design.  “Design” here means functionality plus UI.  They would write design documents specifying every screen and dialog, every error message, every command and what it did, every statistical procedure and how it worked.  (As preparation for this, they would decide on the functionality required, talk to users, and check out the competition.)

These would be reviewed in meetings, normally attended by all the stakeholders: sales, marketing, development, statistics, QA, documentation, tech support.  The designer led the meeting, and the best ones were good at keeping it on track.  Meetings lasted no more than two hours (after that, people get bliffy); if it took longer, another one was scheduled.

Anyone could criticize the design, and did.  However, there was a rule that the meeting shouldn’t fix designs; the designers should.  After an issue was raised, the designers would go back and rework that part of the design.  Trying to design a feature in a meeting produces endless discussion and wastes most attendees’ time.

Most of the designers were smart and of course were experts in the product line, and the process normally worked well.  The weak spot was anyone who wasn’t really paying attention– people who hadn’t read the document ahead of time, or had little to say, or for whatever reason hadn’t bought into the product.  On the other hand, they didn’t have much of a leg to stand on if they didn’t like the product once it came out.

The designer continued to work closely with the development team, keeping the design up to date, helping to solve problems as they came up, doing usability tests, and participating in bug triage sessions.

Why would you do this?  Well, there are several advantages.

  • It’s a huge benefit to QA and doc writers, who know what the product is supposed to do long before they get any releases.
  • Designers can make sure different products are consistent, and as specialists they can produce better, easier UIs than programmers (who like arcane incantations). 
  • It formalizes the functionality– which helps keep your manager or the CEO from rushing in a month before ship and trying to slip in a new feature.
  • A good design document won’t just say what the program does, but why.   This helps avoid mistakes later, and educates everyone about what the product means to the end user. 

Does it add time to the development process?  It does take time, though the bulk of the work can be done while the dev team is finishing the last release, and designless products (as Frederick Brooks explained decades ago) end up late anyway.

For a tiny software shop, it may seem like a luxury– though I worked on a product at SPSS for years that had 4 to 6 programmers, and there was enough design work for a full-time designer.

A much larger firm may have a much more elaborate design process; an advantage of the SPSS system is that it’s really not that complex.  The essential deliverable was a single design document; sometimes there’d be a functional spec first.  When we had time, the developers would produce an internal design doc as well.  No one was drowning in paper.

Microsoft has a very strange pattern of alternately creating great and horrible things for programmers.  The Windows SDK was difficult; MFC was great; COM was a nightmare; C# is a dream; LINQ has gone back to the Dark Side.

Here’s my first working code, which took forever and probably could be done much better… with two books plus the web as reference, it was hard to do even this much.

// Needs System.Data, System.Data.SqlClient, System.Linq, and a reference
// to System.Data.DataSetExtensions.
SqlConnection conn = new SqlConnection(
   @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\TestDb.mdf;Integrated Security=True;User Instance=True");
// Pull the whole People table into an in-memory DataSet.
SqlCommand comm = new SqlCommand("SELECT * FROM People", conn);
SqlDataAdapter adapter = new SqlDataAdapter(comm);
DataSet ds = new DataSet();
adapter.Fill(ds);   // Fill() opens and closes the connection itself

// LINQ to DataSet: filter the in-memory table.  Field() needs an explicit
// type parameter (the Level column is an int here).
var results = from t in ds.Tables[0].AsEnumerable()
              where !t.IsNull("Level")
                    && t.Field<int>("Level") > 6
              select t;

foreach (DataRow row in results) {
    this.ResultsLabel.Text +=
        string.Format("{0}: {1}\n", row["PersonID"], row["Name"]);
}

Ugh.  I appreciate the idea of adding SQL support directly to the language, but this really doesn’t seem easier than, say, using SQL, something we’ve been able to do for years.  Compare the equivalent SQL query, in fact:

SELECT PersonID, Name FROM People
WHERE Level IS NOT NULL AND Level > 6

Microsoft has gone to a whole lot of work to make something that kind of looks like SQL but isn’t. The compiler is much smarter in some ways (e.g. it will figure out many data types from context) but not enough– it seems entirely arbitrary which objects can be used in a LINQ query and which can’t. It’s particularly annoying since Visual Studio is smart enough to make it easy to create a database for you, and it will create some accessor objects for you… and then I have to mess around with connection strings and string column names in the code.  (Come to think of it, I’m making an actual SQL query in order to make a fake SQL query.  Why is this a good thing?)
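
If I ever get those designer-generated accessors to cooperate, the typed route should look more like this.  This is only a sketch, assuming a designer-generated TestDbDataContext with a People table and a nullable int Level column; the names are my guesses, not anything from the actual project.

// LINQ to SQL sketch (needs System.Linq and System.Data.Linq): the data
// context and entity class are assumed to come from the Visual Studio
// designer, so there are no connection strings or string column names.
using (var db = new TestDbDataContext())
{
    var results = from p in db.People
                  where p.Level != null && p.Level > 6
                  select new { p.PersonID, p.Name };

    foreach (var r in results)
        this.ResultsLabel.Text +=
            string.Format("{0}: {1}\n", r.PersonID, r.Name);
}

That’s closer to what I’d want, though it still isn’t obviously easier than just writing the SQL.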

I’ll get better at this soon; I’m just picking it up, so I’m in the ranting stage. But I wish the Light Side had been allowed to work on this. So far it looks like what Joel calls “Fire and motion”: Microsoft messing with things in hopes that they can move forward while everyone else is forced to catch up. If they’d made database queries easier, that would be cool, but so far I’d rather write the SQL.

I had some plans for a massive web page on software and software management, but it occurred to me that it’d be better to blog about it instead, in dribs and drabs. So here we go.  (I should mention that though I’ve worked in the industry for 25 years, it’s all been in software shops.  I recently interviewed with a huge megacorporation that can afford to have 150 people paid to do nothing but write architecture documents; that’s another world and I can’t help you there.)

Today’s question is, why can’t programmers write documentation?  Are they just lazy, or what? 

The brief answer is, it doesn’t match how they think.  Programmers are good at dividing a huge problem into little pieces, chunklets simple enough to be understood by a pedantic idiot, that is, a computer.  It already takes a special mind to be able to think in terms of the little pieces and to master the petty, arcane way the idiot insists on being addressed.  It’s rare that the developer can talk to human beings as well.

If you corral the programmers into a room and force them to address the lack of internal documentation– something they’ll readily admit is a Bad Thing– then they’ll come up with a way to make documentation look like code, or come out of code.  They’ll look at something like this method signature

public void Deposit(decimal amount) 

and produce something like this:

Method name: Deposit
Return value: void
Description: Make a deposit
Argument list:
     amount    decimal    (input)    amount to deposit

In other words, their idea of documentation is to list out what’s obvious to any programmer reading the method signature.  This does nothing except create something else to maintain or, more likely, to get out of sync with the code, but to the programmer it has the advantage that he can write a program to generate the “documentation” automatically.
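
In C# this reflex usually takes the form of XML doc comments that just restate the signature.  A made-up but typical specimen (a doc generator will cheerfully turn this into help pages):

/// <summary>Make a deposit.</summary>
/// <param name="amount">Amount to deposit.</param>
public void Deposit(decimal amount)
{
}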

In a small software shop this sort of thing will be done for one or two code files and then quietly forgotten.

What do you do instead? 

If you really want internal design documents– and they are nice for training other programmers, for debugging, for sharing techniques– then you have to make time for them.  Developers don’t include design or documentation in the estimates they give, and since their estimates are low anyway, they’re usually skipped.  So include the documents as deliverables and make time for them in the project plan.  (Often there’s downtime at the beginning of the project anyway while the honchos are arguing over what to do, so that’s a good time for this.  This is also when the code is freshest in people’s minds.)

At my last job we had a wiki for design, ideas, documentation, whatever.  That works– at least, it works better than paper.  In this area the perfect is the enemy of the good: creating good paper documentation is difficult and time-consuming and people tend not to do it.  Slapping the best bits from an e-mail chain into the wiki, by contrast, takes only a few minutes.

The best place for documentation, though, is in the code.  Good programmers should explain what the major methods do, what the class is for, and any tricky procedures or formats, right in the code.  The code is where programmers look anyway, and since the comments are right there, they’re more likely to be maintained.  And it’s in the source control system and requires no extra tools.
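
For instance, here’s the kind of comment I actually want to see, reusing the Deposit example from above (the class and the rules it mentions are invented for illustration; needs using System):

// A single customer account.  Amounts are decimal to avoid floating-point
// rounding on money.
public class Account
{
    private decimal balance;   // running balance, in the account's own currency

    // Adds money to the account.  Deposits take effect immediately; daily
    // limits and holds are bank policy, enforced by the caller, not here.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount", "Deposits must be positive.");
        balance += amount;
    }
}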

(How do you get even that?  The surest way is to do code reviews, which I’ll talk about another time…)

Joel Spolsky has redesigned his site, highlighting his best articles, divided by job title (developer, manager, UI designer, CEO, etc.).  It’s a good excuse to go read or re-read them.

http://www.joelonsoftware.com/index.html

Joel is very smart, though he hides this with an easy, humorous style, and his advice is always sensible– generally tossing cold water on panaceas and rigid systems.  I think he’s particularly good on management of software projects, but the site is like candy– you can just keep reading good, thought-provoking articles all day.
