There’s an interesting discussion here on waterfall project management– answering the question “Why did it fall out of style?”


Good thing I didn’t have to find a picture of an agile

A couple of people make the point that “waterfall” was invented as a straw man by Winston Royce. He claimed that the main problem with the supposed method was that testing only occurred at the end of the process, so that redesign became prohibitively expensive.  This is absurd; it’s on the level of saying that Agile projects never think in advance about what they’re going to deliver.

The first answer on that page is basically party-line nonsense. Dude complains that “5 years, tens of millions of dollars, hundreds of people, not a single user, the code never ran once”, as if somehow we don’t do that anymore.

The best response to this is to go re-read Frederick Brooks’s The Mythical Man-Month, published back in 1975, a set of ruminations on why big projects go wrong, which everyone has read and no one ever takes to heart. Note that Brooks’s book is an autopsy of several huge 1960s IBM projects which finished, but didn’t go well. Nowhere does he say the problem was that testing was put off till the end; Brooks advocated a thoroughness of testing and scaffolding that few developers could match.

The basic idea of waterfall programming is that you write a requirements document, then a full system specification, then an internal architecture, then code. I don’t think anyone has ever believed that these stages are hermetically sealed.

And when it’s done well, it works! We used to actually do all this when I worked at SPSS. There was a design department responsible for the specifications. They worked out all the functionality, every statistical function, every dialog box, and wrote it all down. Because they were a dedicated department, they could be experts in the problem domain and in user interface, and they could work pretty fast, and get going before the developers were fully staffed. The design document wasn’t just for the coders; it was for sales, management, documentation, and QA. All these people knew what was in the system, and could get going on their jobs, before the program was written. There were big meetings to review the design document, so everyone had a chance to offer input (and no one could say they were surprised at the design).  More on the SPSS process here.

So, waterfall works just fine… if you do it. The problems people perceived with the process are not due to something wrong with the model of design/code/test.  They’re generally due to not following the waterfall method, i.e.

  • not actually doing an external design
  • not actually doing an internal design
  • not even knowing the features when coding starts
  • not actually planning for testing
  • coders generally being pretty bad UI designers (see: Alan Cooper)

Now, maybe most workplaces and devs are hopeless, and can’t do the waterfall process, so they should do something else.  But it’s not really the case that the process “didn’t work”.  It works if you let it work.

As a cranky middle-aged guy, I’d also answer the original question (“why did waterfall go out of style”) with “Fashion.”  Every ten years, someone comes around with a Totally New Programming Style. In the ’90s it was object-oriented programming; in the ’00s it was Agile. If you’re in your 20s or 30s, you can get very excited over the latest revolution, and you’re eager to make it work.  If you’re in your 40s or 50s, you’ve seen several of these revolutions, and they never live up to their original hype.

This isn’t to say that there aren’t advances! I still think OOP was a pretty good advance; it made people think about design and re-use a lot more; plus, if done well, it had the huge advantage that any given routine was small, and thus more likely to be correct. (On the other hand, done badly, it produces huge swaths of nearly-useless layers and increases complexity.)

I haven’t actually worked on an Agile project (I write books now), so I can’t say if it works out well in practice.  From what I’ve heard, it has at least two fantastic ideas:

  • Keep the code buildable.  As developers are constantly breaking things, this seems like a great discipline to me.
  • Keep close (daily) contact with everyone. I’ve had too much experience with developers who were assigned a task and it was only discovered 2 or 6 months later that they couldn’t get it done, so the early warning also sounds great.

But the insistence on short sprints of work sparks my skepticism. There really are things that take a year to program. Yes, you can and should break them down into pieces; but those pieces will probably not be something deliverable.

I’ll give an example: one company I worked for did insurance rating software. This involved an incredible number of tiny variations. You’d have general rules for a line of insurance; then state variations; then variations over time; then variations between companies.  Our original method was to write a basic set of code, then write snippets of code that changed for each state/version/company. Writing all this took a lot of time.
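To show the shape of the problem, here’s a rough sketch in Python of the “snippet of code per variation” approach; the rule names, rates, and surcharges are made up for the example, but the point is that every state/date/company combination ends up as its own near-duplicate piece of code:

    # Hypothetical illustration of the "snippet per variation" approach.
    # Rule names, rates, and surcharges are invented for the example.

    def base_auto_premium(vehicle_value, driver_age):
        """Countrywide base rule for one line of insurance."""
        rate = 0.012
        if driver_age < 25:
            rate *= 1.40                  # youth surcharge
        return vehicle_value * rate

    def auto_premium_tx_2003_acme(vehicle_value, driver_age):
        """One state, one filing date, one company: same rule, different numbers."""
        rate = 0.013                      # state-filed rate differs
        if driver_age < 25:
            rate *= 1.35                  # different youth surcharge
        return max(vehicle_value * rate, 250.0)   # company minimum premium

    # ...and another function like this for every state x date x company.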

Eventually I decided that what we needed was a specialized rating language– basically something that business analysts could handle directly, rather than coders.  It worked very well; 15 years after I left, the company was still using a version of my system.
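Just to give the flavor of the idea (this is a toy sketch, not the actual language; the syntax, keys, and numbers are all invented), the core trick is that rules become data keyed by line/state/company, and a small interpreter picks the most specific one that applies:

    # Toy sketch of a declarative rating table with most-specific-wins lookup.
    # Everything here is invented for illustration.

    RULES = {
        ("auto", None, None):   "value * 0.012",
        ("auto", "TX", None):   "value * 0.013",
        ("auto", "TX", "Acme"): "max(value * 0.013, 250)",
    }

    def find_rule(line, state, company):
        """Fall back from the most specific key to the most general."""
        for key in ((line, state, company), (line, state, None), (line, None, None)):
            if key in RULES:
                return RULES[key]
        raise KeyError(f"no rule for {line!r}")

    def rate(line, state, company, **vars):
        # eval() stands in for a real parser/interpreter, the part that
        # actually takes months to write.
        return eval(find_rule(line, state, company), {"max": max}, vars)

    print(rate("auto", "TX", "Acme", value=20000))  # 260.0
    print(rate("auto", "CA", None, value=20000))    # falls back to the base rule: 240.0

The payoff isn’t the dozen lines above; it’s that once the interpreter exists, the thousands of variations become data an analyst can maintain rather than code a programmer has to write.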

The point, though: writing the language interpreter, and the proof-of-concept rules for one line of insurance, took time. Half a year, at least.  There was no way to divide it into two-week sprints with customer deliverables, and daily meetings would have just gotten in my way.

I can understand that doing things this way is a risk– you have to trust the programmer. On the other hand, I’d also point out that I took that project on after about six years at the company, and it was my third architecture for the system. (Brooks rightly points out how second projects go astray, as programmers do everything they ever dreamed of doing, and the project gets out of hand.)
