A couple of articles on the "why software sucks" theme came my way recently. First, this one in Wired:
Buggy software is creeping into systems where failure can't be dismissed with curses and a sigh. Consider: Darpa is using wearable computers designed to beam tactical information to the "data visors" of combat troops. The devices run Windows 2000, an OS so flawed that its bug-cleansing "service packs" run to 100 Mbytes.
I'm no Microsoft apologist, but that's hardly fair. Win2K is, in my experience, the most stable NT release since 3.51. The fact that its components are coarse-grained, and that its service packs are therefore large, says nothing in itself about quality. But in any event, here's the punch line:
There's always the American way: Unleash the lawyers. At the moment, shrink-wrap licenses and clickthrough agreements shield software makers from damage claims - even if they broke it, you bought it. Just as the legal fallout from exploding Pintos shamed Detroit, exposing software to class-action lawsuits might induce Silicon Valley to code more cautiously.
The same theme appears in a Technology Review story by Charles C. Mann. (Incidentally, the Tech Review site asks $4.50 for the piece, but MSNBC doesn't.)
The real problems lie in software’s basic design, according to R. A. Downes of Radsoft, a software consulting firm. Or rather, its lack of design.
Really? I've seen intensive design lead to great success but also spectacular failure. Here, again, comes the appeal to the lawyers:
The lawsuits will eventually come. And when the costs of litigation go up enough, companies will be motivated to bulletproof their code.
I have no doubt that we need, and will get, more accountability, and that litigation will play a role. Likewise, I have no doubt that Bruce Schneier is right when he says that insurance will play a growing role in the security business.
For better and worse, though, we're entering the age of distributed software services. Accountability is becoming more diffuse. If we've done a bad job with the software that we fully control, how do we deal with the emergent behavior of software that we don't fully control -- that's just a piece of a larger puzzle? It's not wrong to demand as much accountability as we can get. We call software interfaces "contracts" and they might literally come to mean that in a legal sense. But lawyers can't solve the whole problem. We really are going to need some new tools and techniques for managing distributed complexity. The folks who are approaching this problem from the biological perspective are, I suspect, on the right track.
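To make the "interfaces as contracts" idea concrete: one way to treat an interface as an explicit, checkable agreement is design by contract, where preconditions and postconditions are verified on every call. This is a minimal sketch in Python; the names here (`contract`, `withdraw`) are hypothetical illustrations, not anything from the articles quoted above.

```python
from functools import wraps

def contract(precondition, postcondition):
    """Wrap a function so its interface contract is checked at runtime."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            # Caller's obligation: the precondition must hold on entry.
            assert precondition(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            # Implementer's obligation: the postcondition must hold on exit.
            assert postcondition(result), "postcondition violated"
            return result
        return wrapper
    return decorator

@contract(
    precondition=lambda balance, amount: 0 < amount <= balance,
    postcondition=lambda new_balance: new_balance >= 0,
)
def withdraw(balance, amount):
    """Return the balance remaining after withdrawing amount."""
    return balance - amount

print(withdraw(100, 30))  # contract holds: prints 70
```

A violated precondition fails loudly at the caller's side rather than propagating bad state, which is the sense in which the interface behaves like a contract with identifiable responsibility on each side.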
Former URL: http://weblog.infoworld.com/udell/2002/08/06.html#a370