Participatory debugging

Applications increasingly rely on components and services that can turn up in unexpected contexts. I wasn't expecting to see a Mac OS error code on my Windows box, but in a mix-and-match world that happens. Reporting the provenance of error codes would be a helpful best practice.

Enabling users to visualize configuration change would be even more helpful. The default path remembered in that dialog box is part of the application's configuration. When the problem arose, I asked myself the obvious question: "What changed?" But there was no way to compare the state of the application before and after. [Full story at InfoWorld.com]
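
One way to make that before-and-after comparison possible: have the application checkpoint its configuration and diff the checkpoints on demand. Here's a minimal sketch in Python; the snapshot/diff helpers, the flat-dictionary config, and the config-history directory are my assumptions for illustration, not any application's actual mechanism.

```python
import json
from pathlib import Path

# Hypothetical: a directory where the app checkpoints its configuration.
SNAP_DIR = Path("config-history")

def snapshot(config: dict, label: str) -> None:
    """Record a labeled copy of the current configuration."""
    SNAP_DIR.mkdir(exist_ok=True)
    path = SNAP_DIR / f"{label}.json"
    path.write_text(json.dumps(config, indent=2, sort_keys=True))

def diff(old_label: str, new_label: str) -> dict:
    """Report every key whose value differs between two snapshots."""
    old = json.loads((SNAP_DIR / f"{old_label}.json").read_text())
    new = json.loads((SNAP_DIR / f"{new_label}.json").read_text())
    return {k: (old.get(k), new.get(k))
            for k in old.keys() | new.keys()
            if old.get(k) != new.get(k)}

# Usage: checkpoint around a suspect event, then ask "what changed?"
snapshot({"default_path": "C:/Users/jon/Documents"}, "before")
snapshot({"default_path": "Z:/unexpected/share"}, "after")
print(diff("before", "after"))
# {'default_path': ('C:/Users/jon/Documents', 'Z:/unexpected/share')}
```

Snapshots labeled by event ("before update", "after update") would give users exactly the comparison I was missing.
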
This column zeroes in on two ideas for making it easier to help people figure out what's gone wrong with their software: provenance ("Where did the error come from?") and configuration ("What changed?").
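
On the provenance side, the fix could be as simple as an error type that carries its code space along with its numeric code. A sketch, with the ProvenancedError name and fields invented for illustration:

```python
from dataclasses import dataclass

# Invented for illustration: an error that names the code space its
# numeric code belongs to, so a Mac OS code surfacing on Windows says so.
@dataclass
class ProvenancedError(Exception):
    code: int
    code_space: str   # e.g. "Mac OS", "Win32", "libfoo 2.3"
    component: str    # which component raised it

    def __str__(self) -> str:
        return (f"error {self.code} ({self.code_space}) "
                f"raised by {self.component}")

try:
    # -43 is fnfErr, the classic Mac OS "file not found" code
    raise ProvenancedError(-43, "Mac OS", "document importer")
except ProvenancedError as err:
    print(err)  # error -43 (Mac OS) raised by document importer
```

An error that announces "this is a Mac OS code" would have saved the head-scratching on my Windows box.
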

The length limit on the print column precluded a third idea: communication. In the case described in this column, I wound up installing a specially instrumented version of a program that captured a trace, then gathering up that log file and transmitting it to the developer. Seems a bit arcane in this day and age, doesn't it? The following scenario should be entirely doable:

1. When the problem occurs, the application offers to start a debugging session.
2. If I accept, an instrumented version of the affected component downloads and installs itself.
3. I reproduce the problem; the captured trace is gathered and transmitted automatically.
4. I'm kept in the loop, and can follow what's done with the data I contributed.

That last step might be the most crucial. Where's the incentive, after all, to play the current version of this game -- that is, to agree to send your core dump to the vendor? Your report just vanishes into a black hole. You never find out what use was or wasn't made of the data you contributed, and there's no ongoing involvement. If the game were instead structured according to the architecture of participation, a lot of folks would get a kick out of helping to improve their software, irrespective of whether that software is commercial or open source.
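
What would closing the loop look like in code? A sketch of the client side, with an entirely hypothetical endpoint and response format; the payoff is the returned tracking URL -- a feed or status page that keeps the contributor involved.

```python
import json
import urllib.request

# Entirely hypothetical endpoint and response shape -- a sketch of the
# client side of the scenario above, not any vendor's actual service.
REPORT_URL = "https://example.com/debug/reports"

def submit_trace(trace_path: str, note: str) -> str:
    """Upload a captured trace; return a tracking URL (say, a feed)
    the contributor can follow -- the ongoing-involvement step."""
    with open(trace_path, encoding="utf-8") as f:
        body = json.dumps({"note": note, "trace": f.read()}).encode("utf-8")
    req = urllib.request.Request(
        REPORT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["tracking_url"]
```

Publish that tracking URL as a feed and the black hole becomes a conversation.
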


Former URL: http://weblog.infoworld.com/udell/2006/08/23.html#a1512