Tangled in the Threads
Jon Udell, August 30, 2000
Revisiting dhttp
Derek Robinson shares his clever in-situ HTML editor, and reminds us of the power of lightweight, peer-to-peer Web technology
A few years ago, I became fascinated with the possibilities inherent in peer-to-peer HTTP networking. I built a prototype system, and wrote about it in several installments of my Web Project column in BYTE. Because of the magazine's demise, these never appeared online, but I did publish a paper on the system that I called dhttp (for "distributed HTTP"). Here were the properties of dhttp that I found compelling:
Lightweight. The first prototype was small enough to fit on a single floppy. And most of its bulk was the Perl interpreter.
Simple. To install, you just unzipped a handful of files into a directory. To uninstall, you deleted them.
100% script. I wasn't the first to discover that a basic HTTP server is a simple thing, well within the capability of modern scripting languages. More recently, Zope has demonstrated the power of an HTTP daemon made out of the same scripting language used to deliver Web services.
Symmetrical. HTTP services that are lightweight, simple, and fully scripted can become pervasive. The distinction between "big services out there in the cloud" and "little services here on my machine" starts to erode. Every machine can act simultaneously as a client and a server.
It was this last point -- the symmetry inherent in peer networking -- that thrilled me. It seemed to me that this had to change the world, though I wasn't sure exactly how that would play out. In the final chapter of my book I worked out more fully some of the intriguing possibilities of peer-to-peer HTTP networking: proxying, encryption, data replication. And I concluded as follows:
Like any powerful technology, this one's a double-edged sword. Wielded responsibly, it can enable all sorts of useful things. In the wrong hands, it can spell disaster. As with genetic engineering, there are two ways to respond to this dilemma:
Reject the technology. You might reasonably conclude that potential risks outweigh potential benefits. Peer-to-peer replication of code and data is inherently uncontrollable, therefore dangerous, therefore to be shunned.
Embrace the technology. You might also reasonably conclude that if peer-to-peer replication of code and data seems too simple and too powerful, then the correct response is to tap into the source of that simplicity and power, analyze the associated risks, and learn how to manage them.
Written in mid-1998, this was (if I do say so myself) a prescient observation. Two years later, Napster proved my point. The network really is the computer. The client/server mode of the original Web is only a degenerate form of the peer-to-peer mode that will characterize the next-generation Web. And as Napster is showing us, peer networking has disruptive effects.
A new use for dhttp?
Was dhttp ahead of its time? Perhaps. In any case, I hadn't thought much more about it until Napster brought peer networking into the mainstream. And then, this week, Derek Robinson dropped by my newsgroup to announce a really interesting dhttp-based project:
I'm a Perl novice, working on an 'in-situ' WYSIWYG-style HTML editor in JScript for IE5. No, it doesn't use the MS 'DHTML-Edit' component; yes, it's browser-specific but only uses innerHTML and the TextRange object + methods. (A version of 'TextRange' is included in W3C's DOM2 specification, while 'innerHTML' has been added to the latest Mozilla milestones, so the subset of the IE DOM it uses is as 'cross browser' as anything else out there these days -- i.e. not very!)
I've pushed JS Bookmarklets about as far as they can reasonably be taken towards on-the-fly/as-you-surf web-page editing; anything closer to a useful in-situ editor entails access to the host file system, which client-side JS prohibits.
It just needs to be able to write "<SCRIPT SRC='edit_page.js'>" into the head of (a copy of) the target page. Then the rest of what's needed for more-than-adequate HTML editing can be accomplished using client-side JS. The copy-paste can already be done with a bookmarklet, but only on local HTML files. I'm looking at writing a DHTTP plug-in app to accept the target page's 'location.href' URL from the link-bar bookmarklet, copy the page's HTML with the <SCRIPT SRC='edit_page.js'> patch, save the page to a local directory where the external JS file lives, then re-open the doctored doc in the same (or another) browser window.
The nicest feature is seeing your changes immediately redrawn in the original page. Note that if the selections get too big IE will hang, bummer! There are hints here for how to make the two apparently incompatible content-access schemes (TextRange vs. innerHTML) work together -- especially how to get rid of the spurious HTML tags that 'txt_range.htmlText' inserts in selections that go across elements -- which may be useful for anyone else wanting to try their hand at taking in-situ HTML editing 'beyond the TEXTAREA'.
Derek Robinson's in-situ HTML-editing bookmarklet
To install the bookmarklet in IE, do this:
Save the script to a file called, say, EditThisPage.html
Load that page.
Right-click the only text on the page -- the link labelled 'Edit' -- and select Add To Favorites. Name the bookmark something appropriate, such as EditThisPage.
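Derek's actual script isn't reproduced here, but the shape of what the steps above install looks roughly like this. A bookmarklet is just a javascript: URL, and EditThisPage.html merely wraps one in a link; the bare prompt() shown is a hypothetical stand-in for Derek's far richer editing logic.

```javascript
// Hypothetical minimal "Edit" bookmarklet, standing in for Derek's real one.
const bookmarklet =
  'javascript:(function(){' +
  'var el=document.body;' +  // the real editor targets the element under the cursor
  "var t=prompt('Edit HTML:',el.innerHTML);" +
  'if(t!=null)el.innerHTML=t;' +
  '})();';

// EditThisPage.html just wraps the javascript: URL in a link so it can be
// right-clicked and added to Favorites.
const editThisPageHtml =
  '<html><body><a href="' + bookmarklet + '">Edit</a></body></html>';
```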
Now let's see it in action. Here's a picture of this column, as I'm writing it, after clicking my EditThisPage bookmark.
1. Ready to edit a list element
Because the cursor sits on the first element of a list, that element is highlighted. Clicking the element loads it into the editing window, where I've changed "Lightweight" to "Small":
2. Performing the edit
Submitting the change rewrites the document (in-memory only, of course) like this:
3. Viewing the result
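Mechanically, steps 1 through 3 reduce to a small innerHTML round trip. The sketch below (function name mine) works against any object exposing innerHTML, which is exactly the IE DOM subset Derek relies on.

```javascript
// Steps 1-3 in miniature: read the clicked element's HTML into the editing
// window, apply the user's change, and write it back through innerHTML so
// the original page redraws immediately. In-memory only; nothing touches disk.
function editInSitu(el, transform) {
  const current = el.innerHTML;       // step 2: load into the editing window
  el.innerHTML = transform(current);  // step 3: rewrite and redraw
  return el.innerHTML;
}
```

In the column's example, transform would replace "Lightweight" with "Small", and the list element repaints the moment the change is submitted.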
Really cool idea, Derek! Thanks for sharing it with the newsgroup. This particular example is very much in the spirit of the first application that I wrote to demonstrate dhttp. It was a browser-based contact manager, and my original idea was really to leverage the standard modes of Web-style software development to create an application that road warriors could use equally well with or without a connection to the Net. It was only later that the peer-to-peer implications started to sink in. In a network of dhttp nodes, users could share data -- and services -- directly with one another. It wasn't so obvious, then, why that might matter. I think it's becoming a lot more obvious now.
Of course dhttp isn't the only, and I'm sure not the best, way to implement this idea. Last week, Sun announced its Brazil project. In an article about Brazil, Jon Byous writes:
The Brazil project started off as an HTTP stack designed with a very small footprint. It was originally intended to serve as a URL-based interface for smart cards, allowing them to be accessed from a web browser. Once the power of this simple Java technology-based code was understood, it evolved into a more general toolkit for putting URL-based interfaces on a wide range of applications and devices. More and more applications for the Brazil project continue to be explored and the limits for this technology have not yet been defined.
My own view is that HTTP (and, ideally, XML-over-HTTP) infrastructure ought to be a standard system component. As Derek's idea illustrates, there are uses for this technology that do not even involve networking. It can be really handy to leverage the standard modes of Web-style software development in order to create standalone applications. And when you do things this way, you never have to retrofit your application to make it Net-ready, because it always is. This idea -- that every application component is also, latently, a network service -- is enormously powerful. It's one of the things that makes Microsoft's .NET strategy so compelling. And, let's be candid, so scary. With Napster, we've only just scratched the surface. Admit it, you thought it was fun to download songs from other computers, but weren't you a bit shocked when someone reached across the Net to grab a song from your computer?
We're only beginning to comprehend the benefits -- and assess the risks -- of peer-style Web computing. I was pretty sure, two years ago, that in the long run the benefits would outweigh the risks. The jury's still out (no pun on Napster's case intended!), but I'm even more sure today that I was right. We don't yet know how to manage this technology but we'll have to learn. And we'll want to learn, because the network really is the computer, and peer services are at the heart of its operating system.
Jon Udell (http://udell.roninhouse.com/) was BYTE Magazine's executive editor for new media, the architect of the original www.byte.com, and author of BYTE's Web Project column. He's now an independent Web/Internet consultant, and is the author of Practical Internet Groupware, from O'Reilly and Associates. His recent BYTE.com columns are archived at http://www.byte.com/index/threads
This work is licensed under a Creative Commons License.