The xpath search link on the left navbar has, for the past few months, led to a browser-based implementation which was cool, in that it worked locally in either MSIE or Mozilla, but cumbersome since it required you to first download an ever-growing pile of XML content. So, as part of my next couple of O'Reilly Network columns, I'm experimenting with a few different lightweight server-based solutions.
The first of these, currently wired to the xpath search link, is a minimal solution using Python's BaseHTTPServer and libxslt. This implementation runs against a file containing the XHTML entries I've accumulated over the past five or so months, currently about 0.8MB, and transforms that file with the same stylesheet used in the client-side solution. This seems to work snappily for queries that test for equality of attributes. Queries that use only contains() clauses take noticeably longer. Ordinarily I wouldn't find that surprising. But compare with the client-side solution: there, even contains() queries are instantaneous, whether in MSIE (the MSXML processor) or Mozilla (the Transformiix processor). I'd have thought libxslt would give similar results on a similar quantity of data, but evidently not.
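The shape of the thing is simple: pull the query off the URL, run it against the entries file, return the matches. Here's a minimal sketch of that shape -- using the stdlib ElementTree (and its limited XPath subset) as a stand-in for the libxslt bindings, with the handler and entry names being illustrative assumptions, not the actual code:

```python
# Minimal sketch of an XPath search server in the spirit described
# above. ElementTree's limited XPath support stands in for libxslt;
# the class names and sample entries are illustrative only.
import urllib.parse
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer

def search_entries(xml_text, xpath):
    """Return the serialized elements matching an XPath expression."""
    root = ET.fromstring(xml_text)
    return [ET.tostring(el, encoding="unicode") for el in root.findall(xpath)]

class XPathSearchHandler(BaseHTTPRequestHandler):
    # In the real setup this would be the accumulated XHTML entries
    # file; an in-memory sample stands in for it here.
    entries = "<entries><div class='entry'>hello</div></entries>"

    def do_GET(self):
        params = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        xpath = params.get("xpath", ["*"])[0]
        body = "\n".join(search_entries(self.entries, xpath)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/xml; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

# To serve:
#   HTTPServer(("localhost", 8000), XPathSearchHandler).serve_forever()
```

An attribute-equality query like `.//div[@class='entry']` is the fast case noted above; this sketch doesn't reproduce the contains() slowdown, which appears specific to libxslt.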
The next version, which I'm still refining, uses Berkeley DB XML. So far, it looks like it delivers great performance on all queries -- but only if I split the entries out into individual records.
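The splitting step itself is straightforward. Here's a sketch of the idea -- again with ElementTree standing in for the Berkeley DB XML container API, and with the record-naming scheme and the class='entry' selector being assumptions for illustration:

```python
# Sketch of splitting the monolithic entries file into individual
# records, the granularity at which Berkeley DB XML performs well.
# ElementTree stands in for the actual container-loading API; the
# entry selector and naming scheme are illustrative assumptions.
import xml.etree.ElementTree as ET

def split_entries(xml_text, entry_xpath=".//div[@class='entry']"):
    """Yield (record_name, serialized_entry) pairs, one per entry."""
    root = ET.fromstring(xml_text)
    for i, entry in enumerate(root.findall(entry_xpath)):
        yield ("entry-%04d" % i, ET.tostring(entry, encoding="unicode"))

# Each pair would then be stored as one document in the XML
# container, so queries touch only the records they match.
```

The payoff is that a query no longer has to walk one 0.8MB tree; the database can index and evaluate against small per-entry documents instead.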
The point of this exercise is to continue to explore and reveal the structural possibilities inherent in simple XHTML/CSS content. It also makes a nice interactive XPath demo. Note that, currently, an invalid query just produces a general error; I'll try to improve that with more specific feedback.
Former URL: http://weblog.infoworld.com/udell/2004/01/10.html#a883