Visualizing Googlespace

I've received a number of interesting responses to my current column on the Google API. Nelson Minar, former Popular Power CTO who's now with Google, wrote to point out that since HTML doctitles are of uncertain quality, the snippets (relevant text chunks) returned by the Google API might form an interesting search space. When I tried that, though, the results didn't seem to diverge enough. The name of the game, in this kind of surfing, is to chart a course through googlespace that diverges enough to turn up interesting new connections, but not so much as to end up off in the weeds.
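To make the idea concrete, here's a minimal sketch of what such a divergence criterion might look like. Everything here is hypothetical: the function names, the thresholds, and the crude word-overlap measure standing in for a real comparison of titles or snippets. The point is only the shape of the idea, a band of similarity wide enough to be interesting but narrow enough to stay out of the weeds.

```python
# Hypothetical "sweet spot" filter for steering a search walk.
# Candidates whose titles share a few words with the seed (related)
# but not most words (redundant) are the interesting middle band.

def word_set(title):
    """Reduce a title to a set of lowercased words, minus trailing punctuation."""
    return {w.lower().strip(".,:;!?") for w in title.split()}

def jaccard(a, b):
    """Jaccard similarity of two word sets (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def in_sweet_spot(seed_title, candidate_title, low=0.1, high=0.6):
    """Keep candidates that diverge from the seed, but not into the weeds."""
    score = jaccard(word_set(seed_title), word_set(candidate_title))
    return low <= score <= high
```

With thresholds like these, a near-duplicate title or a completely unrelated one is rejected, while a partial overlap survives to seed the next hop.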

So far, titles seem like the happy medium between URLs and snippets. But I'm not done yet. I have a hunch that things will get really juicy when the evaluation function is augmented by something outside of Google. This kind of cross-wiring is what most excited me when I did my original mindshare experiment. Once standardized APIs lower the activation threshold for these experiments, we'll see, I'm sure, some compelling emergent behaviors.
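The cross-wiring could be as simple as blending scores. Here's a hypothetical sketch, with stand-in names throughout: a search engine's own relevance score gets mixed with a normalized signal from some second, unrelated service. Which service, and how its signal is fetched, is left open; any API that yields a per-result number would do.

```python
# Hypothetical cross-wiring of two services' scores.
# Both inputs are assumed normalized to the 0..1 range.

def blended_score(relevance, external_signal, weight=0.5):
    """Weighted blend of a native relevance score and an external signal."""
    return (1 - weight) * relevance + weight * external_signal

def rank(results, external_lookup, weight=0.5):
    """results: list of (title, relevance) pairs.
    external_lookup: dict mapping title -> external 0..1 score.
    Returns results re-sorted by the blended score, best first."""
    return sorted(
        results,
        key=lambda r: blended_score(r[1], external_lookup.get(r[0], 0.0), weight),
        reverse=True,
    )
```

Even at an even 50/50 weighting, a strong external signal can promote a result the search engine itself ranked lower, which is exactly the kind of emergent reshuffling the cross-wiring is after.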

Another reader, Martin Spernau, pointed me to a graphical Google browser based on the TouchGraph framework. At the TouchGraph site, one of the demos of the framework browses a taxonomy of graph-based interfaces. Such visualizations never fail to fascinate me. But I confess that they don't hold my interest for very long, either. I suspect the reason is the same one at work in my Google surfing: the evaluation functions are too simple, and too isolated. Consider this interactive map of the Net industry. According to its creator, Valdis Krebs:

The data is gathered from various public sources and includes only data on business partnerships such as strategic alliances and joint ventures.

It's easy to imagine an evaluation function that's sensitive to the sizes of the business entities, the durations of the partnerships, the job-posting patterns in various cities, and a million other things. Visualizing these dimensions would be a worthy challenge for a graph-based interface. Semantic web visionaries suppose that all this metadata will be encoded in standard ways, and will then unlock these powerful evaluation functions. I think it's going to happen the other way around. Web APIs will surface whatever meager bits of metadata already exist. Applications will combine these APIs to create novel effects. Then metadata will be incrementally improved in order to enhance the effects.
