During my session at BloggerCon I referred to Apple's famous Knowledge Navigator concept video. I first saw that video in 1988. Today I tracked down a copy and watched it again. It stands the test of time rather well! Certain elements of that vision are now routine -- for example, Google found me the video and WiFi delivered it to a PowerBook which, when equipped with its iSight camera, bears a family resemblance to the Dynabook-like talking computer featured in the video. Other aspects are still far out of reach, especially the conversational interface based on deep understanding of natural language.
Clearly natural language is taking a lot longer than the pioneers expected. Back in 1953 researchers thought it was going to be a five-year project. No one in 2003 is so optimistic. In other respects, though, important elements of the Knowledge Navigator vision seem within reach. At one point, the fictional Professor Bradford tries to recall a paper he read five years before, in which, he misremembers, a Dr. Flemson disagreed with the direction of a colleague's research on deforestation. "John Fleming, of Uppsala University," the computer replied. "He published in the Journal of Earth Science in July 2006." Google's not quite there yet, of course, but its helpful modification of failed queries is a step in the right direction.
The next bit is more fanciful. "Fleming challenged Jill's prediction about the amount of carbon dioxide released due to deforestation," says Prof. Bradford. "I'd like to recheck his figures." "Here's the rate of deforestation he predicted," says the computer, displaying a chart. "Mm hmm," says Bradford, "and what really happened?" The computer overlays the actual data, showing significant variance from Fleming's prediction. It's a stretch, but we can at least imagine how to pull something like this off today. Fleming's data would be in XML; the software would infer a schema from it; a query to a Web service would yield the actual reported data; transformation would correlate the two data sets for display on a common surface.
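To make that scenario a little more concrete, here's a minimal sketch of the correlation step. Everything in it is invented for illustration: the XML vocabulary, the figures, and the `fetch_actuals` stub standing in for a real Web service call. The point is just that once both series are keyed by year, overlaying predicted and actual values is a simple join.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML for Fleming's predicted deforestation rates.
# Element names and figures are invented for illustration.
PREDICTED_XML = """
<predictions source="Fleming, Journal of Earth Science, July 2006">
  <rate year="2007" value="13.1"/>
  <rate year="2008" value="13.4"/>
  <rate year="2009" value="13.8"/>
</predictions>
"""

def parse_predictions(xml_text):
    """Derive a simple year -> value mapping from the XML."""
    root = ET.fromstring(xml_text)
    return {int(r.get("year")): float(r.get("value")) for r in root.iter("rate")}

def fetch_actuals():
    """Stand-in for a Web service query returning the reported rates."""
    return {2007: 12.7, 2008: 14.2, 2009: 15.0}

def correlate(predicted, actual):
    """Join the two series on year for display on a common surface."""
    years = sorted(set(predicted) & set(actual))
    return [(y, predicted[y], actual[y], actual[y] - predicted[y])
            for y in years]

for year, pred, act, diff in correlate(parse_predictions(PREDICTED_XML),
                                       fetch_actuals()):
    print(f"{year}: predicted {pred}, actual {act}, variance {diff:+.1f}")
```

The joined rows are exactly what a charting surface would need to overlay the two curves and show the variance.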
Presence, attention management, and multimodal communication are woven into the piece in ways that we can clearly imagine if not yet achieve. "Contact Jill," says Prof. Bradford at one point. Moments later the computer announces that Jill is available, and brings her onscreen. While they collaboratively create some data visualizations, other calls are held in the background and then announced when the call ends. I feel as if we ought to be further down this road than we are. A universal canvas on which we can blend data from different sources is going to require clever data preparation and serious transformation magic. The obstacles that keep data and voice/video networks apart seem more political and economic than technical.
Apple's vision, in any case, was and is spot on. I wonder how much closer to reality it will be in another fifteen years.
Former URL: http://weblog.infoworld.com/udell/2003/10/23.html#a831