When I had dinner recently with InfoWorld Contributing Editor Phil Windley, he put his finger on something I've been trying to nail down for years. Like me, Phil works mainly in a home office, is married to a nongeek, and is often called on to deliver spousal tech support. The title of this column, The Tacit Dimension of Tech Support, refers to The Tacit Dimension, a 1966 book by the scientist/philosopher Michael Polanyi. One of his touchstone phrases was: "We know more than we can tell."
From his wife's perspective, Phil said, it looks like he knows how to do everything. But his own, subjective experience is very different. He doesn't really have detailed procedural knowledge of most tasks. He's just very good at discovering that knowledge.
"What I'm actually doing is figuring things out on the fly," Phil said. That's what all IT adepts do, all the time. We do it in such a rapid, fluid, and automatic way that we don't seem to be constantly learning or relearning. [Full story at InfoWorld.com]
I wish I'd read On Intelligence before, rather than after, I wrote that column. Written by PalmPilot and Treo architect Jeff Hawkins (with the help of science writer Sandra Blakeslee), this 2004 book chronicles Hawkins' shadow career as a neuroscientist searching for a unified theory of the neocortex. The model he proposes was inspired by the work of Vernon Mountcastle, whose 1978 paper An Organizing Principle for Cerebral Function provided Hawkins with this central insight:
Mountcastle points out that the neocortex is remarkably uniform in appearance and structure. The regions of cortex that handle auditory input look like the regions that handle touch, which look like the regions that control muscles, which look like Broca's language area, which look like practically every other region of the cortex. Mountcastle suggests that since these regions all look the same, perhaps they are actually performing the same basic operation! He proposes that the cortex uses the same computational tool to accomplish everything it does.
Hawkins coins the phrase "memory-prediction framework" to describe this common algorithm. Information, he argues, is always flowing both up and down the six layers of the neocortical stack. On the upstroke the neocortex processes raw sensory data and remembers patterns. On the downstroke it recalls those patterns, uses them to make predictions about the world, and compares those predictions with reality.
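To make that loop concrete, here's a toy sketch of my own devising (not Hawkins' actual algorithm, and vastly simpler than a real cortical model): on the upstroke it remembers which pattern tends to follow which, and on the downstroke it predicts the next pattern so the prediction can be checked against reality.

```python
# Toy sketch of the memory-prediction loop (my illustration, not
# Hawkins' algorithm): learn pattern transitions on the "upstroke",
# predict the next pattern on the "downstroke", compare with reality.

class SequenceMemory:
    def __init__(self):
        self.transitions = {}   # pattern -> the pattern that followed it
        self.prev = None

    def observe(self, pattern):
        """Upstroke: remember the transition from the previous pattern."""
        if self.prev is not None:
            self.transitions[self.prev] = pattern
        self.prev = pattern

    def predict(self):
        """Downstroke: recall what usually follows the current pattern."""
        return self.transitions.get(self.prev)

mem = SequenceMemory()
for note in ["do", "re", "mi", "do", "re"]:
    guess = mem.predict()   # prediction made before seeing the input
    mem.observe(note)       # then reality updates the memory
print(mem.predict())        # after "do re mi do re", predicts "mi"
```

A real neocortex, in Hawkins' telling, does this at every level of a deep hierarchy and in massively parallel fashion; the sketch only captures the rhythm of remember, predict, compare.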
Neocortical memory is fundamentally different from computer memory, he says, in four ways:
The neocortex stores sequences of patterns. Another way to say this, although Hawkins doesn't put it in these terms, is that we understand the world in terms of stories that we remember and retell.
The neocortex recalls patterns auto-associatively. Memory isn't merely associative, linking patterns to patterns. It's also fractal, such that fragments can recall wholes.
The neocortex stores patterns in invariant form. When early AI researchers talked about "frames" and "scripts" they envisioned a centralized library of knowledge templates. In Hawkins' view, the codification and use of invariant patterns occurs at all levels of the neocortical stack in a decentralized and massively parallel way.
The neocortex stores patterns in a hierarchy. Hawkins writes: "Each region of the cortex learns sequences [of patterns], develops what I call 'names' for the sequences it knows, and passes those names to the next regions higher in the cortical hierarchy."
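The four properties fit together, and a second toy sketch (again mine, not the book's) shows how: a "region" stores sequences in an invariant form, recalls a whole sequence auto-associatively from a fragment, and passes the sequence's "name" up to the next level of the hierarchy.

```python
# Toy illustration (mine, not from the book) of the four properties:
# a region stores sequences in invariant form, recalls them
# auto-associatively from fragments, and passes a "name" upward.

class Region:
    def __init__(self):
        self.sequences = {}     # name -> stored sequence of patterns

    def learn(self, name, sequence):
        # Invariance: store a normalized form (here, just lowercasing),
        # so "Mi" and "MI" map to the same underlying pattern.
        self.sequences[name] = [p.lower() for p in sequence]

    def recall(self, fragment):
        """Auto-association: a partial sequence recalls the whole,
        and the region reports the sequence's 'name' upward."""
        frag = [p.lower() for p in fragment]
        for name, seq in self.sequences.items():
            for i in range(len(seq) - len(frag) + 1):
                if seq[i:i + len(frag)] == frag:
                    return name, seq   # the name goes up the hierarchy
        return None, None

region = Region()
region.learn("melody", ["Do", "Re", "Mi", "Fa", "Sol"])
name, whole = region.recall(["mi", "fa"])   # fragment recalls the whole
print(name, whole)
```

The point of the exercise is the interface: what a region hands upward is not the raw sequence but its name, so each higher level works with ever more abstract tokens.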
My impression is that Hawkins' grand unified theory of intelligence has ruffled some feathers in the academic world, and it's easy to see why. What scientist wouldn't be jealous of a guy who can fund his own research institute and companion start-up firm? But as an armchair observer I can't wait to see how this turns out. The company, Numenta, says it's developing "a new type of computer memory system modeled after the human neocortex."
I'd like to think that such systems could augment human intelligence by helping us to codify and share what we tacitly know. But the core algorithms aren't restricted to human sensory inputs and motor outputs. Hawkins envisions a "weather brain" that would "think about and understand global weather systems as you and I think about and understand objects and people." Another example: a "power grid brain" that would experience and predict its domain in the same ways we experience and predict ours.
I have no idea if any of this will pan out. But it's great to see that Jeff Hawkins has the motivation, and the resources, to do the experiment.
Former URL: http://weblog.infoworld.com/udell/2005/06/22.html#a1255