After the PDC last week I went up to Palo Alto to give a talk at the Accelerating Change conference. This year's theme was Artificial Intelligence and Intelligence Amplification. The first speaker was Vernor Vinge [1], a mathematician, computer scientist, and science fiction author. His 1993 essay, The Coming Technological Singularity, is widely cited as the first use of the term singularity -- the notion that superhuman intelligence will arise from our computer networks, or from our network-assisted shared consciousnesses, or from some Moore's-law-driven combination of these factors.
Vinge distinguishes between two scenarios he calls "soft take-off" and "hard take-off." In a soft take-off, the singularity occurs gradually enough so that humanity has a chance to adjust to the changes. This, everyone agreed, is what we want. A hard take-off, by contrast, is a near-instantaneous and cataclysmic event, like Vonnegut's ice-nine or the awakening of SkyNet in Terminator. This, everyone agreed, is what we probably don't want -- not that we'll have a choice, mind you. In either case, life on the other side will make no sense to beings left behind. If you were to bring Mark Twain forward from the mid-1800s to the present day, Vinge said, you could explain the current situation to him in an afternoon, and he'd very much enjoy what he would learn. But nothing at all could be made intelligible to a goldfish. That's us: just goldfish to the post-human super-intelligences.
There were tons of slides at Accelerating Change that showed exponential growth curves of one sort or another. So many that, early on, people began joking about the exponential growth of exponential growth-curve slides. The lion's share of these slides was shown by Ray Kurzweil, whose new book is The Singularity is Near.
Kurzweil has written on this topic before. In the waning days of the last millennium I reflected on his earlier book about the singularity, The Age of Spiritual Machines. My conclusion then was the same as it is now: maybe he's right, and maybe we'll leave our wetware behind in a couple of decades. Meanwhile, though, I figure we ought to work with what we've got: ordinary human intelligence, social systems, and networks.
In that vein, my talk was entitled Annotating the Planet. It elaborated on the idea, introduced in my Google Maps screencast, that we are turning the physical world into a Wiki, and that real landscapes are becoming virtual surfaces for collaborative annotation.
Before my talk, Scott Rafer -- who has recently jumped from Feedster to Wireless Ink -- opined that there is no strong AI, that there may not be in our lifetimes, but that many of the benefits we associate with AI will nonetheless accrue as collaborative tagging and filtering become ever more pervasive and efficient. A few vocal attendees weren't happy to hear that, just as they weren't happy in an earlier session to hear Google's director of search quality, Peter Norvig, reiterate his view that Google's "AI in the middle" isn't intelligence per se, but rather a clever mediation between intelligent authors and intelligent readers.
True machine intelligence was what the advocates of strong AI wanted to hear about, not the amplification of human intelligence by networked computing. The problem, of course, is that we've always lacked the theoretical foundation on which to build machine intelligence. Ray Kurzweil thinks that doesn't matter, because in a decade or two we'll be able to scan brain activity with sufficient fidelity to port it by sheer brute force, without explicitly modeling the algorithms.
Until the brain scanners arrive, though, we might as well roll up our sleeves and try to do some of that modeling. So I was particularly interested to hear from Bruno Olshausen, director of the Redwood Center for Theoretical Neuroscience, which is what Jeff Hawkins' Redwood Neuroscience Institute (RNI) became when Hawkins decided to put all his eggs in the Numenta basket.
Back in June I mentioned Hawkins' work, and reviewed his book On Intelligence. Olshausen comes out of that same milieu -- he was at Hawkins' RNI from the start -- and he elaborated on the theory described in the book. His example was a six-layered column of neocortex connected to a 14x14-pixel patch of the retina. There are, Olshausen said, about 100,000 neurons in that chunk of neocortex. That sounds like a lot of circuitry for a few pixels, and it is, but we actually have no idea how much circuitry it is. That's because the type of neuron found in this cortical structure might just be a moderately complex adder, or it might be a more powerful processor than a Pentium. It seems amazing that we don't know where on that continuum such neurons lie, Olshausen said, but it's true: we don't.
We are, however, starting to sort out the higher-level architecture of these cortical columns. And it's fascinating. At each layer, signals propagate up the stack, but there's also a return path for feedback. Focusing on the structure that's connected directly to the 14x14 retinal patch, Olshausen pointed out that the amount of data fed to that structure by the retina, and passed up the column to the next layer, is dwarfed by the amount of feedback coming down from that next layer. In other words, your primary visual processor is receiving the vast majority of its input from the brain, not from the world. This is the kind of evidence from which, Hawkins and Olshausen think, we can begin to infer how the brain builds the model of the world that is the framework of intelligence.
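To make that asymmetry concrete, here's a toy predictive loop in Python. Everything about it -- the layer sizes, the tied weights, the update rule -- is my own illustrative assumption, a sketch of the feedforward/feedback idea rather than Hawkins' or Olshausen's actual model. A small retinal patch sends a signal up, a much larger layer above sends a prediction down, and only the mismatch propagates upward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 14x14 "retinal patch" feeds a layer that exchanges
# signals with the layer above it. The higher layer has far more units
# than the patch has pixels, so the top-down (feedback) channel dwarfs
# the bottom-up (feedforward) one. These sizes and the predictive
# update rule are illustrative assumptions only.
N_INPUT = 14 * 14        # bottom-up: 196 values from the retina
N_HIGHER = 2000          # top-down: many more values from the layer above

# Tied weights (feedback is the transpose of feedforward) -- a common
# simplification in predictive-coding-style toy models.
W = rng.normal(scale=0.01, size=(N_HIGHER, N_INPUT))

patch = rng.random(N_INPUT)     # current retinal input
higher = np.zeros(N_HIGHER)     # state of the layer above

for _ in range(20):
    prediction = W.T @ higher   # feedback: the brain's guess about the patch
    error = patch - prediction  # only the surprise propagates up
    higher += W @ error         # feedforward: the higher layer revises its model

# The residual mismatch shrinks as the higher layer's model improves.
print(float(np.linalg.norm(patch - W.T @ higher)))
```

Even in this cartoon, the top-down channel outnumbers the bottom-up one by roughly ten to one, which is the flavor of what Olshausen described: the primary visual area spends most of its bandwidth checking predictions against a trickle of sensory data.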
Given all this heady stuff, I was afraid that my talk on collaborative mapping would make me look like a goldfish in a roomful of post-humans. But the talk seemed to be fairly well received.
I had to leave early to get to another event, so I missed the latter part of Saturday and all of Sunday, but not before I had the rare privilege of meeting Doug Engelbart. I'd listened to his talk at the 2004 version of this conference, by way of ITConversations.com, and was deeply inspired by it. I knew he'd invented the mouse, and had helped bring GUIs and hypertext into the world, but I didn't fully appreciate the vision behind all that: networked collaboration as our first, last, and perhaps only line of defense against the perils that threaten our survival. While we're waiting around for the singularity, learning how to collaborate at planetary scale -- as Doug Engelbart saw long ago, and as I believe we are now starting to get the hang of -- seems like a really good idea.
You can find more reactions to AC2005 by way of its Technorati tag. Also see the blogs of Scott Lemon and Evelyn Rodriguez, both of whom took extraordinarily detailed notes on many sessions.
[1] Pronounced "vin-jee," I learned.
Former URL: http://weblog.infoworld.com/udell/2005/09/21.html#a1306