The last chapter of Malcolm Gladwell's Blink explores how we decode the language of facial expression. The syntax of that language boils down to a set of "action units" -- the Facial Action Coding System (FACS) first described in 1978 by Wallace Friesen and Paul Ekman. Some aspects of our facial language are under conscious control, Gladwell learns from Ekman, but others aren't:
If I were to ask you to smile, you would flex your zygomatic major. By contrast, if you were to smile spontaneously, in the presence of genuine emotion, you would not only flex your zygomatic but also tighten the orbicularis oculi, pars orbitalis, which is the muscle that encircles the eye. It is almost impossible to tighten the orbicularis oculi, pars orbitalis on demand, and it is equally difficult to stop it from tightening when we smile at something genuinely pleasurable. This kind of smile "does not obey the will," Duchenne wrote. "Its absence unmasks the false friend." [Blink]

Reading this facial language is an unevenly distributed skill. People who do it well seem to (and arguably can) read minds. People who do it badly are socially handicapped -- perhaps even in a clinical way. But everyone can learn to do it better. For example, I have a psychologist friend, Larry Welkowitz, who uses canned videos to help his patients with Asperger's syndrome learn to recognize microexpressions. And that just scratches the surface of what's possible. It's fascinating, and more than a little spooky, to think about what might happen once we can easily record, transmit, and even transform the protocol that our faces are speaking.
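The action-unit vocabulary makes that distinction surprisingly mechanical. Here's a minimal sketch in Python: the AU numbers come from the published FACS (AU12, the lip corner puller, maps to the zygomatic major; AU6, the cheek raiser, to the orbicularis oculi), but the classifier itself is hypothetical, and actually detecting the AUs from video is of course the hard part:

    # FACS reduces expressions to combinations of numbered action units.
    # AU12 (lip corner puller) = zygomatic major; AU6 (cheek raiser) =
    # orbicularis oculi. Detecting the AUs is elided here.

    def classify_smile(active_aus: set[int]) -> str:
        """Label a smile from the set of currently active action units."""
        if 12 not in active_aus:
            return "no smile"
        return "genuine (Duchenne)" if 6 in active_aus else "posed"

    print(classify_smile({12}))      # posed: will alone can fire AU12
    print(classify_smile({6, 12}))   # genuine: AU6 "does not obey the will"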
While I was reading Blink I also happened to listen to Jeremy Bailenson's talk at Accelerating Change. Bailenson is the director of Stanford's Virtual Human Interaction Lab. (His audio segment, the second of three on a panel about distance infrastructure, starts at about minute 14 of the show and runs about 10 minutes.) This guy is into the deeply strange idea of "transformed social interaction." In the "collaborative virtual environments" he experiments with, people apply "strategic filters" to their avatars. I might morph some of your face into mine to become more appealing to you, or create a "supergaze" such that I seem to look directly at you but also seem to look directly at everybody else. Most radically, I might switch perspectives and become "a passenger in your head" in order to, for example, experience your point of view in a negotiation.
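To make the filtering idea concrete, here's a toy version of that face-morphing filter. It assumes, purely for illustration, that an avatar's face is reduced to a vector of action-unit intensities; the default blend ratio is arbitrary, not a number from Bailenson's experiments:

    # A toy "strategic filter": nudge my avatar's face parameters toward
    # yours, so the face you see is subtly part your own. Assumes faces
    # are vectors of AU intensities in [0.0, 1.0].

    def blend_faces(mine: list[float], yours: list[float],
                    ratio: float = 0.2) -> list[float]:
        """Return my parameters, moved `ratio` of the way toward yours."""
        return [(1 - ratio) * m + ratio * y for m, y in zip(mine, yours)]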
Last month I mentioned Martin Geddes' observation that we know very little about how telecom and transport can substitute for, or complement, one another. His distinction between "travel for sense of presence" and "travel for information exchange" becomes even more interesting when you think about the information content of live presence. When we talk about "looking the other guy in the eye," and when we use words like "chemistry," we're saying that critical communication signals can't cross the videoconference link. But what if some of the most critical signals -- the "facial action units" -- could be easily packetized?
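For what it's worth, the payload would be tiny. Here's a speculative sketch of such a wire format -- every detail is invented, but the arithmetic is real -- carrying a timestamp plus (action unit, intensity) pairs:

    import struct
    import time

    # Hypothetical wire format: an 8-byte timestamp, a 2-byte AU count,
    # then 5 bytes per action unit (1-byte AU number, 4-byte intensity).

    def pack_au_frame(aus: dict[int, float]) -> bytes:
        """Serialize {action_unit: intensity} into a compact binary frame."""
        frame = struct.pack("!dH", time.time(), len(aus))
        for au, intensity in sorted(aus.items()):
            frame += struct.pack("!Bf", au, intensity)
        return frame

    # A Duchenne smile on the wire: AU6 and AU12 at moderate intensity.
    print(len(pack_au_frame({6: 0.7, 12: 0.8})), "bytes")  # 20 bytes

At thirty frames a second, that's a few hundred bytes per second -- noise next to the video stream it would accompany.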
Among close and trusting teams, this technology could make distance collaboration a lot more fruitful. As a way to form initial trust, though, it seems unlikely. How would I verify that the data you're sending me hasn't had a "strategic filter" applied to it? How would I make sure that the data I'm sending you doesn't tell you things that even I don't know?
I have no idea how we'll answer these questions. But we may find ourselves asking them sooner than I would have expected.
Former URL: http://weblog.infoworld.com/udell/2005/02/14.html#a1176