Upwards of ten inches of rain hit southwestern New Hampshire over the weekend, flooding the towns -- including Keene, where I live -- to a level not seen in 20 years. My house was OK, so on Sunday morning I grabbed a camera, toured around on my bicycle, shot some video, and made a screencast to document what happened locally.
Compared to the New Orleans flood -- not to mention the awful devastation wreaked on Pakistan over the weekend -- this event barely moves the needle. But it did give me a chance to experiment with map-enhanced video, and to imagine what the future of that might be like.
I can envision shooting documentary video with a networked and GPS-equipped camera. The video feed is location-coded as well as time-coded. Computer-based players use the location data to render maps -- and to animate routes -- that complement the display of the video. For straight video playback, the maps are woven into the stream using a default effect that's tunable in post-production.
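To make the idea concrete, here is a minimal sketch of what a player might do with a location-coded stream: given a time-stamped GPS track recorded alongside the video, interpolate the camera's position at any playback time so a map marker can track the footage. Everything here is illustrative -- the track format, the function name, and the Keene-area coordinates are assumptions, not any real camera's output.

```python
from bisect import bisect_right

# Hypothetical track format: a time-sorted list of (seconds, lat, lon)
# GPS fixes recorded alongside the video stream.

def position_at(track, t):
    """Linearly interpolate (lat, lon) on a time-sorted GPS track
    for video playback time t, clamping at the endpoints."""
    if t <= track[0][0]:
        return track[0][1], track[0][2]
    if t >= track[-1][0]:
        return track[-1][1], track[-1][2]
    times = [fix[0] for fix in track]
    i = bisect_right(times, t)          # first fix after time t
    t0, lat0, lon0 = track[i - 1]
    t1, lat1, lon1 = track[i]
    f = (t - t0) / (t1 - t0)            # fraction of the way between fixes
    return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)

# A short ride past three waypoints (coordinates are made up for Keene, NH).
track = [
    (0.0,   42.9336, -72.2781),
    (60.0,  42.9340, -72.2790),
    (120.0, 42.9350, -72.2800),
]
print(position_at(track, 30.0))  # position halfway between the first two fixes
```

A real player would feed these interpolated positions to a map renderer frame by frame, which is all the "default effect" would need to animate a route under the video.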
This would be great in any situation, but would be especially valuable when trying to stitch together the big picture of a disaster scene -- both for near-realtime coordination, and for after-the-fact analysis.
Today, producing such a map-enhanced video is a painful process that requires a bewildering array of tools and techniques. Nobody could make one of these things easily or routinely. But I'll bet that's going to change.
Former URL: http://weblog.infoworld.com/udell/2005/10/10.html#a1318