When I look at today's Web, I see precious little metadata. We mine the scraps we have -- email addresses, URLs, HTML metatags -- for all they're worth. We know intuitively that with more and richer metadata, we could build more and richer applications. People much smarter than me imagine what it would be like if machines could "reason" about the things described with metadata. I'd love to see those people get the chance to do their experiment. So would Tim Bray, who also thinks the Web is "terribly metadata-thin" and has issued a challenge to produce a killer app for RDF (Resource Description Framework).
But there's a chicken-and-egg problem. You can't do the RDF experiment until there are interesting amounts of metadata floating around. Yet if we insist that all metadata that could benefit from RDF must first be expressed in RDF, the experiment is a non-starter.
Why can't we first get a bunch of POM (plain old metadata) flowing through the system? Job feeds carrying metadata packets in namespaced payloads, for example? Once quantities of real data are in circulation, I'll bet the RDF gang could RDF-ify it, and then we can all learn -- finally -- what kinds of higher-order reasoning will become possible.
Meanwhile, the POM would be darned useful in its own right. The difference between an opaque unstructured job item and a transparent structured job item is night and day.
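Here's a minimal sketch of that difference, using Python's standard ElementTree. The `job:` namespace URI and the element names are invented for illustration; the point is only that a namespaced payload lets a consumer pull out fields directly, where an opaque description leaves nothing but full-text search.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS item carrying a job posting. The free-text
# <description> is opaque; the namespaced <job:posting> payload
# (the "job:" namespace here is invented) makes the same facts
# transparent to any consumer that understands the namespace.
STRUCTURED_ITEM = """\
<item xmlns:job="http://example.org/ns/job#">
  <title>Senior Developer, Acme Corp</title>
  <description>Acme Corp seeks a senior developer in Boston...</description>
  <job:posting>
    <job:title>Senior Developer</job:title>
    <job:company>Acme Corp</job:company>
    <job:location>Boston, MA</job:location>
    <job:salary currency="USD">90000</job:salary>
  </job:posting>
</item>
"""

NS = {"job": "http://example.org/ns/job#"}

def extract_job(item_xml):
    """Pull structured fields out of a namespaced job payload, if present."""
    item = ET.fromstring(item_xml)
    posting = item.find("job:posting", NS)
    if posting is None:
        # Opaque item: no structured payload, nothing to extract.
        return None
    return {
        "title": posting.findtext("job:title", namespaces=NS),
        "company": posting.findtext("job:company", namespaces=NS),
        "location": posting.findtext("job:location", namespaces=NS),
        "salary": posting.findtext("job:salary", namespaces=NS),
    }

job = extract_job(STRUCTURED_ITEM)
print(job)
```

An aggregator that saw such payloads could filter by location or sort by salary today, with no RDF in sight -- and the payload is exactly the sort of raw material an RDF-ification pass could later work from.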
Former URL: http://weblog.infoworld.com/udell/2003/08/06.html#a768