"We will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country." [Len Kleinrock, quoted by Ian Foster in GRIDtoday]
But computing isn't like electricity, Foster says. When you buy electricity from the grid, the best possible outcome is reliable and cheap electricity. In a computing grid, though, the whole can be greater than the sum of its parts. Virtualization and distributed management of services present different kinds of opportunities. The upside is more than, and more interesting than, CPU cycles on demand.
There are dozens of grid projects in the scientific/technical/research space. The "Linux of the grid" -- as Foster calls the Globus Toolkit -- is widely used. The Global Grid Forum is the grid world's IETF, and its March 2003 meeting in Tokyo brought 850 people together. How does all this scientific stuff intersect with business? Here are the common requirements shared between "eScience and eBusiness":
- to link dynamically acquired resources,
- into a virtual computing system,
- that delivers multi-faceted quality of service for demanding workloads.
Grid standards are actively evolving. The Globus Toolkit is one implementation of the Open Grid Services Architecture. "It's a service-oriented framework for grid technologies." Everything can be treated as a service with a well-defined interface, using Web services standards, focusing in particular on problems of distributed management. "A framework for the definition of composable, interoperable services." The Web services model is about describing, discovering, and invoking services. But it says nothing about how services are created -- it just assumes they exist. In the grid context, it's necessary to support "transient service instances," which are created and destroyed dynamically, and whose resources are freed when not needed. "It's quite hard to get right," Foster says. Much of OGSA is concerned with sorting out this problem in middleware.
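The transient-instance idea boils down to a factory pattern with lifetime management: a client asks a factory to create a service instance, gets back a handle, and the instance is destroyed -- explicitly or when its lease lapses -- so its resources can be reclaimed. Here's a minimal Python sketch of that pattern; the class and method names (`Factory`, `create`, `reap`) are my own illustration, not the actual OGSI or Globus Toolkit API.

```python
import time
import uuid

class ServiceInstance:
    """A transient service instance with a finite lifetime (illustrative only)."""
    def __init__(self, lifetime_seconds):
        self.id = str(uuid.uuid4())          # the client's handle to this instance
        self.expires_at = time.time() + lifetime_seconds

    def expired(self, now=None):
        return (now if now is not None else time.time()) >= self.expires_at

class Factory:
    """Creates transient instances and reaps them when their lease lapses."""
    def __init__(self):
        self.instances = {}

    def create(self, lifetime_seconds):
        inst = ServiceInstance(lifetime_seconds)
        self.instances[inst.id] = inst
        return inst.id

    def destroy(self, instance_id):
        # Explicit destruction: the client is done with the instance.
        self.instances.pop(instance_id, None)

    def reap(self, now=None):
        # Soft-state cleanup: free instances whose lifetime has expired,
        # even if the client vanished without calling destroy().
        for iid in [i for i, s in self.instances.items() if s.expired(now)]:
            self.destroy(iid)
```

The lease-plus-reaper combination is what makes transience manageable in a distributed system: no instance outlives its usefulness just because a remote client crashed before cleaning up.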
The Globus Toolkit implements core OGSI interfaces. (The Open Grid Services Infrastructure is the nuts-and-bolts spec that talks about how to actually do the things that OGSA describes at a high level.) It's an open source project, spearheaded by Foster's team at Argonne National Laboratory, and the software is currently running at about 25K downloads a month.
Here's an interesting point. Foster compares scientific to industrial deployments. The former are far-flung and loosely-coupled because, of course, that's what scientific collaboration always is. The latter occur, so far, mainly within enterprises. This, of course, is the pattern we see with Web services deployments: do it behind the firewall first, collaborate across firewalls later. It's interesting that we still regard trans-corporate collaboration as an optional afterthought. That won't be possible for much longer.
A guy from CenterBeam asks what's distinctive about "transient service instances." Good! That would have been my question too. Foster reiterates: it's about creating services as needed, and destroying them when not needed. More concrete examples would help, but it's intuitively easy to understand what this means. And it ties back to the meat of the message. Business, for the most part, is not compute-bound. It is, however, quite seriously collaboration-challenged. Grid technology isn't just a way to amass horsepower, it's a way to orchestrate the dynamic assembly of resources. If you think business does that well, watch the chaos that ensues as we waste the first fifteen minutes of every meeting futzing around with the conferencing equipment. For scientists, far-flung collaboration has never been optional. I'm sure they can teach us some tricks.
Former URL: http://weblog.infoworld.com/udell/2003/03/31.html#a652