Utility Computing
Plug and Pay
Source: CIO, USA
Utility computing: Who would have thought that a technology with such a pedestrian label would become a top IT story?
During the past two years, most of the leading IT services companies have announced initiatives with that unprepossessing name. All the products and services sold under that banner appeal to a common vision: computing tasks buying what they need and only what they need, automatically, from a huge pool of interoperable resources (potentially as large as the whole Internet). Each task or transaction would have an account and a budget and would run up payables; every resource would record and collect receivables. Computing power would be as easy to access as water or electricity. While the products and services currently being introduced under utility computing do not go this entire distance, they move a long way in that direction.
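To make that accounting idea concrete, here is a minimal sketch in Python of per-task metering, in which each task carries a budget and runs up payables while each resource records receivables. The class and field names (Task, Resource, budget, rate_per_unit) are hypothetical illustrations, not any vendor's actual interface.

# Illustrative sketch of per-task metered resource accounting.
# All names here (Task, Resource, budget, rate_per_unit) are hypothetical.

class Resource:
    def __init__(self, name, rate_per_unit):
        self.name = name
        self.rate_per_unit = rate_per_unit   # price per unit consumed
        self.receivables = 0.0               # what the resource has earned

    def charge(self, units):
        cost = units * self.rate_per_unit
        self.receivables += cost
        return cost

class Task:
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget                 # spending limit for this task
        self.payables = 0.0                  # what the task has run up so far

    def consume(self, resource, units):
        cost = units * resource.rate_per_unit
        if self.payables + cost > self.budget:
            raise RuntimeError(f"{self.name}: budget exceeded")
        self.payables += resource.charge(units)

# A transaction buys only the capacity it needs, when it needs it.
storage = Resource("storage-gigabyte-hours", rate_per_unit=0.02)
txn = Task("online-transaction", budget=1.00)
txn.consume(storage, units=3)
print(txn.payables, storage.receivables)   # what the task owes, what the resource earned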
Consider the situation of American Express Executive Vice President and CIO Glen Salow. Like many companies, AmEx typically sees traffic surge on its enterprise network when it introduces a new product. Some of that traffic supports marketing efforts, some technical support and some the service itself, such as executing an online transaction. It is critical that adequate resources be in place to support that service, particularly during the early days of an introduction. Yet calculating ahead of time how large that demand surge will be is almost impossible.
To date, all a CIO could do was overprovision, but as Salow points out, that imposed a double penalty: paying more than was technically necessary and waiting for the new equipment to be installed and tested. "I don't want to tell marketing that I need six months to have the infrastructure in place," he says. So Salow took a different approach and structured a deal with IBM Global Services to buy storage and processing for delivery over a network, per increment of traffic demand. That is not utility computing in the purest sense, since resource procurement is not calculated automatically or per transaction. But the term still applies because of the much tighter fit it allows between the provisioning and demand curves.
The advantages of utility computing are self-evident: Resource use becomes more efficient, and because resource changes are automatic or at least highly automated, it also conserves management time. By contrast, the current system - in which IT hooks up and exhausts large blocks of resources in a general free-for-all, at which point another large block is trucked in and wired in place - looks antediluvian. On paper, at least, the case for the transition to utility computing seems compelling.
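To see why the tighter fit matters, consider some rough arithmetic. The short sketch below compares provisioning for the worst-case peak with paying per increment of actual demand; the demand figures and unit price are invented for illustration and do not come from American Express or IBM.

# Illustrative comparison of peak overprovisioning vs. per-increment buying.
# Demand figures and unit price are invented for illustration only.

demand = [20, 35, 80, 140, 90, 60]       # capacity units needed, month by month
unit_price = 10.0                         # cost per capacity unit per month

# Overprovisioning: buy for the worst-case peak up front and carry it every month.
overprovisioned_cost = max(demand) * unit_price * len(demand)

# Utility-style buying: pay only for what each month actually consumed.
metered_cost = sum(units * unit_price for units in demand)

print(overprovisioned_cost, metered_cost)   # 8400.0 vs. 4250.0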
A.K.A. Outsourcing?
Unfortunately, it is very hard to get from here to there. Companies assemble their current systems out of silos of resources, which they then fine-tune to local operating requirements. Some of those resources sit inside the firewall and some outside; some run under Unix and some under Windows; and some are PCs and some are Macs. "Suppose an application is qualified on Solaris 8," says Peter Jeffcock, group marketing manager for Sun Microsystems. "Finding a processor running Solaris 7 will not be helpful." He compares imposing utility computing on the average network to trying to build an electrical power market if every state generated a different brand of electricity.