
Is the server industry doomed?

February 27, 2006

Companies continue to buy a lot of server computers to run their applications and web sites and perform other routine computing chores. More than 7 million servers were purchased last year, according to estimates released last week by analysts, generating about $50 billion in revenues for server manufacturers like HP, IBM, Dell, Sun, and Fujitsu. The server market is a big one, and it looks fairly healthy at the moment. But that may be an illusion. There are growing indications that the server business is a dead man walking.

The most immediate threat comes from the twin trends of consolidation and virtualization. To save money, companies are merging their data centers and standardizing the applications they run throughout their businesses. The chemicals giant Bayer, for instance, has been consolidating its IT assets worldwide. In the U.S. alone, it slashed its number of data centers from 42 to 2 in 2002, in the process cutting its server count by more than half, from 1,335 to 615. With more companies embracing server virtualization - the use of software to turn one physical server into many virtual servers - the opportunities for further consolidation will only expand. Last year, for example, Sumitomo Mitsui Bank used virtualization to replace 149 traditional servers with 14 blade servers running VMware virtualization software. Timothy Morgan, editor of IT Jungle, believes that, as companies accelerate the consolidation and virtualization of their computing infrastructures, "the installed base of 20 million to 25 million servers in the world could condense radically, perhaps to as low as 10 million to 15 million machines."
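For a rough sense of how those consolidation ratios compound, here is a back-of-the-envelope sketch in Python. The figures come from the examples above; the assumption that the worldwide installed base would merely match Bayer's ratio is mine, for illustration only:

```python
# Rough consolidation arithmetic using the figures cited above.
# The projection for the installed base is an illustrative assumption.

def consolidation_ratio(before, after):
    """Fraction of physical servers eliminated."""
    return 1 - after / before

bayer = consolidation_ratio(1335, 615)         # Bayer's U.S. consolidation
sumitomo = consolidation_ratio(149, 14)        # Sumitomo Mitsui's blade/VMware project

installed_base = 22.5e6                        # midpoint of the 20-25 million estimate
projected_base = installed_base * (1 - bayer)  # if the world merely matched Bayer's ratio

print(f"Bayer cut its server count by {bayer:.0%}")
print(f"Sumitomo Mitsui cut its server count by {sumitomo:.0%}")
print(f"At Bayer's ratio, the installed base shrinks to ~{projected_base/1e6:.0f} million machines")
```

Run that and the projection lands near the bottom of Morgan's 10 million to 15 million range.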

But even that may, in the long run, be too rosy a scenario.

What if the future brings not simply a rationalized version of traditional computing, with fewer servers used more efficiently, but a fundamentally different version of computing, with little need for brand-name servers at all? In this scenario, the core units of business computing would not be small, inflexible servers but rather large, flexible computing clusters or grids. These clusters in turn would be built not from traditional branded servers but from cheap, commodity subcomponents - chips, boards, drives, power supplies, and so on - that the grid operators would assemble into tightly networked physical or virtual machines. Many of the functions and features built into today's branded servers would be taken over by the software running the cluster.
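To make that division of labor concrete, here is a toy sketch of cluster software absorbing a duty - capacity tracking and job placement - that a branded server would otherwise sell as a built-in feature. The node sizes and placement policy are illustrative assumptions, not any operator's actual design:

```python
# Toy cluster scheduler: the intelligence lives in software, not in any one box.
# Node specs and the placement policy are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cores: int
    free_cores: int = field(init=False)

    def __post_init__(self):
        self.free_cores = self.cores

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def place(self, job_name, cores_needed):
        """Put a job on whichever commodity node has room; callers never pick hardware."""
        for node in sorted(self.nodes, key=lambda n: n.free_cores, reverse=True):
            if node.free_cores >= cores_needed:
                node.free_cores -= cores_needed
                return f"{job_name} -> {node.name}"
        return f"{job_name} -> queued (no capacity)"

cluster = Cluster(Node(f"node{i:03d}", cores=4) for i in range(8))
print(cluster.place("index-build", 3))
print(cluster.place("log-crunch", 4))
```

The point of the sketch is simply that the individual box is anonymous; the cluster software decides where work runs.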

If you want to see a harbinger of this model of computing, just look at Google's infrastructure. Google doesn't buy any servers to run its search engine. It buys cheap, commodity components and assembles them itself into vast clusters of computers that it describes as "resembl[ing] mid-range desktop PCs." The computers run in parallel, using a customized version of the open-source Linux operating system. Google doesn't have to worry about "server reliability" - one of the main selling points used by server manufacturers - because reliability is ensured by its software, not its hardware. If, say, a processor fails, others pick up the slack until the faulty part is swapped out. What concerns Google is the big cluster and the little subcomponent; it's moved well beyond the idea of the branded server being the heart of business computing.
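Here is a crude sketch of that kind of software-enforced reliability, reduced to its essence. The replica names, failure rate, and retry policy are assumptions for illustration, not a description of Google's actual systems:

```python
# Reliability in software: if one cheap machine fails, retry the work elsewhere.
# Replica names and the simulated failure rate are purely illustrative.
import random

REPLICAS = ["shard-a", "shard-b", "shard-c"]   # identical copies of the same index shard

def query_replica(replica, query):
    if random.random() < 0.3:                  # simulate a flaky commodity box
        raise ConnectionError(f"{replica} is down")
    return f"results for '{query}' from {replica}"

def search(query):
    """Try replicas in random order; the caller never notices a single failure."""
    for replica in random.sample(REPLICAS, k=len(REPLICAS)):
        try:
            return query_replica(replica, query)
        except ConnectionError:
            continue                            # another machine picks up the slack
    raise RuntimeError("all replicas unavailable")

print(search("commodity servers"))
```

When fault tolerance is handled this way, the premium a vendor charges for hardware reliability buys very little.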

Obviously, Google is a unique company with idiosyncratic computing requirements. But its efficient, flexible, networked model of computing looks more and more like the model of the future. As Google engineers Luiz Andre Barroso, Jeffrey Dean and Urs Hoelzle write in their IEEE Micro article "Web Search for a Planet," "many applications share the essential traits that allow for a PC-based cluster architecture." IT expert Paul Strassmann goes further in arguing that Google's infrastructure serves as a template for the future. "Network-centric systems," he says, "cannot be built on [traditional] workgroup-centric architecture." If large, expert-run utility grids were to supplant subscale corporate data centers as the engines of computing, the need to buy branded servers would evaporate. The highly skilled engineers who build and operate the grids would, like Google's engineers, simply buy cheap subcomponents and use sophisticated software to tie them all together into large-scale computing power plants. Or, if they wanted to continue to purchase self-contained computer boxes, they'd simply contract with Taiwanese or Chinese suppliers to assemble them to their specifications, cutting out the middlemen and their markups.

Ultimately, we may come to find that the branded server was simply a transitional technology, a stop-gap machine required as the network, or utility, model of computing matured. I recently spoke to the chief executive of a big utility hosting company who expressed amazement that its largest server supplier seemed to be "in denial" about the profound shifts under way in business computing. Maybe it is denial. Or maybe it's just fear.

UPDATE: See further discussion here.

Posted by nick at February 27, 2006 12:45 PM

Comments

Sun Micro has made a big bet on building servers optimized for high-density environments. If small enterprises outsource their hardware, more and more of the server hardware market will look like the big iron installations of Google.

I think Sun has the right idea, but I question the logic of building the strategy around a new CPU (Niagara).

I agree server hardware will become increasingly commoditized, and the Taiwanese/Chinese white-box model will prevail.

It is also increasingly likely that the infrastructure will be located and maintained offshore using lower cost labor.

www.nyquistcapital.com/2005/12/10/sun-wants-to-change-the-planet/

Posted by: Andrew Schmitt at February 27, 2006 05:26 PM

A completely illogical argument. You have made a case for server costs coming down - "doomed" is a big word.

1. Hardware will become cheaper
2. Software will complement hardware in building resilient infrastructure.

The server is a transitional technology? In your next post, can you please post five reasons why a component-based architecture is better than a server-based one, keeping the money part aside?

Remember: 1 + 1 = 10 only if the base is 2

Posted by: Srinivasan at February 27, 2006 07:41 PM

I do not know if too many organizations can do what Google has done, but it sure allows for a "hardware as a service" model...your vision about utility computing. But just like software incumbents pay lip service to software as a service and it has taken new entrants like salesforce.com to drive that, it may take some new players to offer "hardware as a service". A new product for Google?

Posted by: vinnie mirchandani at February 28, 2006 07:56 AM

Nick,

As you mention in this post, the number of servers sold is still enormous. In your slides, you briefly touched upon the fact that most companies still feel the need to own the infrastructure.

In your book "Does IT Matter?", you also said:

"The likelihood of an early investment in a new information technology truly paying off—something of a long shot to begin with, given the risk involved—gets ever slimmer as time goes by. Today, most IT-based competitive advantages simply vanish too quickly to be meaningful."

As a result, not many CEOs, CIOs, or IT managers are looking to adopt the utility computing model, because they want to avoid the risks of early adoption.

How can young companies (without the deep pockets and ditto marketing budgets) overcome this problem?



Thanks,

Filip.

Posted by: Filip Verhaeghe at February 28, 2006 10:24 AM

The metrics that organizations should consider are:

1. Cost per processing unit
2. Volatility of computing demand

In the long term, enterprises will run a limited number of highly consolidated servers under virtualization software, achieving roughly 60%-70% utilization. Additional computing power will be available on demand to run complex statistical and marketing programs, provided by OEM vendors like Sun and IBM or by other providers using generic hardware.
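A rough sketch of the arithmetic behind that trade-off; only the 60%-70% utilization figure comes from the comment above, while the prices and demand profile are made-up assumptions:

```python
# Compare owning peak capacity outright with owning a consolidated base
# sized for ~65% average utilization and renting the occasional burst.
# All prices and the demand profile are illustrative assumptions.

peak_demand = 100          # server-equivalents needed in the worst month
avg_demand = 65            # typical month, i.e. ~65% of peak
cost_per_owned = 5_000     # assumed yearly cost to own and run one server-equivalent
cost_per_burst = 8_000     # assumed yearly-equivalent rate for on-demand capacity
burst_fraction = 0.10      # assumed share of the year spent above the owned base

own_peak = peak_demand * cost_per_owned
own_base_rent_burst = (avg_demand * cost_per_owned
                       + (peak_demand - avg_demand) * cost_per_burst * burst_fraction)

print(f"Own for peak:              ${own_peak:,.0f}/yr")
print(f"Own base + rent the burst: ${own_base_rent_burst:,.0f}/yr")
```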

Posted by: shiv at March 1, 2006 10:59 PM

An electric utility provides Watt-hours, a commodity. I apply that to my needs on my premises.

A computing utility provides logic, a commodity (whether sw or hw). I apply that to my data (a priceless artifact), but on whose premises?

Must I hand all my data off to the utility, so that I am essentially renting my own data as well as the logic?

If logic is truly a utility product, why can I not apply it to my needs on my own premises?

Posted by: Liam @ Web 2.5 Blog at March 2, 2006 01:35 AM

We are big believers in virtualization and utility-computing services - we actively provide Virtual Machine / Virtual Server hosting as a key part of our hosted services (www.voicegateway.com).

However, based on customer feedback and our marketing efforts, our pragmatic view is that the traditional server/datacenter approach will continue to exist and grow in parallel.

This argument is more akin to the "thin client" versus "thick client" battles of the '90s. Time has shown that the pendulum continues to swing back and forth.

Ever since the timesharing systems of the early '60s, we have seen this constant battle between distributed, individual resources (terminals, desktops, or servers) and massive centralized systems (timesharing, minicomputers, mainframes, network servers).

History has shown that the true market situation is a repeating sine wave, alternating between the two approaches.

New technologies (in their day) such as mini-computers, network computing, thin-client enablers (Citrix, Windows Terminal Server, WinCE, "PC terminals") and centralized computing (virtualization, grid computing, blade servers) act as the catalysts that create the inflection points, but the overall curve is a never-ending sine wave.

Posted by: Robert E Spivack at March 3, 2006 04:03 PM
