Commodity Computing, Still the King


In my VC role I focus on technology investments – specifically Internet infrastructure and services (networking, servers, storage) for both the enterprise and service provider markets. Although not as hot as Web 2.0, there are a lot of startups building interesting technologies in this area. To filter potential investments further, I’ve recently taken an approach where I only look at companies that leverage commodity computing. In other words, if a startup is focused on Internet infrastructure and services and is basing it on proprietary computing, there is a very high bar to clear before I become interested.

If you look at some of my previous posts here, I think you can see a pattern. CDNs leverage commodity computing for delivering Internet services, femto cellular products are going to connect to voice switches running on commodity hardware, Dell’s online backup and data protection is designed to offer personal service for their PC customers, Vyatta is using commodity computing to attack the proprietary networking vendors, and so forth.

In my opinion, for the foreseeable future, x86 commodity computing has won (I’m not an expert on Intel versus AMD, so I won’t pick a winner there). So why shouldn’t Internet infrastructure and services leverage the large efficiencies of the commodity compute market being driven by the PC manufacturers? We all know Moore’s Law means compute power gets cheaper and faster all the time, so why fight the feeling? In June 2005, Apple ditched the PowerPC processor and partnered with Intel, and last week Sun, home of the proprietary SPARC processors, announced that it too is partnering with Intel.

Today, the x86 architecture features quad-core processors that are quite powerful and cost-effective, and Intel recently announced a research project working on an 80-core chip that is essentially a supercomputer on a chip. That is a huge amount of compute power coming to the commodity compute market over the next few years.

I’ll absolutely agree that x86 is not destined to be the only processor on the planet – ARM does nicely in mobile devices and digital cameras. If you need several 10Gbps interfaces for a core Internet router, you need more than an x86 (maybe an Intel IXP network processor), and if you’re running a large financial system, you may want to pay up for a proprietary compute system. So, at the low and high ends of the market, I suspect there will remain a need for proprietary processors to achieve the desired performance. At the same time, I would point out that the segments of the market not served by commodity computing appear to be shrinking. For example, I’m seeing commodity compute combined with virtualization (see Liquid Computing, Platform Computing and others) take market share from high-end proprietary platforms.

I regularly hear arguments against commodity computing from companies looking for funding: the x86 is an old architecture burdened with legacy features and inefficiencies, the PCI bus is too slow, the x86 is expensive to operate because of heat and power, and software needs to be modified to take full advantage of multiple cores. While all of these are true to some degree in the near term, these issues are being solved by Intel (see the New York Times coverage) and AMD – if not in real products, then in marketing messages – and that might be enough to get the market to wait.
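That last objection – software has to change to exploit multiple cores – is worth making concrete. Here is a minimal sketch (my own illustration, not any startup’s code) of what the change looks like in Python: the same CPU-bound routine run serially on one core, then fanned out across all available cores with a process pool.

```python
# Sketch: the kind of modification needed to exploit multiple cores.
from multiprocessing import Pool

def checksum(block):
    # Stand-in for any CPU-bound per-item work (hashing, packet inspection, ...).
    total = 0
    for byte in block:
        total = (total + byte) % 65521
    return total

# Eight equal chunks of work.
blocks = [bytes(range(256))] * 8

# Serial version: one core grinds through every block.
serial = [checksum(b) for b in blocks]

if __name__ == "__main__":
    # Parallel version: identical work, spread across the machine's cores.
    with Pool() as pool:
        parallel = pool.map(checksum, blocks)
    assert parallel == serial  # same answers, computed concurrently
```

The work itself is unchanged; what moves is the orchestration – which is exactly why existing single-threaded software doesn’t speed up on a quad-core box for free.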

So, with all of that said, if I invest in a company that is in its early stages today, I’m willing to bet that by the time the company brings products or services to market, advances in processing power, new bus technologies and chassis solutions like ATCA will have nearly eliminated the need for proprietary architectures.

The question I ask startups all the time is – if you can buy an 80-core x86 processor for a few thousand dollars in the near future, how do you plan on using that power to build better Internet infrastructure or services?

Allan Leinwand is a venture partner with Panorama Capital and founder of Vyatta. He was also the CTO of Digital Island.


Paul Bissett

Why stop the analysis on commodity computing at an 80-core x86 box? We started developing a Web 2.0 application for geospatial services using this commodity computing model. We found that for our needs it was going to require >$300K in servers, storage, and initial bandwidth purchases just to get off the ground. While our pro forma margins supported this, pro forma is still pro forma, and I would prefer not to sink initial cash into wasting assets.

We have since built our service on Amazon’s EC2/S3 services, and have layered our own solutions for stable IP addressing, fail-safe monitoring, auto-scaling, and load balancing onto their services. Our initial commodity computing costs are expected to decrease by at least 10X. The beauty is that we pay for only the storage, computing power, and data transfer that we need. The auto-scaling and load balancing solutions mean that we do not have to purchase resources ahead of need, and cash is deployed in the enhancement of services, not in wasting assets.
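To make the math concrete – with hypothetical round numbers, not our actual figures – a back-of-envelope break-even sketch of upfront build-out versus pay-per-use:

```python
# Illustrative comparison: upfront capex vs. pay-as-you-go.
# All figures below are assumptions for the sketch, not real pricing.

upfront_capex = 300_000          # servers, storage, bandwidth commitments ($)

# Pay-per-use side: assume 10 small instances billed per instance-hour,
# plus a modest storage/transfer bill.
instances = 10
hourly_rate = 0.10               # assumed $ per instance-hour
hours_per_month = 24 * 30
monthly_compute = instances * hourly_rate * hours_per_month   # $720
monthly_storage_transfer = 280   # assumed storage + transfer ($)
monthly_total = monthly_compute + monthly_storage_transfer    # $1,000

# Months of pay-per-use spending before it matches the upfront buy.
breakeven_months = upfront_capex / monthly_total
print(breakeven_months)          # -> 300.0
```

At that assumed steady load it takes 25 years of rental to equal the upfront spend – and unlike the purchased gear, the rented capacity scales up and down with actual demand.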

For us, this is a better definition of commodity computing.

Geva Perry

Commodity computing has won on the PC end. But there are still challenges in achieving the same capabilities that one used to get from proprietary SMP architectures. Except now these challenges lie in the infrastructure software stack (OS, middleware, web servers).

Many applications have strict scalability, performance and reliability requirements that are non-negotiable — not just an issue of hardware costs.

So the challenge we are dealing with at GigaSpaces is how to achieve these requirements with commodity hardware.


Bubba Joe

For PCs, yes, x86 has won. But by volume, ARM has won.

For embedded (my area), x86 is a significant but certainly not dominant architecture – and I expect it will become less important over time.

OTOH, sticking to commodity hardware (and I would include at least ARM and MIPS along with x86) looks like a good rule of thumb for picking business plans in the networking/servers/storage areas.

Final note: I suspect that Intel’s 80-core chip isn’t x86-compatible. The reports I’ve seen don’t say either way.


Allan… when you are talking about the LAMP stack, commodity computing makes sense. But when your business depends on delivering the best possible computing performance, nothing can beat OS code optimized to the largest possible extent for a particular processor or hardware architecture. This is why Solaris on SPARC and AIX on IBM POWER processors still rule the SPEC benchmarks.


John Powers: If Windows had won, the LAMP stack would not have grown stronger day by day, Mozilla would not have survived, the iPod would not have come into existence, Sun would not have shown quarterly profits for the last two quarters, the Mac user base would not have grown, Novell would not have shifted its internal users to Linux, and .NET would not be a poor third or fourth in comparison to Java, LAMP, Python or Ruby. The list goes on and on.

Windows has NOT won – Windows is LOSING EVERY DAY!

Allan Leinwand

Hi Ajay – point well-taken. Maybe the antithesis to commodity and open compute is “closed compute” architectures?

Hi John – I agree that x86 being the winner should surprise no one… but you’d be surprised how many startup companies think they can beat Moore’s Law starting from scratch today.

Thanks for the comments guys!

John Powers

OK, x86 has won. I think that news surprises nobody. The other market observation that is equally obvious to many observers outside the Silicon Valley VC echo chamber is that Windows has also won.

So the true commodity computing play is finding a way to get tens or hundreds or thousands of x86s running Windows to work together. Why fight the feeling?

And now the shameless plug — check out one such solution at


In his rush to embrace commodity computing, Allan has wrongly named its antithesis ‘proprietary’ computing (perhaps hoping to align commodity computing with the open source software camp and their fight against proprietary software). This mistake is made glaringly obvious when he labels Sun’s Sparc CPU architecture as proprietary, when in fact it is one of the few open hardware architectures out there (along with the Power architecture, which he also lumps in with the ‘proprietary’ vendors by implication). Intel and AMD, the commodity vendors that he embraces, are both proprietary vendors. The term that he should be using instead of ‘proprietary’ is something like specialized or low-volume.

Comments are closed.