Written by Alistair Croll, vice president of product management and co-founder of Coradiant

Virtualization and on-demand computing are giving companies new reasons to worry about code efficiency.

Once upon a time, lousy coding didn’t matter. Coder Joel and I could write the same app, and even if mine consumed 50 percent of the machine’s CPU while his consumed a mere 10 percent, it wasn’t a big deal. We each paid for our own computer, rack space, bandwidth, and power.

Joel’s code wasn’t measurably “better” than mine (or vice-versa) as long as the apps were the same to the end user. Any advantages were hidden by the step function of physical hardware: Computing costs didn’t grow linearly with the amount of processing consumed.
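
To make that step function concrete, here is a minimal sketch with hypothetical numbers: on dedicated hardware, each app occupies a whole server no matter how little of its CPU it actually uses, so the wasteful version costs just as much as the efficient one.

```python
import math

def servers_needed(instances: int, cpu_fraction_per_instance: float) -> int:
    """Whole physical servers needed to host the given app instances."""
    return math.ceil(instances * cpu_fraction_per_instance)

# One copy of each app on dedicated hardware: the step function hides the gap.
print(servers_needed(1, 0.50))  # my app:   1 server
print(servers_needed(1, 0.10))  # Joel's:   1 server, same rack, same bill
```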

Modern applications, however, are changing in several important ways:

  • One, virtualization lets applications scale across multiple machines. Many companies are consolidating their server infrastructures, and decommissioning hundreds of machines in the process. A 2006 Yankee Group study of 700 firms estimated that 76 percent had already deployed server virtualization in the data center or planned to do so.
  • Two, power is the limiting factor for many data centers. A typical Google (GOOG) data center — which puts 10,000 computers into 30,000 square feet — is likely located in a wet state, near a power source. For example, the new Google data center site in The Dalles, Ore., was chosen largely for its proximity to hydroelectric power.
  • And three, Software-as-a-Service platforms let us run sophisticated applications on someone else’s infrastructure. Salesforce.com’s (CRM) recently unveiled Force.com platform is a good example of this, and Amazon’s (AMZN) EC2 and S3 provide lower-level computing and storage on demand.

These three changes mean that bad code matters. Now, with ten instances of my application installed in a data center, I’m using five machines — while Joel only needs one. I’m five times as bad for the planet as Joel.

This hits my wallet, too: Amazon’s Elastic Compute Cloud (EC2) charges 10 cents per instance-hour, plus storage and bandwidth costs, for a “typical” server.
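
As a back-of-the-envelope illustration, the sketch below turns those figures into a yearly bill. The 10-cents-per-hour rate comes from the paragraph above; the always-on assumption and the idea of billing per machine-equivalent are simplifications for the example, not Amazon’s actual pricing schedule.

```python
HOURLY_RATE = 0.10            # USD per instance-hour, as quoted above
HOURS_PER_YEAR = 24 * 365     # assume the instances run around the clock

def annual_cost(machine_equivalents: int) -> float:
    """Rough yearly compute bill for a given number of always-on machines."""
    return machine_equivalents * HOURLY_RATE * HOURS_PER_YEAR

print(annual_cost(5))  # my app, five machine-equivalents: about $4,380 a year
print(annual_cost(1))  # Joel's app, one machine-equivalent: about $876 a year
```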

Inefficiency doesn’t just come from writing bad code. Modern applications are built from several tiers of abstraction. The latest web 2.0 app is a layer cake of complexity: Adobe (ADBE) Flex inside an AJAX framework, dynamically rendered by a Java app that runs within a monitoring layer like Glassbox, loaded onto a Sun (JAVA) JVM, running on a virtualized OS that is itself managed by a VMware (VMW) hypervisor.

That’s a lot of distance — and computing overhead — between my code and the electricity of each processor cycle. Architecture choices, and even programming language, matter.
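
One way to make that kind of difference visible is simply to measure it. The micro-benchmark below is a generic illustration rather than anything from the article: two functions answer the same membership questions, but one scans a list while the other hashes into a set, and the gap shows up directly as CPU time.

```python
import timeit

haystack_list = list(range(10_000))
haystack_set = set(haystack_list)
needles = (9_999, 1_234, -1)

def search_list() -> int:
    # Linear scan of the list for every lookup: O(n) each time.
    return sum(1 for n in needles if n in haystack_list)

def search_set() -> int:
    # Hash lookup for every needle: effectively O(1) each time.
    return sum(1 for n in needles if n in haystack_set)

for fn in (search_list, search_set):
    print(fn.__name__, timeit.timeit(fn, number=1_000), "seconds")
```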

To anyone who’s worked on mainframes, this should look familiar. Administrators relied on tools like IBM’s (IBM) Workload Manager to measure processing usage in shared environments, and billed usage back to a company’s departments. But where mainframe operators had lots of instrumentation, in today’s environment each layer is hidden from those beneath it. This dramatically limits visibility.

We have a common language for most of the variables behind an application: gigabytes of storage, vertical inches of rack space, kilowatts consumed, and so on. But we don’t have a good way of talking about processing workload. Some applications have their own domain-specific metrics — Microsoft (MSFT) Exchange, for example, uses megacycles per mailbox — but there’s no universal term for describing efficiency across the myriad platforms and frameworks of web 2.0.
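
For illustration only, here is one shape such a shared metric could take: CPU time consumed per unit of useful work, which could be extended to watt-hours per request once the power draw of the hardware is known. The function name and the sample numbers are hypothetical, not an existing standard.

```python
def cpu_seconds_per_request(cpu_seconds: float, requests_served: int) -> float:
    """Hypothetical platform-neutral efficiency figure: CPU time per request."""
    return cpu_seconds / requests_served

# e.g. one core kept 50% busy for an hour while the app serves 90,000 requests
print(cpu_seconds_per_request(0.50 * 3600, 90_000))  # 0.02 CPU-seconds per request
```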

In 2004, Michael S. Malone argued that we need to think about the overall efficiency of an electronic system, rather than a simple doubling of processing power.

As we move towards shared, on-demand infrastructure, we need to find ways to talk about “green” code. Until then, we’re at the mercy of bad coders and heavy applications.

Alistair Croll is a co-founder of Coradiant. He writes about online user performance on Coradiant’s corporate blog and tries to out-guess the future at bitcurrent.com.

Comments

  1. Alistair -

    You’re only partially correct. Amazon’s EC2 uses fixed-size virtual machines that are roughly half a CPU core in size, so your code and Joel’s would still cost the same to run and would still have the same impact on the environment.

    This is why at 3tera we allow users to provision resources in amounts as small as 1% of a CPU core and 32MB of memory. On our platform your argument would be true.

  2. Another issue is that programming skills and tools aren’t scaling up to the increasing number of cores available per CPU.

    With cores sitting idle, I guess apps aren’t using the CPU to its full capacity or doing as much work per watt as they could.

  3. I have to disagree with you – coding isn’t all that matters. Plenty of sites with poor coding get ranked ._.

Comments have been disabled for this post