
Moore’s Law has enabled new applications by powering computing on an exponential price/performance curve. But increasingly, the proliferation of a new generation of large-scale applications is being constrained by another price/performance curve that hasn’t shown much improvement: IT operations and the cost of delivery. To create ever more sophisticated applications that can be delivered from public or private clouds, we have to ride a delivery cost curve that looks more like Moore’s Law. Otherwise, we’ll choke on our systems.

Timothy Chou, ex-president of Oracle On Demand, has written a book (“Cloud: Seven Clear Business Models“) that takes a fresh perspective on cloud computing. To him, the key promise of the cloud is to reduce the cost of delivering applications by improving IT operations. Traditional legacy applications such as those from Oracle or SAP have a fully loaded cost of delivery of $1,000-$1,500 per user per month. Several years ago, Oracle On Demand got that cost down to $50-$100, whether it was Oracle-hosted or customer-hosted. Salesforce.com has squeezed that cost down even further, to $7-$10, though admittedly just for the much lighter-weight CRM portion of the suite.

Private clouds are critical to the success of this new way of computing because trillions of dollars are locked up in the enterprise installed base. Some of that has to be brought forward, and more of it has to interoperate with the infrastructure and applications built in the private cloud. Some level of compatibility with what’s come before in the enterprise, starting with the management tools, is likely. Those tools, however, will have to manage more than just Infrastructure as a Service (IaaS). Ultimately, management tools will also have to measure, monitor and remediate application service problems in a highly automated fashion in order to achieve the industry’s price/performance improvements.

Tackling Infrastructure Without Wrecking QoS
For most businesses, the journey starts with standardizing, consolidating and virtualizing the very bottom layers of the stack: servers, storage and networks. Add in self-service so the application owners can bring online the required infrastructure themselves. Add a metering capability so IT can measure and charge the application owners for the exact amount of infrastructure they consume. At this point, IT has a private infrastructure as a service.
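To make the metering-and-chargeback step concrete, here is a minimal sketch of how IT could bill application owners for exactly what they consume. The unit rates, resource categories and class names are illustrative assumptions, not taken from any specific product:

```python
from dataclasses import dataclass

# Hypothetical unit rates for the billing period; real chargeback
# rates vary widely by provider and by resource class.
RATES = {"vcpu_hours": 0.05, "gb_storage_months": 0.10, "gb_transfer": 0.02}

@dataclass
class UsageRecord:
    """Metered consumption for one application owner over one billing period."""
    owner: str
    vcpu_hours: float
    gb_storage_months: float
    gb_transfer: float

def chargeback(record: UsageRecord) -> float:
    """Bill the application owner for the exact infrastructure consumed."""
    return round(
        record.vcpu_hours * RATES["vcpu_hours"]
        + record.gb_storage_months * RATES["gb_storage_months"]
        + record.gb_transfer * RATES["gb_transfer"],
        2,
    )

crm = UsageRecord(owner="crm-team", vcpu_hours=1200,
                  gb_storage_months=500, gb_transfer=250)
print(chargeback(crm))  # 1200*0.05 + 500*0.10 + 250*0.02 = 115.0
```

The point of this transparency is less the arithmetic than the incentive: once each owner sees an itemized bill, internal IT can be benchmarked against external cloud providers on a like-for-like basis.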

One of the big challenges with IaaS is to ensure more business-critical applications can meet their Quality of Service (QoS) requirements. Historically, QoS was ensured by heavy over-provisioning and by hardwiring each application from top to bottom with its own dedicated infrastructure. The whole point of IaaS is that applications can share infrastructure and are no longer hardwired to it.

Virtualization looks like it can make a new type of IaaS possible. Previously, vendors such as EMC and IBM specialized their individual products in one layer, selling products as storage or servers. Each was optimized to be best in class in price, performance, or cost of ownership. But no one delivered them with vertical integration similar to a mainframe so that they were collectively optimized for TCO — until now. Cisco’s Unified Computing System is the first hardware to be integrated across individual product categories and still deliver the two critical ingredients of IaaS. First, virtualization can still make individual layers look like pools of servers or storage. Second, Cisco automated how the hardware configures itself so that the software running on it thinks it’s hardwired for QoS.

A Cloud OS Becomes the New Management Layer
A cloud OS ultimately has to be able to see inside an end-to-end business service as well as understand how all the physical and virtual infrastructure fits together. Then it has to orchestrate how the infrastructure will support the QoS requirements the application owner requested, including availability, performance and security.

Only a small number of cloud operating systems will be technically and economically viable. No one vendor can write all the “hooks” for all the applications, so ISVs and corporate developers are likely to converge on one or a few platforms. The dark horse in this race is Microsoft: because Windows is the highest-volume deployment platform in the enterprise, the company may be able to draw developers into writing those hooks for System Center.


Two other scenarios are worth watching. Apps and infrastructure could remain as incredibly heterogeneous as they’ve always been in the enterprise. As a result, the cloud OS might be only partly software and largely custom code that systems integrators stitch together for each customer. Alternatively, the storage and networking vendors might resist a cloud OS layer that virtualizes and homogenizes each vendor’s differentiating functionality. Instead, they might try to “punch through” a thin cloud OS layer that is forever trying to keep up with storage or networking vendor-specific functionality. That certainly appears to be happening even between VMware and the storage layer it has partially put on top of EMC, NetApp, et al.

Hurdles Ahead for a Future Cloud OS
A single vendor software solution like Microsoft’s is most likely in small and medium enterprises that don’t have the heterogeneity and complexity of the large enterprise. Large enterprises may ultimately rely on a mixture of off-the-shelf products and custom integration to make their cloud operating system work.

Making a new cloud OS work requires changes to the people and processes managing IT operations. IT administrators have traditionally organized themselves into server, storage, network and application tribes. Dramatically reducing the cost of IT operations will require unprecedented levels of standardization, specialization and automation across these traditional administrative silos.

And finally, the transparency created by more visible service level agreements and charge-backs enables benchmarking internal IT against large and well-run external service providers. New organizations and processes will come from external pressure. Because IT operations can be metered and billed back to each application owner, IT itself will have to compete for resources with external cloud service providers.

George Gilbert is partner and co-founder of TechAlpha, a strategy consulting firm to the technology industry. Juergen Urbanski is managing director of TechAlpha.

Excellent article. I look forward to chatting with you tomorrow. I’d like to understand how we can break that fully loaded cost into some more specific chunks.

    Rodrigo Flores, CTO, http://www.newScale.com
