The GigaOM Interview: Kristof Kloeckner, CTO of IBM Cloud Computing

IBM’s (s ibm) first true cloud computing products, announced today, consist of workload-specific clouds that an enterprise can run on special-purpose IBM gear, that Big Blue can build on that same gear running inside a customer’s firewall, or that can run as workloads on IBM’s hosted cloud. The offering seems like a crippled compromise between the scalability and flexibility that true computing clouds offer and the control over their own infrastructure that enterprises seem to be demanding. I spoke today with the chief technology officer of IBM’s cloud computing division, Kristof Kloeckner, to learn more. Below is an edited account of our talk.

GigaOM: Let’s start with the hardware underlying IBM’s CloudBurst offering. How does this compare with what Cisco (s csco) is doing or other cloud hardware out there?

Kloeckner:  This first instance for test and development workloads is built on Intel-based (s intc) blades, but we anticipate other workloads might run on different platforms. We are actually working with the mainframe team for particular workloads. We have a prototype running that has p-series and z elements for SAP workloads.

GigaOM: So in IBM’s view the workloads dictate the hardware, rather than the idea of commodity servers being used to build out a general purpose cloud?

Kloeckner: We make the hardware selections based on the workloads you want to run, and we optimize the workload for you. But because it is in the cloud, what you see as a client, in terms of how each different cloud behaves, is entirely consistent.

GigaOM: Why focus on workload-specific clouds?

Kloeckner: One should really instantiate clouds with the workloads that you run on them in mind. Depending on what the delivery needs are you might have an analytics cloud separate from your collaboration cloud, and you might also decide you want to keep the test and development cloud in-house, and then expand into the public cloud for collaboration services.

GigaOM: Why focus on test and development clouds for your first products?

Kloeckner: When we looked at development and test, we saw it is crucial for accelerating the business value of IT, and we think that making dev and test more efficient, and accelerating the process through automation, is extremely attractive. About 30-50 percent of our clients’ resources are devoted to dev and test. It’s also a part of the infrastructure that’s not well managed. For example, after new apps are tested, in some cases the department doesn’t want to give up access to those resources because it may take a long time to get them back. Making test and dev dynamic can be instantly attractive.

GigaOM: Is it also a focus because other enterprises are already using public clouds like Amazon’s (s amzn) EC2 for those workloads?

Kloeckner: The general practice of dev and test is to have it in-house. There is no massive trend by organizations to bring that out into the public cloud infrastructure. We see individual organizations try it out, but enterprise development is mainly in-house today. And this is the first of a whole series of offerings to come. We’re going to look at analytics and business apps in the future, but we started with dev and test.

GigaOM: If the vision of workload-specific clouds proliferates, how do enterprises work across different clouds? Does IBM have a solution for that?

Kloeckner: We demonstrated some early solutions with Juniper’s (s jnpr) switching technology back in February, and we use our job scheduling software to schedule across domains. We have our efforts on the Open Cloud Manifesto, and we have given public demonstrations extending our service management software so it can manage workloads in a variety of clouds. We do not have a packaged solution yet, but we can work with clients to extend across multiple clouds.


20 Responses to “The GigaOM Interview: Kristof Kloeckner, CTO of IBM Cloud Computing”

  1. I think cost is not the only factor in migrating to the cloud, and IBM is thinking in terms of manageability, monitoring and cloud provisioning, which most public cloud vendors lack. Also, enterprise-side cloud migration is picking up, and I’m sure most are thinking in terms of doing a chargeback process to support all internal customers. Virtualization has made it easy to share and automate workloads to scale up or down, but the stacks on top of these virtualization solutions that provide the automation and management are the key to a successful virtualized environment.

  2. Not sure why my first comment didn’t get through, so I’m reposting it.
    The IBM announcement is a bit confusing, as it looks like they are selling virtualization-enabled equipment. While virtualization may be related to the cloud, IMO they are not quite equivalent.
    On the other hand, one of the main principles of the cloud is that you don’t need specific assumptions, as the cloud will scale up and down with almost linear costs. The IBM offer is based on an initial assumption about the workload, which to me seems to contradict the whole cloud idea.

  3. I must confess that I’m still a bit puzzled by this ‘announcement’. My impression is that IBM is offering equipment prepared for virtualization, which may be connected to the cloud, but not necessarily.
    One of the main assumptions of cloud computing is that you don’t need to make any assumptions beforehand, as the solution will allow you to scale up and down with almost linear costs. Well, IBM’s offering starts with an assumption about the workload, and this contradicts the notion of the cloud.


  4. IBM’s strategy is smart in that it offers solutions that enterprises will use, in part because test-dev and certain business processes aren’t very mission-critical. It appears to be a low-risk, relatively high-reward situation for everyone involved. That being said, I’m a little disappointed IBM didn’t follow through with its early cloud efforts and really push Tivoli-based Blue Cloud management software, attached to BladeCenters if need be. But that market is plenty crowded by now.

    What I wonder, however, is how much companies actually will save putting this infrastructure in-house, or even running it on specialized hardware in the cloud, and then paying IBM for support. And when it comes to test-dev, there really are great options in the cloud, including Amazon and Skytap, that don’t require putting anything in-house. Isn’t that the premise of cloud in the first place?

    • Stacey Higginbotham

      Derrick, I am also skeptical of how much money can be saved with in-house clouds, especially specialized clouds for different workloads. IBM maintains that the clients using this are so big that they already have the infrastructure to provide clouds that can scale for peak demand in-house, and that customers do see savings with workload specific clouds. That may be true, but I wonder if they could see more if they outsourced.

    • Derrick, Stacey … this almost gives the feel of a way for folks to say they’re “doing cloud” without actually changing much, or for that matter gaining much.

      A modest step, very cautious.

      Still, it’s great to see IBM continue to ruminate and take steps into the market – it’s great for all of us who care about the future of cloud computing for the enterprise.

      Derrick, I think that cloud computing in the enterprise may include the “not put anything here” approach that you suggest, but it will really gain legs when that is combined with the flexibility to scale up or down, the ability to scale far beyond individual-box and conventional-architecture limits, and the ability to have the whole mix of apps manage themselves, under your control (as you wish).

      I also think that self-operating commodity infrastructure is crucial to doing all of this at significantly reduced cost. If anyone simply moves a mainframe somewhere else and/or adds even more tools and services, how is that going to help much?

      There is tremendous promise for the enterprise, but it takes more aggressive moves than this. Of course, all of this takes a great cloud application platform …

      • @ Bob … Like Appistry’s platform, perhaps? ;-)

        Anyhow, IBM corrected me regarding CloudBurst. That is a prepackaged, fully virtualized blade rack designed for private cloud initiatives. Here are the specs:

        — Base Hardware Configuration:

        * 1 42U rack
        * 1 BladeCenter Chassis
        * 1 3650M2 Management Server, 8 cores, 24GB RAM
        * 1 HS22 CloudBurst Management Blade, 8 cores, 48GB RAM
        * 3 managed HS22 blades, 8 cores, 48GB RAM
        * DS3400 FC attached storage

        — Cloud Software Configuration:

        * IBM CloudBurst service management pack
        * IBM Tivoli Provisioning Manager v7.1
        * IBM Tivoli Monitoring v6.2.1
        * IBM Systems Director 6.1.1 with Active Energy Manager; IBM ToolsCenter 1.0; IBM DS Storage Manager for DS4000 v10.36; LSI SMI-S provider for DS3400
        * VMware VirtualCenter 2.5 U4; VMware ESXi 3.5 U4 hypervisor

        The price tag of around $220K seems a little steep, but the monthly payment does include hardware, management software and hypervisor. Networking will be added to the stack in future incarnations. So IBM is offering an internal cloud platform on top of the services, which have been getting most of the attention.