IBM’s first true cloud computing products, announced today, consist of workload-specific clouds that an enterprise can run on special-purpose IBM gear, that Big Blue can build on the same gear running inside a client’s firewall, or that can run on IBM’s hosted cloud. The offering seems like a crippled compromise between the scalability and flexibility that true computing clouds offer and the control over their own infrastructure that enterprises seem to be demanding. I spoke today with Kristof Kloeckner, chief technology officer of IBM’s cloud computing division, to learn more. Below is an edited account of our talk.
GigaOM: Let’s start with the hardware underlying IBM’s CloudBurst offering. How does this compare with what Cisco is doing or other cloud hardware out there?
Kloeckner: This first instance for test and development workloads is built on Intel-based blades, but we anticipate other workloads might run on different platforms. We are actually working with the mainframe team for particular workloads. We have a prototype running that has p-series and z elements for SAP workloads.
GigaOM: So in IBM’s view the workloads dictate the hardware, rather than the idea of commodity servers being used to build out a general purpose cloud?
Kloeckner: We make the hardware selections based on the workloads you want to run, and we optimize the workload for you. But because it is in the cloud, what you see as a client, in terms of how each different cloud behaves, is entirely consistent.
GigaOM: Why focus on workload-specific clouds?
Kloeckner: One should really instantiate clouds with the workloads that you run on them in mind. Depending on what the delivery needs are you might have an analytics cloud separate from your collaboration cloud, and you might also decide you want to keep the test and development cloud in-house, and then expand into the public cloud for collaboration services.
GigaOM: Why focus on test and development clouds for your first products?
Kloeckner: When we looked at development and test, we saw that it’s crucial for accelerating the business value of IT, and we think that making dev and test more efficient and accelerating the process through automation is extremely attractive. About 30-50 percent of our clients’ resources are devoted to dev and test. It’s also a part of the infrastructure that’s not well managed. For example, after new apps are tested, in some cases the department doesn’t want to give up access to those resources because it may take a long time to get them back. Making test and dev dynamic can be instantly attractive.
GigaOM: Is it also a focus because other enterprises are already using public clouds like Amazon’s EC2 for those workloads?
Kloeckner: The general practice is to keep dev and test in-house. There is no massive trend of organizations moving it out to public cloud infrastructure. We see individual organizations try it out, but enterprise development is mainly in-house today. And this is the first of a whole series of offerings to come. We’re going to look at analytics and business apps in the future, but we started with dev and test.
GigaOM: If the vision of workload-specific clouds proliferates, how do enterprises work across different clouds? Does IBM have a solution for that?
Kloeckner: We demonstrated some early solutions with Juniper’s switching technology back in February, and we use our job scheduling software to schedule across domains. We have our efforts on the Open Cloud Manifesto, and we have held public demonstrations extending our service management software so it can manage workloads in a variety of clouds. We do not have a packaged solution yet, but we can work with clients to extend across multiple clouds.