Steve Mills may not have received the CEO nod many thought he would get, but the SVP of IBM’s (s ibm) Software and Systems (read “all its products”) group still has a lot to say about the company’s strategy going forward. Monday, I sat down with him to discuss what IBM has in the works around big data, and he didn’t mince words in asserting IBM’s ambitious plans and slamming rivals.
For starters, Mills said that he expects IBM will do $16 billion in annual analytics revenue by 2015. That’s a big number — it would comprise about 16 percent of IBM’s nearly $100 billion in revenue for 2010 — but it has to be. IBM has spent more than $14 billion on analytics acquisitions over the past five years, including the purchase of Algorithmics for $387 million just last month.
His confidence seems based on the fact that IBM is always among the leading choices for the world’s largest companies when they deploy analytics systems. Despite the glut of startups promising the next great Hadoop-based product or BI tool, Mills said IBM is in a unique position because it’s able to work with and integrate all of a company’s data to derive real insights that span data silos. Further, he said, it has the services prowess to actually help customers figure out the questions they need to ask.
No fan of Oracle
Mills isn’t too impressed with Oracle (s orcl) either, even if it is now talking the talk around new-school technologies such as NoSQL and Hadoop to feed its high-end Exadata data warehouse. IBM already has a Hadoop product on the market, and he said it has been using non-relational database techniques for years, even if it hasn’t been marketing them as NoSQL. Oracle beats its chest about Exadata, he said, but “at the top end of the market, it’s IBM and Teradata (s tdc) battling it out.”
With regard to Oracle’s position as one of three companies actually creating microprocessors for enterprise servers (IBM and Intel (s intc) being the other two), Mills said Oracle’s Sparc processor is on its last legs. As evidence, he pointed to the use of Intel Westmere processors even within the high-performance Exadata system.
Mills also thinks that analyzing consumer sentiment from social media and other digital sources will be a major driver of big data deployments for customers with consumer-facing businesses, and IBM has been out in front of nearly everybody on that use case. Just last week, I wrote about an IBM project performing Twitter-based sentiment analysis of the World Series.
Servers must evolve
However, he added, big data and a general appetite for more computing are making life difficult even for IBM and Intel. As server density increases and puts more power — and more heat — into smaller spaces, chipmakers have to make tradeoffs between parallelism and frequency. Server makers have it even harder because they have to consider additional heat-producing and energy-sucking elements such as DRAM, SSDs and hard disks.
Although 3-D transistors and new materials will improve things, Mills said, we’re still in a silicon world. Even x86 alternatives such as ARM processors, which companies such as Calxeda and Nvidia (s nvda) are targeting for use in servers, aren’t immune to the laws of physics, he explained, even if they do offer impressive heat and power-consumption metrics at the moment.
On the Platform buy
Mills also commented on IBM’s recent acquisition of Platform Computing, which he said has a lot of overlap with the big data space. I noted at the time that IBM likely was interested in getting deeper into the accounts of Platform’s large financial services customers as they transition their grid-like systems to address cloud computing and big data, and Mills echoed that sentiment.
“Why’d we buy Netezza?” he asked rhetorically. “Same reason.” They were companies with good technology and solid customer bases that IBM thinks it can take to a broader market.