Since Hadoop emerged in the mid-2000s, a whole ecosystem has sprouted up around it. The buzz still has not abated, nor has the need for further development of Hadoop as a platform. At GigaOM’s Structure:Data event on Thursday, entrepreneurs who build applications around Hadoop talked about the use cases they see most often and the next essential steps for the ecosystem’s growth.
Jonathan Gray, founder and chief technology officer of Continuuity, said he’s seen lots of Hadoop implementations for analytics applications for gaming and advertising purposes. “Those guys have tons of data,” he said. “(They) run off the back of analytics.” Popular use cases include attribution and retargeting of advertisements.
While use cases have popped up across many industries, there isn’t one Hadoop panacea. Different parts of the Hadoop ecosystem — such as HBase, MapReduce and so on — might fit different sorts of applications, said Muddu Sudhakar, vice president and general manager of the Pivotal Initiative’s Cetas cloud and big data analytics platform.
Sudhakar identified a few ways in which Hadoop needs to adapt further.
For starters, he said, “Hadoop needs to be virtualized,” to enable the sort of dynamic resource management popular in public clouds. For now, power consumption for Hadoop deployments across many servers can be very expensive. “We are living in the cloud world, so that whole thing needs to be solved,” he said.
Hadoop might be able to process large data sets, but it can take a while. That’s why Sudhakar called for “Hadoop high throughput, low latency.”
Look to Google to get a sense of where Hadoop capabilities need to go, said Omer Trajman of WibiData. “It’s amazing to look (when) Google sends you messages from the future,” he said. “The ultimate feature is I’m Feeling Lucky — just tell me what’s next. That’s what everyone else is going to need.”
Check out the rest of our Structure:Data 2013 live coverage here.