Mapping Session notes: big data analytics in Europe


This post was written by Kris Tuttle, with additional contributions from Paul Miller.

We recently hosted our first GigaOM Structure event in Europe, which went very well. During the conference we held a Mapping Session to sketch out the trends in the market for big data analytics in Europe.

We invited about 40 GigaOM readers to join in an interactive discussion with a panel of analysts to assess the impact of Hadoop in Europe and the trends driving the adoption of new analytics tools. GigaOM Pro analysts Paul Miller and Kris Tuttle led the hour-long conversation.

During the Mapping Session, we discussed a number of trends influencing the market as well as obstacles holding back the adoption of next-generation BI tools. The panel and audience participants agreed unanimously that Hadoop is the new de facto platform for big data analytics projects, although they were keenly aware of its limitations. The audience was split roughly fifty-fifty between those who viewed big data and analytics as a general-purpose platform and those who had application-specific big data projects.

Audience participants were mainly concerned about the complexity in this space and the limited pool of skilled practitioners. Concerns have been raised that the industry will need far more data scientists to work effectively with these new tools. However, the industry is evolving quickly: Many of the projects focus on ease of use, and some leading practitioners are starting new companies to encapsulate their expertise in a for-profit offering.

Many in the room acknowledged that most problems would require a few different data analytics tools. Some, like HBase, are built on Hadoop, while others offer different data-processing models that operate on links, columns, or streams. They agreed that Hadoop is not the solution to every data-analytics project. The need for multiple tools is especially acute when the requirements call for real-time or near-real-time processing. And while Hadoop is moving in this direction, it is still very early.

The adoption of machine-learning approaches to big data analysis and interest in languages like R were fairly low, which stands in contrast to the popular movement in the U.S. around big data processing models.

In summary, the activity level around big data in Europe is high and is focused on newer, open-source foundational software like Hadoop. At the same time, it is clear that more and better tools are needed, and even then Hadoop will not suit all big data problems. More tools and technologies, along with shorter and simpler paths to customer success, are sorely needed.