
Summary:

In just a few years, big data has turned from a buzzword and concept best left for large web companies into a force that drives much of our digital lives. Here are five technological trends that will change how data is processed and consumed going forward.

It’s time to rethink the who, what, where, why and how of big data. After a surge of important news in the past couple of weeks, we’re approaching a period of relative calm and can finally assess how the space has evolved over the past year. Here are five trends taking shape that should change almost everything about big data in the near future, including how it’s done, who’s doing it and where it’s consumed. Feel free to share the trends you’re seeing in the comments.

The democratization of data science

The amount of effort being put into broadening the talent pool for data scientists might be the most important change of all in the world of data. In some cases, it’s new education platforms (e.g., Coursera and Udacity) teaching students fundamental skills in everything from basic statistics to natural language processing and machine learning. Elsewhere, it’s products such as 0xdata that aim to simplify and scale well-known statistical-analysis tools such as R, or products such as Quid that hide the finer points of machine learning and artificial intelligence behind well-designed user interfaces and slick visualizations. Platforms such as Kaggle have opened the door to crowdsourcing answers to tough predictive-modeling problems.

Whatever the avenue, though, the end result is that individuals who have a little imagination, some basic computer science skills and a lot of business acumen can now do more with their data. A few steps down the ladder, companies such as Datahero, Infogram and Statwing are trying to make analytics accessible even to laypersons. Ultimately, all of this could result in a self-feeding cycle where more people start small, eventually work their way up to using and building advanced data-analysis products and techniques, and then equip the next generation of aspiring data scientists with the next generation of data applications.
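For a flavor of what these tools automate, here’s a minimal sketch, in Python with the pandas and SciPy libraries, of the kind of analysis a product like Statwing puts behind a point-and-click interface. The data file and column names below are hypothetical, invented for illustration.

    # Summary statistics plus a significance test between two groups --
    # the sort of analysis the new generation of tools hides behind a UI.
    # The CSV file and column names are hypothetical.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("signups.csv")                # hypothetical A/B test data
    print(df.groupby("variant")["revenue"].describe())

    a = df.loc[df["variant"] == "A", "revenue"]
    b = df.loc[df["variant"] == "B", "revenue"]
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print("p-value:", p)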

Hadoop’s MapReduce reduction

Hadoop’s days as a platform solely for running MapReduce jobs are officially over, and the change couldn’t have come fast enough. The evolution began with Apache Hadoop version 2.0 and its new YARN functionality, which allows for processing frameworks other than MapReduce, but it solidified with the spate of projects and products — including Cloudera’s very popular commercial distribution — that now include a SQL query engine or another method for interactive analysis running alongside MapReduce. That was a big item to check off the list of capabilities Hadoop must support, as data analysts need access to Hadoop data in a manner they understand.
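To see why that matters, consider the canonical word count. Even with Hadoop Streaming, which lets you write the mapper and reducer as plain scripts, MapReduce requires two programs plus a job submission. A minimal streaming sketch in Python:

    #!/usr/bin/env python
    # mapper.py -- emit (word, 1) for every word read from stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

    #!/usr/bin/env python
    # reducer.py -- sum the counts for each word (input arrives sorted by key)
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

In a SQL engine such as Hive or Impala, an analyst who already knows SQL gets the same answer with something like SELECT word, COUNT(*) FROM words GROUP BY word (against a hypothetical words table): no Java, no scripts, no batch job to babysit.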

Doing Hadoop-powered BI with Platfora

From this point on, it seems likely MapReduce itself will grow less important within Hadoop, much as it already has at Google, whose MapReduce framework was the model for Hadoop’s version. Presumably, the Hadoop community will focus more on using the platform’s distributed nature to support real-time processing and other new capabilities that make Hadoop a better fit in next-generation data applications. If Hadoop can’t fill the void, there are plenty of people working on other technologies — Storm and Druid, for example — that will gladly do so.

The HBase NoSQL database that’s built atop the Hadoop Distributed File System is a good example of what’s possible when Hadoop is freed from MapReduce’s constraints. Large web companies such as Facebook and eBay already use HBase to power transactional applications, and startups such as Drawn to Scale and Splice Machine have used HBase as the foundation for transactional SQL databases. More new products and projects, such as the graph-processing framework Giraph, will look for ways to leverage HDFS because it gives them a file system that’s scalable, free, relatively mature and, perhaps most importantly, tied into the ever-growing Hadoop ecosystem.
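The difference is easy to see in code. A MapReduce job scans data in batch; HBase serves individual reads and writes with low latency. Here’s a minimal sketch using the happybase Python client for HBase; the host, table and column names are made up for illustration:

    # Row-level reads and writes against HBase -- the low-latency access
    # pattern that batch MapReduce can't serve. Names are hypothetical.
    import happybase

    conn = happybase.Connection("hbase-host")      # hypothetical host
    table = conn.table("user_actions")             # hypothetical table

    # Write a single row -- no batch job required
    table.put(b"user123", {b"profile:city": b"Austin",
                           b"profile:last_seen": b"2012-11-04"})

    # Read it back immediately
    print(table.row(b"user123")[b"profile:city"])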

Coming soon to an app near you

Of course, all of this technological improvement is nothing without applications to take advantage of it, so it’s good news that we’re seeing a wide range of approaches for making this happen. One of these approaches is making big data accessible to developers, which is where startups such as Continuuity, Infochimps and even Precog (a big data BI engine, by nature) come into play. They make it relatively easy for developers to create applications that tie at least some functions into a big data backend, sometimes via a process as simple as writing a script or generating a snippet of code that programmers can insert directly into their applications.
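In practice, “writing a script” often means little more than an HTTP call per event, with the service handling storage and analysis. To be clear, the endpoint, API key and event schema in this sketch are invented for illustration and don’t belong to any particular vendor:

    # Hypothetical sketch of wiring an app into a hosted big data backend:
    # send one HTTP call per event and let the service do the heavy lifting.
    # The endpoint, key and schema below are invented, not a real API.
    import json
    import urllib.request

    event = {"user": "u-42", "action": "checkout", "value": 29.99}
    req = urllib.request.Request(
        "https://api.example-bigdata.com/v1/events",   # hypothetical endpoint
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "X-Api-Key": "YOUR_KEY"},             # hypothetical auth
    )
    urllib.request.urlopen(req)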

Another approach that’s picking up steam is simply to find a use case for big data (analyzing user behavior, network security, artificial intelligence, customer service) and turn it into a product or service that companies can buy and start using out of the box. These are things that early adopters such as Google, Facebook and others have had to build themselves but that others likely won’t have to. And everywhere you look, big data and data science are already being rolled into web and mobile applications, from deciding which products to buy to finding your long-lost relatives. Somewhere, somehow, everyone surfing the web or using a mobile app is benefiting from big data.

Machine learning is everywhere

Machine learning has had something of a coming-out party in the past year and is now so prevalent that it’s easy to forget how difficult it is to do well. It’s easy to see why machine learning is so popular, though: In an age where consumers (and advertisers) want more personalization, and where computer systems are overwhelmed with data flying at them from all directions, the prospect of writing models that continuously discover patterns among potentially countless data points has to be appealing.
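In code, “continuously discovering patterns” usually means online learning: instead of retraining from scratch, the model updates with each batch of data that arrives. A minimal sketch with scikit-learn’s SGDClassifier, using a simulated stream (a real system would read from a queue or log):

    # Online learning: the model improves incrementally as data streams in,
    # without ever holding the full data set in memory. Data is simulated.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")    # logistic regression via SGD
    classes = np.array([0, 1])
    rng = np.random.default_rng(0)

    for _ in range(1000):                     # each iteration = one mini-batch
        X = rng.normal(size=(100, 5))         # 100 new observations
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden pattern to discover
        model.partial_fit(X, y, classes=classes)

    X_test = rng.normal(size=(1000, 5))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
    print("accuracy:", model.score(X_test, y_test))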

Here’s a small sample of apps you’ve likely heard of, or that we’ve covered, that rely on machine learning to work their magic: Prismatic, Summly, Trifacta, CloudFlare, Twitter, Google, Facebook, Bidgely, Healthrageous, Predilytics, BloomReach, DataPop, Gravity. I could go on for days, I think.

Prismatic learning my interests

Now, it’s difficult to imagine a new tech company launching that doesn’t at least consider using machine learning models to make its product or service more intelligent. Heck, even Microsoft appears to be making a big bet on machine learning as the foundation of a new revenue stream. The technology to store and process lots of data is out there, and the brainpower looks to be coming along as well. Soon, there will be few excuses for building applications that don’t learn as they go: what users want to see, how systems fail or when customers are about to cancel a service.
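Take the last example. Given historical records of which customers canceled, a basic classifier can score active customers by cancellation risk. A minimal sketch, with hypothetical files and column names:

    # Churn-scoring sketch: learn from customers who already canceled, then
    # rank active customers by risk. Files and columns are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    history = pd.read_csv("history.csv")       # hypothetical past customers
    features = ["tenure_months", "support_tickets", "monthly_spend"]
    model = LogisticRegression().fit(history[features], history["canceled"])

    current = pd.read_csv("current.csv")       # hypothetical active customers
    current["cancel_risk"] = model.predict_proba(current[features])[:, 1]
    print(current.sort_values("cancel_risk", ascending=False).head())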

Mobile data as the engine for AI

Long before Skynet takes over and the machines turn on humans, our mobile phones will know what we want to do better than we do. That’s because until technologies like Google’s Project Glass actually make their way into the wild, our phones and the apps on them are probably the richest source of personal data around. And thanks to machine learning, speech recognition and other technologies, they’re able to make a lot of sense of what they’re given.

They know where we go, who our friends are, what’s on our calendars and what we look at online. Thanks to a new generation of applications such as Siri, Saga and Google Now trying to serve as personal assistants, our phones can understand what we say, know the businesses we frequent, the foods we eat and the hours we’re at home, at work or out on the town. Already, their developers claim such apps can augment our limited vantage point by automatically telling us the best directions to an upcoming appointment, or the best place to get our favorite foods in a city the app knows we haven’t visited before.
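Strip away the polish and the core inference is often just counting. A toy sketch of how an assistant app might infer your favorite haunts from a location log (the log here is hypothetical, and real apps obviously use far richer signals):

    # Inferring frequented places from a hypothetical check-in log --
    # a toy version of what assistant apps do with much richer data.
    from collections import Counter

    visits = [                                 # hypothetical location history
        ("2012-11-01", "Blue Bottle Coffee"),
        ("2012-11-02", "Blue Bottle Coffee"),
        ("2012-11-02", "Gym"),
        ("2012-11-03", "Blue Bottle Coffee"),
    ]

    favorites = Counter(place for _, place in visits)
    print(favorites.most_common(1))            # [('Blue Bottle Coffee', 3)]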

The race is officially on to see who can build the smartest app, pull in the most data sources and figure out how to best display it all on a 4-inch screen.

Feature image courtesy of Shutterstock user Sebastian Kaulitzki.

Comments

  1. Interesting write-up and very connected to the small-vendor world. That’s a good thing since many small vendors have a hard time finding their voice amid the shouting from the largest. Your piece brings up a very good point without stating it outright…that Big Data is not a theory anymore and is very much a given for success in 2012.

    The risk is that the opportunists will also show up. We describe that as, “The data gold rush is officially on”: http://successfulworkplace.com/2012/11/03/the-data-gold-rush-is-officially-on/

    The hard part is reading through the hype and seeing the value where it exists. Use cases, success stories around Big Data are becoming more important than ever.

  2. Phil Simon  Sunday, November 4, 2012

    Typo – Datahero, Infogram and Statwing are tyring to make analytics accessible even to laypersons

    ‘trying’

    1. Lloyd B Hopkins Friday, November 9, 2012

      so is NetNow

  3. Nicholas Paredes Sunday, November 4, 2012

    I think the biggest trend not mentioned is the term “big data” becoming as meaningless as “cloud computing”. Those vendors that can go beyond buzzwords and solve specific, high-value pain points will win over the long term…

    1. Well said. I prefer to say ubiquitous data as that’s the real challenge. Sure, volume, velocity, variety are on the rise, but that’s an incremental problem solved by technology. The challenge is to make systems that are harmonious with data’s ubiquity…systems that can find information and react to it. Now that’s interesting.

  4. Might be interesting to also look at ROOT (http://root.cern.ch), the “Big Data” framework developed at CERN that was used to find the Higgs signal in petabytes of experimental data. See:
    http://root.cern.ch/drupal/category/image-galleries/higgs-plots
    ROOT is entirely open source (LGPL).

  5. Fascinating look into the variety of products and services that are helping companies to understand or use big data. The area that we are focused on, a few steps down the ladder, hasn’t been covered.

    We are working with marketers to quickly uncover actionable insights and to do this we have to speak their language. Big data is rarely in their vocabulary. Any software needs to be incredibly easy to use and no data, programming or statistical knowledge can be required.

    Whilst data scientists and marketers can work together, we won’t see wholesale changes in consumer engagement until marketers are ubiquitously using data for themselves. This needs to happen in all companies, not just the digitally native ones. Here’s a blog article that was written for the ESCP Europe Creativity Marketing Centre (European Business School) http://bit.ly/PL9FbW

  6. The ongoing data science revolution is a promising development, but one that has been somewhat shortsighted so far. As this article indicates, there is a strong push to teach people about various analytical techniques and tools, which is good. We need people who understand statistics and can use software, like R.

    But how much attention has been given to critical thinking, problem formulation, content knowledge, and research methods? Excellent quantitative and programming skills are not very useful without the ability to ask the right questions and design the kind of study needed to answer those questions.

    Going forward, I think organizations that view data science as a complete, end-to-end research process will be much more successful than those that think of it merely as the analysis of large, pre-existing bodies of data.

    1. Melanie Jones Monday, November 5, 2012

      I agree that it’s important to know what the right questions are to ask. We have so much information at our fingertips, even without some of these new tools, that it’s always a good idea to take a step back and think… “What decisions can I actually make with this knowledge? Is this important?”

      And of course… the eternal favorite question of service providers and consultants (at least the good ones) – “Why?”

  7. Derrick, we are seeing an increase in businesses seeking specialized skills to help address challenges that arose with the era of big data. The HPCC Systems platform from LexisNexis helps to fill this gap by allowing data analysts themselves to own the complete data lifecycle. Designed by data scientists, ECL is a declarative programming language used to express data algorithms across the entire HPCC platform. The platform’s built-in analytics libraries for machine learning and BI integration provide a complete integrated solution from data ingestion and data processing to data delivery. More at http://hpccsystems.com

  8. As mentioned in some of the discussion above, I think a key issue is that big data is no longer just the domain of large companies. Companies of all sizes are now facing large amounts, and importantly a large variety, of data, yet many are not in a position to hire data scientists to help deal with their data problems.

    At BIME moving forward we are very excited about BigData analytics in the cloud – Google BigQuery offers an analytical database as a service that scales to petabytes of data. It means companies that previously would have needed very large infrastructure and an operational team can now analyze their data with only a web browser. http://bigquery.bimeanalytics.com/

  9. Dennis D. McDonald Tuesday, November 6, 2012

    The angle I am researching is not “big data” per se but how the data are generated in the first place, e.g., by government agencies that in the course of their legislatively mandated programs produce data that can be used by their target users as well as by others. More here: “A Framework for Transparency Program Planning and Assessment” http://www.ddmcd.com/outline.html

  10. Bringing features enabled by big data down to the hands of small businesses and consumers is absolutely the next step. At SRCH2 (http://srch2.com), we’re working on enabling small and mid-sized e-commerce retailers to offer high-end full-text search and the many features they are not yet tapping, including fuzzy search, rapid geo-search, real-time updates and much more. There’s still a whole lot left to do.

    As for opportunists, that is clearly a risk. One of the things we find is that many stretch the definition of “big data,” and many also offer tools that are warmed-over, modified versions of existing search software. These approaches are limited, as backwards integration of existing solutions leads to sub-optimal performance. If you have gone through the trouble of creating a big data stack, with several new elements built for speed and size, the last thing you want to do is have your search be the new bottleneck.

