
There’s been a lot of discussion lately about the real-time web and the problems it poses for incumbent search companies and technologies. Fast-moving trends and the availability of up-to-the-minute updates mean that purely historical answers are missing crucial information. Dealing with constantly growing information streams causes performance and scalability problems for existing systems and calls into question the mechanisms for compiling, vetting and presenting results to users.

While these challenges may sound new, game-changing performance and scalability problems are also being faced in the more traditional realm of data analytics and large-scale data management. Driven by network-centric businesses that track user behavior to a fine degree, there has been an explosion in the speed and amount of information that companies need to make sense of, and an increasing pressure on them to do so faster than ever before. What needs to be recognized is that the inadequacies of existing systems in these two seemingly different environments stem from the same source — infrastructure built to handle static data simply doesn’t scale to data that is continuously on the move.

The information stream driving the data analytics challenge is orders of magnitude larger than the streams of tweets, blog posts, etc. that are driving interest in searching the real-time web. Most tweets, for example, are created manually by people at keyboards or touchscreens, 140 characters at a time. Multiply that by the millions of active users and the result is indeed an impressive amount of information. The data driving the data analytics tsunami, on the other hand, is automatically generated. Every page view, ad impression, ad click, video view, etc. done by every user on the web generates thousands of bytes of log information. Add in the data automatically generated by the underlying infrastructure (CDNs, servers, gateways, etc.) and you can quickly find yourself dealing with petabytes of data.

Batch processing

The commonality between real-time web search and big data analytics problems is rooted in the need to continuously and efficiently process huge streams of data. It turns out that traditional data analytics systems (such as database systems and data warehouses) and search engines are a poor match for this type of processing. These systems are built using batch processing, which involves information being collected, processed and indexed, then made available for querying and analysis, often with a cycle time of a day or more. This is not unlike the way programming used to be done in the days of punch cards: create a card deck, wait for your turn, and come back the next day to see if it worked.

Batch processing, however, leads to two problems: First, and most obvious, is the time lag (“latency”) inherent in such processing. Batch processing systems typically have high startup costs and overheads, so efficiency improves as you increase the batch size. Larger batch sizes also make it easier to exploit the resources of ever-larger clusters of servers. In a batch world, throughput is improved by delaying the processing of information — exactly the opposite of what’s needed for real-time anything.

The second problem with the batch approach is that it wastes resources. For example, data warehouses typically ingest data through an ETL (Extract, Transform, Load) process that writes data into disk-based tables. Subsequent queries then hunt for that recently stored data and pull it back into memory to process it. All of this data movement is hugely expensive in terms of I/O, memory and networking bandwidth.
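To make the cost of this round trip concrete, here is a minimal sketch (all names and data are hypothetical, not from any particular warehouse product) contrasting the batch ETL path, which writes every record to disk only to read it all back for the query, with an in-stream aggregate computed as the records flow by:

```python
# Hypothetical sketch: batch ETL moves every byte twice (out to disk,
# then back in for the query); in-stream processing touches it once.
import json
import os
import tempfile

events = [{"user": i, "bytes": 100 + i} for i in range(1000)]  # fake log records

# Batch style: extract -> load into a disk-based table -> later query scans it
path = os.path.join(tempfile.mkdtemp(), "events.jsonl")
with open(path, "w") as f:
    for e in events:
        f.write(json.dumps(e) + "\n")      # "load" step: write all data out

total_batch = 0
with open(path) as f:
    for line in f:                          # "query" step: read it all back in
        total_batch += json.loads(line)["bytes"]

# In-stream style: compute the aggregate as records arrive; no disk round trip
total_stream = sum(e["bytes"] for e in events)

assert total_batch == total_stream
```

Both paths produce the same answer; the batch path simply pays twice in I/O and memory bandwidth to get it.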

The batch approach stems from viewing information as something that is stored rather than something that flows. The real-time web is a perfect example of where this way of thinking fails; the much larger information stream generated by all web activities is a less visible but even more extreme case.

A mindset shift

The big data problem has fed a surge of activity in data analytics systems. The flurry of new data warehousing and database vendors and the increasing adoption of the Google-inspired Hadoop stack are driven by these new data management challenges. While there have been some innovations in terms of efficiency in these systems (such as highly compressed columnar storage and smart caching schemes), the basic approach has been to rely on increasing amounts of hardware to solve ever-bigger problems. Such systems have not addressed the fundamental mismatch between batch-oriented processing and the streaming nature of network data.

The excitement around real-time web provides a great opportunity to reassess the way we think about information and how to make sense of it. While there will always be a need to store information and to search through historical data, many of the analysis and search tasks that users need to perform can be done in-stream. This type of processing has both efficiency and timeliness benefits. For example, real-time search and trend analysis of the tweetstream can be done continuously as tweets are being created.
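As an illustration of what "in-stream" means in practice, here is a hypothetical sketch (the class and its parameters are invented for this example, not a real product API) of trend analysis over a tweet stream: hashtag counts are maintained over a sliding time window as each tweet arrives, rather than batch-indexing the corpus first:

```python
# Hypothetical sketch of in-stream trend analysis: count hashtag
# mentions over a sliding time window, updated per arriving tweet.
from collections import Counter, deque


class TrendWindow:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.events = deque()   # (timestamp, tag) in arrival order
        self.counts = Counter()

    def observe(self, timestamp, text):
        """Process one tweet the moment it arrives."""
        for tag in (w for w in text.split() if w.startswith("#")):
            self.events.append((timestamp, tag))
            self.counts[tag] += 1
        # Expire tags that have slid out of the window
        while self.events and self.events[0][0] <= timestamp - self.window:
            _, old = self.events.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def trending(self, n=3):
        return self.counts.most_common(n)


tw = TrendWindow(window_seconds=60)
tw.observe(0, "loving this #band")
tw.observe(10, "#band tickets on sale")
tw.observe(90, "#movie night")   # the first two tweets have now expired
```

Each tweet is touched exactly once, and the current trends are available at any instant; nothing is stored unless the application chooses to.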

This doesn’t mean that the need for managing stored data is going away. In fact, most useful applications will need to combine streaming data with stored historical data, and in-stream processing is an extremely efficient way to compute metrics to be stored for later use. The point is that all processing that can be done in-stream should be. And such processing should not be limited just to the emerging “real-time” web. Applications that can map activity on the real-time web with information about past and present user activity on the traditional web will be perhaps the most useful of all. For example, a spike in tweets about a particular band could be used as a predictor of demand at an online music store. Conversely, the real-time web could be monitored for explanations for an observed spike in user activity patterns, video popularity or music downloads.
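The band-demand example above might combine the two worlds roughly like this sketch, in which an in-stream per-minute metric is compared against a baseline drawn from stored historical data (the class, names and numbers here are assumptions for illustration only):

```python
# Hypothetical sketch: flag a spike by comparing a cheap in-stream metric
# (mentions per minute) against a baseline from stored historical data.
class SpikeDetector:
    def __init__(self, baseline_per_min, threshold=3.0):
        self.baseline = baseline_per_min   # loaded from historical store
        self.threshold = threshold         # spike = threshold x baseline
        self.count = 0

    def observe(self):
        """One mention arrives on the stream."""
        self.count += 1

    def close_minute(self):
        """At each minute boundary, emit the metric and reset the counter."""
        spiked = self.count > self.threshold * self.baseline
        rate, self.count = self.count, 0   # rate could be stored for later use
        return rate, spiked


d = SpikeDetector(baseline_per_min=2)
for _ in range(10):
    d.observe()
rate, spiked = d.close_minute()   # 10 mentions this minute vs. baseline of 2
```

The per-minute rate emitted here is exactly the kind of compact metric that in-stream processing can compute cheaply and hand off to storage for deeper historical analysis.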

The key to enabling such applications is to move from the “data as history” mindset to one of “data as streams.” Fortunately, the real-time web is providing a great opportunity for all of us to rethink our approach to making sense of the ever-increasing amount of information available, no matter where it comes from.

Michael Franklin is the founder and CTO of Truviso, and a Professor of Computer Science at UC Berkeley.

  1. So are you suggesting having “living” modules that analyze one particular core stream and continuously spin off millions of other live streams, not necessarily stored unless someone comes along and wants to start to “mine” a virtual stream for data?

    I am not even sure what I’m talking about, lol.

  2. Great post, Michael. Event-oriented applications are indeed the new black, but the irony isn’t lost that in order to do interesting things with the stream, you often have to collect an ocean of data. As stream and big data processing technologies proliferate and are increasingly commoditized, it’s clear that each by itself offers distinct value, but the combination provides the unique and more valuable capacity to find real-time signal amidst oceans of noise. Overcoming the concurrency challenges and building great applications with the signals are where all of the fun is.

  3. Craig Kerstiens Sunday, July 12, 2009

    Essentially yes: treat the continuous or real-time data as a flow and operate on it as it comes in; then, if you need to store it and do deeper analysis, you can, but that’s separate from the real-time insight process.

  4. The real time data collected must be associated with the application of it. For example, the real-time traffic data collected by the Dash Express had real value to other users (as well as other consumers of traffic data).

    Twitter searching may mine trends and public opinion reactions to events, but it doesn’t serve well as a replacement for Google because there is too much data with too little context. Real-time information is important, but there is still value in organizing it and putting it in context. In military terms, first reports are often wrong – the “fog of war” effect.

  5. The real problem with real-time data analytics is that people cannot react to it fast enough. At the end of the day these streams of data will make no sense if companies do not have time to react to them. That process at times involves a number of departments, and coordinating an action across them takes time. It may force different organizational structures, but I am no org theorist, so I will leave it at that. As things stand right now, reacting to “real-time” data is like timing the stock market vs. making long-term bets based on sound analysis. The former may sound exciting, but the latter is more sustainable and produces better returns. Do you want to be a stock broker or Warren Buffett?

  6. [...] data in near real-time. But as Truviso founder and UC Berkeley CS Professor Michael Franklin recently noted, there are mountains of structured data generated by web apps that lend themselves to real-time [...]

  7. [...] Why Big Data & Real-time Web Are Made for Each Other. [...]

