
Summary:

Big data has become a big buzzword. But for companies to benefit from it, they need more than technology – they need a plan. Consultant Chad Richeson has developed seven steps to help business leaders make sure their big data strategy delivers the results they want.


No longer the new technology on the block, big data continues to generate significant buzz.  Technologies such as Hadoop and HBase are seeing rapid growth, analysts are experimenting with new techniques and approaches, and business leaders are adapting their business models to rely more on the power of big data. McKinsey calls big data the “next frontier” for business, with the potential to transform business in the same way the Internet did over the past 15 years.

To take advantage of that potential, business leaders need to know what steps to take in order to make maximum use of their data asset. Business success with big data is not just about choosing the right cloud technologies or hiring smart data scientists – it’s about creating a business-centric approach that connects a company’s data to its business strategy, enables continual improvement, and follows through to impact processes, margins, and customer satisfaction.

In my experience with big data, I have developed seven steps that can help business leaders drive more success with their big data initiatives.

1. Create a strategy for your data

Your data needs a strong strategy, one that connects to and underpins your business strategy and also integrates with department-level accountabilities. Develop a plan for where you want to be at each milestone, define your future capabilities clearly, and describe how your data capabilities will be utilized. Then hold your users accountable to where and how they will use the new capabilities, and what business impact they will drive.

For example, if you can use big data to improve in-store sales, you need to not only work with store managers to define capabilities that connect to their strategy, but also ensure the managers are held accountable to using the data capabilities correctly and delivering the intended impact. Doing so connects both your and their goals to the overall business strategy, helps create more usable capabilities, and ensures any needed iterations will be done jointly. A well-conceived data strategy will give you the most bang for the buck on your data investment.

2. Design for agility

Big data systems are just that…big, which means they tend to be inflexible. A great BI system, by definition, will cause a business to change, which in turn will require the BI system to change. Thus, your systems need to adapt quickly to keep pace with your business. 12- or 18-month release cycles are appropriate for certain parts of your system, but 3- or 6-month cycles may be appropriate for others. Carefully analyze each component of your big data system, and design for the amount of agility you need.

You may decide to build higher levels of automation into the layers of your stack that change slowly, while reserving configuration-based approaches for layers of your stack that need to change quickly. In general, the top layers of your stack (e.g., user interfaces and reporting tools) need to be more agile than the bottom layers of your stack (e.g., data collection and storage), but many exceptions to this rule exist. Only careful analysis and understanding of your current and future uses of data will enable you to make the right decisions on agility. Designing for agility will enable your big data investment to keep pace with, and even lead, your business.

3. Understand latencies

Latency is a challenge in traditional BI systems, and big data only amplifies the problem. Big data solutions tend to be architected first as batch systems, with lower-latency capabilities addressed afterwards. Don’t save latency for last – analyze your key use scenarios in terms of latencies, and connect them clearly to business drivers. Focus on delivering the right latency for each need, including the value being driven, and let those needs drive your design. Certain low-latency needs may require bypassing your big data system temporarily, sharing data directly between systems in order to deliver specific scenarios.

For example, if your customers tend to interact with system A and system B in parallel or in quick sequence, these two systems may need to share data directly. The data can then be written into the big data system in time to be used by other systems. Delivering data on a real-time or near-real-time basis can be very expensive; thus, it’s better to think in terms of “right-time” data targeted to each need.

Describe latency requirements in detail, and ensure the business justification is sound. Understanding latencies will enable you to deliver data exactly when it’s needed, while keeping costs under control.
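The “right-time” idea above can be sketched in a few lines of Python. The path names, latency figures, and relative costs below are illustrative assumptions for the example, not anything prescribed in this article: the point is simply that each use case declares the latency it actually needs, and the pipeline picks the cheapest delivery path that still satisfies it.

```python
# Delivery path -> (typical latency in seconds, relative cost).
# These names and numbers are illustrative assumptions.
PATHS = {
    "direct_share": (1, 100),       # system-to-system, expensive, near real time
    "streaming":    (60, 10),       # minutes-fresh
    "batch":        (3600 * 6, 1),  # hours-fresh, cheapest
}

def pick_path(required_latency_s):
    """Return the cheapest path whose latency still meets the stated need."""
    candidates = [(cost, name) for name, (lat, cost) in PATHS.items()
                  if lat <= required_latency_s]
    if not candidates:
        raise ValueError("no path fast enough; revisit the requirement")
    return min(candidates)[1]

print(pick_path(5))          # e.g., fraud checks need seconds
print(pick_path(300))        # e.g., personalization tolerates minutes
print(pick_path(3600 * 24))  # e.g., executive reporting tolerates hours
```

Notice that only the use case with a genuine seconds-level requirement pays for the expensive direct path; everything else rides a cheaper tier.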

4. Invest in data quality and metadata

Data quality in any system is a constant battle, and big data systems are no exception; however, big data systems require much more automation and advance planning.  You should first ensure that data quality is not treated as a project or initiative, but as a foundational layer of your data stack that receives adequate resourcing and management attention.  Second, build in multiple lines of defense – from data mastering (where, for example, you are creating customer accounts) to data collection (where you are recording all of that customer’s interactions with you) to metadata (where you are organizing and dimensionalizing the data to aid in future reporting and analysis).  Third, automate both the processes that identify and elevate data quality issues, and the measurement and reporting of data quality progress.  Empower your data quality team with tools that solve problems at high scale, such as diagnostic and workflow tools.  Efficient data quality practices will enable your big data system to earn its place as a trusted input for key business processes.
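As a minimal sketch of the “automate the processes that identify data quality issues” point, the Python snippet below runs rule-based checks over customer records and aggregates the results into a measurable report. The field names and validation rules are assumptions invented for the example; a production system would drive such rules from metadata and run them at scale.

```python
from datetime import datetime

def check_record(record):
    """Return the list of data-quality issues found in one customer record.
    The fields and rules here are illustrative assumptions."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if "@" not in record.get("email", ""):
        issues.append("malformed email")
    try:
        datetime.fromisoformat(record.get("created_at"))
    except (TypeError, ValueError):
        issues.append("bad timestamp")
    return issues

def quality_report(records):
    """Aggregate issue counts so data-quality progress can be measured over time."""
    counts = {}
    clean = 0
    for rec in records:
        issues = check_record(rec)
        if not issues:
            clean += 1
        for issue in issues:
            counts[issue] = counts.get(issue, 0) + 1
    return {"records": len(records), "clean": clean, "issues": counts}

records = [
    {"customer_id": "c1", "email": "a@example.com",
     "created_at": "2012-03-01T10:00:00"},
    {"customer_id": "", "email": "not-an-email", "created_at": "yesterday"},
]
print(quality_report(records))
```

Reporting the issue counts, not just fixing individual records, is what lets management track whether quality is improving release over release.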

5. Get good at prototyping

The data sizes in most big data systems are too large to work with all at once, so it’s typically wiser to build small-scale prototypes to iron out the wrinkles and ensure you are meeting customer requirements. If you are building complex data integrations, online algorithms, or user interfaces, prototyping allows you to learn at a smaller and less costly scale. What’s more, prototypes can be shared early with your user base, which generates valuable feedback as well as excitement.

Prototyping requires somewhat unique skills that you will need to build and refine over time. Prototypers need to be able to move quickly, figure out new designs and technologies, understand user scenarios, actively solicit feedback, and not be afraid to fail. They need to be creative in their approach to solving problems, while still rooted in sound data mechanics. Why is prototyping better than wireframes or feature lists? Since prototypes are “real,” your users will give you better feedback; at the same time you will also understand some of the challenges you will face as you build the full-scale version. Building a strong prototyping capability will help you increase innovation and speed, while reducing the cost of mistakes.

6. Get great at sampling

Sampling will save you a lot of time if you learn how to do it correctly. There are many use cases for which sampling is an effective alternative to using full census (100 percent) data. Certain needs such as creating personalized experiences for each customer, or calculating executive accountability metrics, are not appropriate for sampling. But for many other needs, sampling is a viable option.

For example, understanding product or feature performance, looking at patterns and trends over time, and filtering for unexpected anomalies can typically be done on sampled data. One approach is to collect 100 percent of the data, but do most of your analysis on samples, and then confirm important conclusions on the full data set. Once you establish process flows to pull sampled data into standard tools such as Excel and/or SQL, you will see analyst productivity increase substantially, which will save you time and money and increase the job satisfaction of your analysts.
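The collect-everything, analyze-a-sample, confirm-on-the-full-set workflow can be sketched as follows. The synthetic data, distribution, and sample size are illustrative assumptions standing in for a large fact table:

```python
import random
import statistics

# Synthetic "full census" data standing in for a large fact table
# (the distribution and sizes are illustrative assumptions).
random.seed(7)
full_data = [random.gauss(50.0, 12.0) for _ in range(200_000)]

# Do most of the analysis on a small simple random sample.
sample = random.sample(full_data, k=5_000)
sample_mean = statistics.fmean(sample)
sem = statistics.stdev(sample) / len(sample) ** 0.5

# A 95% confidence interval around the sampled estimate tells decision
# makers how much precision they are trading for speed.
ci_low, ci_high = sample_mean - 1.96 * sem, sample_mean + 1.96 * sem

# Confirm the important conclusion against the full data set.
full_mean = statistics.fmean(full_data)
print(f"sample mean: {sample_mean:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"full mean:   {full_mean:.2f}")
```

Attaching a confidence interval to every sampled estimate is exactly the discipline a qualified statistician brings: it makes the limitations of the sample explicit rather than hidden.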

To get great at sampling, you need to do three things: first, develop standard sampled data sets that help your analysts address large swaths of business questions, updating them regularly; second, make sure you have at least one highly qualified individual (i.e. a statistician) who can ensure the data is being sampled correctly and results are not misapplied; and third, educate decision makers on the benefits and limitations of sampling so they can get comfortable making decisions with sampled data. The effective use of sampling increases productivity while delivering equivalent business value.

7. Ask for regular feedback

Big data is a learning process, both in terms of managing the data and in driving business value from its contents. Your internal user base is a valuable source of feedback and integral to your learning and development process. Your prototyping program will be a source of feedback, but you should also survey your users and benchmark your progress over time. Areas such as usability, data quality, and data latency are all categories within which users will give you feedback. In addition, you should ask for ad hoc feedback from every level of your stakeholder organizations so they see your commitment to making their business better.

As your data asset’s reputation grows, your stakeholders will give you more and better feedback, which will allow you to develop integrated goals and roadmaps, and drive more business benefit as a result. Regular feedback ensures your big data system is tightly integrated into business decision making, so it can play a lead role in business improvement.

Following the above steps will help you build more effective big data capabilities, saving you time and money, and driving maximum ROI for your business. The big data frontier is here; breaking through it requires an understanding of which steps will help you drive the most impact.

Chad Richeson is the CEO of Society Consulting, a Seattle-based analytics and technology consulting firm that provides business-driven data strategies, solutions, and analytics for its clients. Before joining Society Consulting in 2011, Chad spent 12 years at Microsoft driving analytics and big data solutions for Bing, MSN, Mobile and AdCenter.

Image courtesy of Flickr user Susan NYC.

  1. Good points. The first point, “Create a strategy,” is right on the money. Putting aside the well-publicized big data projects, a normal big data project in an organization has its own pitfalls and opportunities for success. Prototyping is essential, modeling and verifying the models are essential, and above all, have a fluid strategy that can leverage this domain … I have updated my blog “Top 10 Steps to a Pragmatic Big Data Pipeline” [http://goo.gl/Mm83k] with Chad’s observations.
    Cheers

  2. Good summary, but it misses the already available solution of “in-memory” BI systems, which quickly make most of the above factors look dated!

  3. Hello Chad – Really interesting and insightful post. Here are my quick points on your 7 steps:

    1 – Completely agree with the alignment with business goals, which emphasizes the importance of collaboration between IT, analysts and the business community.
    2 – Agility requires the ability to manage complex and numerous analytical models that rely on lots of variables and massive data volumes. Not only in the creation of the initial model, but the ability to modify models as business factors change.
    3 – Good point about right time, because it certainly helps to load-balance processing and analytics work based on actual response needs, rather than moving everything to real time.
    4 – Data quality is certainly key as is the ability to derive a trusted single view of critical master data, but it’s also interesting to consider analytics as part of the cleansing or discovery process, vs. just thinking about data quality as an enabler for analytics.
    5 – The prototyping recommendation is in-line with the reality of the analytics lifecycle – that is creating a sandbox environment that allows for the development of multiple models using an iterative approach.
    6 – It’s important to realize that one size does not fit all – depending on the situation, it may make sense to analyze the total dataset, which is now possible with advancements in in-database, in-memory and grid technologies. But I agree that sampling is a legitimate approach in some scenarios; then it’s all about getting the right sample. We are actually starting to use analytics up-front to create an intelligent relevance filter to determine the most important information to include in the analytics.
    7 – Analytics and information is all about driving decisions, so delivery of those analytics to the point of contact is key. That means embedding analytics in operational systems – which increases the need for feedback and model monitoring and tuning.

    I’ve blogged about many of these topics at http://blogs.sas.com/content/datamanagement/

    Thanks,

    Mark Troester
    CIO/IT Thought Leader & Strategist
    SAS
    Twitter @mtroester

  4. Sadly, many businesses do treat data quality as a project. It’s like taking a shower at the beginning of the month and saying “Now I’m clean.”

  5. I would say number 4, “Invest in data quality and metadata,” is critical; without quality data it is really hard.

  6. 8. Read Why Big Data Won’t Make You Smart, Rich, Or Pretty | Fast Company http://bit.ly/xwp5KF by @DanielWRasmus

  7. Using data from a business perspective is more important than simply having data. Our key work starts when we are able to use certain data in a beneficial way for our business. One should not need to be a data scientist to achieve this goal; all it takes is a good businessperson.

